210260951
pes2o/s2orc
v3-fos-license
The landslide risk analysis based on human activity using the ArcGIS method

This research aims to map the risk level of landslides based on human activity in Guntur Macan Village, West Lombok. The method used in this study was descriptive quantitative research, mapping and assessing human activity to classify the risk level of landslides. The research variables were cropping pattern, excavation and slope cutting, pond making, drainage, construction, population density, and mitigation. The results show that the risk level of landslides caused by human activity can be divided into three zones. Zone A has a landslide risk level of 2.1 (moderate), Zone B of 2.0 (moderate), and Zone C of 1.3 (low).

Introduction

Guntur Macan is a village in West Lombok that is at risk of landslides. Based on the local regulation (PERDA) No. 11/2011 on the RTRW of West Lombok, Guntur Macan has been designated a protected area because most of the land is hilly with slopes of 40% [1]. The rainfall in this village is relatively high (2000-3000 mm). Moreover, the soil is of clay and sandy types, which makes it highly vulnerable to landslides. The landslides in Guntur Macan Village in 2016 destroyed many of the villagers' properties. Earlier, in 2015, landslides had also killed residents and destroyed their properties. These recent landslides were not the first in Guntur Macan Village: landslides also hit the village in 1974 and 2000 [2]. The Minister of Public Works Regulation No. 22/PRT/M on the Guideline for Spatial Planning of Landslide Areas states that spatial planning needs to consider the aspect of space usage by taking into account the ecosystem balance and the assurance of social welfare, through assessment of the spatial structure and space pattern of a landslide area based on its typology and level of vulnerability, as well as maintaining the suitability between implemented activities and designated area functions [3]. The risk level classification is determined by two criteria, namely the natural physical aspect and the human aspect [4]. Physical, social, economic, and environmental factors all play a role in susceptibility to landslide risk [5][6]. The landslides that happened in Guntur Macan Village were strongly influenced by community activities: 270 families (33%) live in the hills. The hills are used not only as a residential area but also for farming and stockbreeding. Furthermore, logging by the villagers in the hilly area depletes the tree vegetation that is supposed to conserve the soil and water. The large number of human activities carried out without consideration of environmental sustainability is one of the reasons for the increasing frequency and intensity of landslides in the area [7]. Thus, the aim of this research is to analyze the risk level of landslides based on human activities in Guntur Macan Village.

Types of Research

Based on the introduction, the purpose of this research is "to map the risk level of landslides based on human activity in Guntur Macan Village". The method used in this research was descriptive quantitative, mapping and assessing human activity to classify the risk level of landslides.

Research Focus

The research was located in Guntur Macan Village, Gunung Sari Subdistrict, West Lombok, with a total area of 2,749 ha, consisting of 7 hamlets: Guntur Macan, Barat Kokoq, Ladungan, North Poan, Pancor, Apit Aik, and Southern Poan.
Analysis

The analysis steps in this research were divided into several parts:

a. Collection of primary data for the human activity indicators to determine the risk level classification. The data were displayed in a human activity map for each indicator.

b. Determination of the typology of disaster-prone areas based on zonation, considering the character and physical condition of the terrain:
- Zone type A: mountain slopes, hillsides, and riverbanks at an altitude above 2,000 meters above mean sea level, with slopes of more than 40%.
- Zone type B: mountain feet, foothills, and riverbanks at a height of 500-2,000 meters above mean sea level, with slopes between 21% and 40%.
- Zone type C: highlands, lowlands, river banks, or river valleys at elevations of 0-500 meters above mean sea level, with slopes between 0% and 20%.

c. Assessment of each zone using the weights of the predetermined criteria indicators. Each indicator is rated as follows:
- 3 (three) if it is considered to have a significant impact on landslides
- 2 (two) if it is considered to have a moderate impact on landslides
- 1 (one) if it is considered to have little impact on landslides

d. Assessment of the landslide risk level from the human activity aspect, done by combining the 7 indicator scores into an average value between 1 and 3. The risk level of each zone was then determined using the following criteria:
- high landslide potential: risk level between 2.4 and 3.0
- medium landslide potential: risk level between 1.7 and 2.39
- low landslide potential: risk level between 1.0 and 1.69

Typology of Guntur Macan Landslide Disaster-Prone Areas

Geographically, Guntur Macan Village is a hilly area that forms a "U" shape between the hills (Figure 1). The altitude of the area varies between 57 and 513 meters above sea level. Two rivers flow through the village between the hills and form a basin. 18.11% of the area is used as a residential area, while 67.94% is used as farming area spread evenly across the hills and the lowland. Guntur Macan Village has varied hillside slopes; most of the slopes have a 40% tilt, covering around 105.68 ha (30.19%). The area with a slope of 0-8% covers 43.63 hectares, 8-15% covers 44.75 hectares, 15-25% covers 89.19 hectares, and 25-40% covers 89.19 hectares. According to the guidelines for the spatial planning of landslide areas, Guntur Macan Village is classified into 3 zones (Table 1), namely:

a. Type A zones, located on slopes, hillsides, and river banks with slopes of more than 40%. Based on the character of the area, which is hilly, the zone has contour variations including foothills, hillsides, and hilltops. Hillsides with more than 40% tilt are designated as type A landslide-prone areas, while the area above the hill with a slope of 0-8% is conditionally included as type A with the following considerations:
- The uphill area is adjacent to hillsides with an extreme slope.
- The uphill area has characteristics similar to the hillside in terms of landslide potential.
- The hillside and the uphill area can trigger a landslide onto the area below.
Based on the landscape characteristics, all of the hamlets in Guntur Macan Village are part of the type A zone, especially the North Puan Hamlet and the Southern Puan Hamlet.
b. Type B zones are foothill areas dominated by slope tilts of 21% to 30%, although in some residential areas the contour is relatively flat. In addition, the type B zone includes the area located in the basin between two hills, which has a relatively flat slope of 0-15% but is directly adjacent to slopes above 40%. Thus, Pancor, Apit Aik, Guntur Macan, and Barat Kokoq are included in the type B zone.

c. Type C zones are lowland areas with slopes of 0-15%, which characterize the Barat Kokoq and Apit Aik Hamlets.

Cropping Pattern

The cropping patterns are designed based on the existing land types. The type A and type B zones are mostly used for plantations. The hillsides are planted with taproot plants such as sengon and teak, which are very effective in preventing landslides; hence, the area has a low sensitivity to landslide risk. However, the cropping pattern is undermined by community activities in which the logs, mainly sengon and teak, are harvested for sale. The legal logging in Guntur Macan Village, which is supported by the government, has been depleting the trees and disturbing hillside stability. On the other hand, the type C zone is land utilized for paddy fields, planted with fibrous-rooted plants such as rice and maize. Although these plants are highly sensitive to landslides, they are planted on a relatively flat slope of 0-8%; hence, the risk sensitivity is low.

Excavation and Slope Cutting

Cutting slopes for cultivation, residential areas, and road construction, and activities such as excavation and mining, can technically increase soil movement. In zone type A, excavation and slope cutting are carried out on natural or artificial hillsides to build houses and roads. However, the construction was done without attention to the soil and rock layers, and without calculating slope stability. Hence, the excavation/cutting intensity is considerably high. In zone type B, the intensity of excavation and slope cutting is also high; excavation and slope cutting are carried out for the construction of houses and roads, as well as for brick production on the hillside, which increases the vulnerability. On the other hand, the sensitivity of zone type C is low due to the absence of excavation and slope cutting activity in the area; furthermore, the area has a relatively flat slope.

Manufacturing and Placement of Ponds

Making ponds on steep slopes affects the potential for landslides because the water alters the physical and mechanical properties of the soil, making it soft and loose; hence, the soil strength decreases and the soil moves more easily. Based on the field identification, pond making was found only in zone type C on flat slopes of 0-8%. In zone types A and B there is no pond-making activity; thus, the risk level is low.

Drainage

In zone type A, the drainage system is inadequate and covered by soil, and there has been no effort to repair it either by the government or by the community. Hence, based on the drainage aspect, this region has high risk sensitivity. In contrast, the drainage in zone types B and C is quite adequate; moreover, there are efforts to improve it. Thus, the risk level in both zones is medium.

Construction

In zone type A, construction is ongoing with considerably low loads that do not exceed the soil bearing capacity.
The construction includes roads, houses, elementary school buildings, prayer facilities, and health care facilities such as Posyandu (Integrated Healthcare Center). Although construction activity is relatively infrequent, it will increase as the need for shelter grows. Thus, the load on the hillsides increases, and with it the landslide risk level. Even though the villagers build on flat ground, the hill slopes of more than 40% make the terrain steep; hence, the risk level is still high.

In zone type B, the construction and load capacity are considerably small. The construction is focused on roads, houses, elementary schools, health care facilities such as Posyandu (Integrated Healthcare Center) and the village health center, and governmental facilities such as village offices. Based on the field observation, the construction has exceeded the load capacity of the land, because most of the residents build their houses on slopes (more than 40% tilt) without calculating hillside stability. This area is highly vulnerable to landslides compared to the other zones; hence, the risk level is high.

In zone type C, the construction has a relatively small load and has not exceeded the standard. The construction is focused on houses, roads, elementary school facilities, prayer facilities, and health care facilities such as Posyandu (Integrated Healthcare Center). The landslide sensitivity of this area is medium with respect to the construction aspect because the area is considerably flat.

Population Density

Based on the total area, the population density is considerably low, at less than 20 residents/hectare. However, the population density was also calculated based on the built-up (living) area. Based on the number of inhabitants in each zone and the number of houses in Guntur Macan Village corresponding to the zone typology, interpreted from satellite imagery and a primary survey with the help of ArcGIS, the population density is 34 residents/ha in zone A, 37 residents/ha in zone B, and 49 residents/ha in zone C. Consequently, the risk level of each zone is considered medium because the population density of all three zones falls within the range of 20-50 residents/ha.

Mitigation

After the landslides in Guntur Macan Village that killed residents and destroyed their properties, the West Lombok government has been trying to reduce the disaster risk by coordinating mitigation activities. The government has formed a DESATANA (disaster-resilient village) community in Guntur Macan Village. This forum is expected to be the forerunner of disaster management in Guntur Macan Village, which is vulnerable to landslides (Figure 2 and Table 2). The National Disaster Management Agency has installed a natural disaster early warning system (EWS) in the local hilly area. The EWS installation is a collaboration between the National Disaster Management Agency and a team from Gadjah Mada University. There are four EWS mounting spots: two spots for ground crack detection, one spot for land slope analysis, and the last for rainfall. The detection system is expected to give an early warning to the community when a disaster comes. Consequently, all the zones have a low risk level for the mitigation indicator.
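To make the scoring procedure from the Analysis section concrete, the following is a minimal Python sketch that averages the seven indicator ratings of a zone and maps the mean onto the low/medium/high classes defined earlier; the indicator scores in the example are hypothetical, not the values obtained in the field survey.

```python
# Minimal sketch of the risk-level scoring described in the Analysis section.
# Indicator scores (1 = low, 2 = moderate, 3 = high impact) below are
# hypothetical illustrations, not the surveyed values.

INDICATORS = ["cropping_pattern", "excavation_slope_cutting", "pond_making",
              "drainage", "construction", "population_density", "mitigation"]

def risk_level(scores: dict) -> tuple[float, str]:
    """Average the seven indicator scores and classify the zone."""
    missing = set(INDICATORS) - set(scores)
    if missing:
        raise ValueError(f"missing indicator scores: {missing}")
    mean = sum(scores[name] for name in INDICATORS) / len(INDICATORS)
    if mean >= 2.4:
        label = "high"
    elif mean >= 1.7:
        label = "medium"
    else:
        label = "low"
    return round(mean, 2), label

# Example with made-up scores for a hypothetical zone:
example = {"cropping_pattern": 2, "excavation_slope_cutting": 3, "pond_making": 1,
           "drainage": 3, "construction": 2, "population_density": 2, "mitigation": 1}
print(risk_level(example))  # -> (2.0, 'medium')
```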
Conclusion

The landslide risk level in Guntur Macan Village based on the human activity assessment is divided into three zones as a result of the analysis of seven indicators, namely cropping pattern, excavation and slope cutting, pond making, drainage, construction, population density, and mitigation. Zone A has a risk level of about 2.1 and zone B of 2.0, both classified as medium level, while zone C has a risk level of 1.3, categorized as a low landslide risk level.
2019-11-22T00:59:44.102Z
2019-11-14T00:00:00.000
{ "year": 2019, "sha1": "7baa044636a8f275d7163a1527f12c750af3eb97", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/674/1/012019", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "36a8ae82850a4aabf951f0b5b730de9733fb605e", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics", "Geology" ] }
245170246
pes2o/s2orc
v3-fos-license
Sustainable Development of Magnetic Chitosan Core–Shell Network for the Removal of Organic Dyes from Aqueous Solutions

The wide use of alizarin red S (ARS), a typical anthraquinone dye, has led to its continued accumulation in the aquatic environment, which causes mutagenic and carcinogenic effects on organisms. Therefore, this study focused on the removal of ARS dye by adsorption onto a magnetic chitosan core–shell network (MCN). The successful synthesis of the MCN was confirmed by ATR-FTIR, SEM, and EDX analysis. The influence of several parameters on the removal of ARS dye by the MCN revealed that the adsorption process reached equilibrium after 60 min, pH played a major role, and electrostatic interactions dominated ARS dye removal under acidic conditions. The adsorption data were described well by the Langmuir isotherm and a pseudo-second order kinetic model. In addition to the preferential adsorption of hydrophobic dissolved organic matter (DOM) fractions onto the MCN, the electrostatic repulsive forces between the DOM previously adsorbed onto the MCN and the ARS dye resulted in lower ARS dye removal. Furthermore, the MCN could easily be regenerated and reused for at least five cycles with more than 70% of its original efficiency. Most importantly, the spent MCN was pyrolytically converted into N-doped magnetic carbon and used as an adsorbent for various dyes, thus establishing a waste-free adsorption process.

Introduction

Dyes and pigments are often used in the textile, paper, food, plastics, and medical industries; hence, they are closely associated with human life [1]. Approximately 70 million tons of synthetic dyes are produced annually to meet the global demand. Of these, 10% is discharged as wastewater and thus accumulates in the environment, posing a potential environmental threat [2]. Dyes can be poisonous to aquatic organisms and cause significant harm to human organs, including the kidney, liver, brain, reproductive organs, and central nervous system [3]. Alizarin red S (ARS) is a typical anthraquinone dye and is extensively used in textile industries as well as in the medical industry as a staining agent; consequently, this has resulted in severe pollution of water with ARS [4]. In addition, it has been found that ARS is a recalcitrant and durable dye that can induce oxidative damage in organisms and hence is mutagenic and carcinogenic [5]. Thus, removing ARS from contaminated water is very important for the environment.

Many methods, including biological methods [6], advanced oxidation processes [7], coagulation and flocculation [8], adsorption [9], membrane processes [10], and ion exchange [11], are nowadays used to remove dyes from wastewater [12]. Notably, adsorption is one of the most efficient dye removal technologies owing to its easy handling, cost-effectiveness, adsorbent flexibility, regeneration ability, etc. Further, it can successfully remove dyes from contaminated water in a short amount of time and results in no secondary contamination of the water body [13]. The adsorbent is a critical component of the adsorption process and has received a great deal of attention as a result. Although activated carbon is commonly recommended as an adsorbent to remove dyes and other toxins, its use is often limited by its high cost and carbon footprint [14,15]. Nowadays, many research studies have been devoted to the sustainable development of eco-friendly adsorbents for water treatment [16].
In this respect, chitosan, a biopolymer, has attracted significant attention owing to its abundance in nature, non-toxicity, and surface functionality [17]. Chitosan and its derivatives play a strong role in the field of water and wastewater treatment due to their polycationic nature and the ubiquitous amino and hydroxyl groups present in their structures [18]. Chitosan-based adsorbents have been shown to have excellent removal capabilities for anionic dyes and heavy metals [19,20]. Nevertheless, common problems encountered in adsorption processes involving very fine chitosan-based adsorbents are the recovery and reuse of the spent adsorbent. To solve this problem, magnetically separable chitosan has been produced and used as an adsorbent in water treatment [21][22][23]. Magnetic chitosan-based materials have been found to be effective in removing several metal ions, such as mercury [24], copper, zinc, lead, cadmium [25,26], and chromium [27]. Besides, various magnetic chitosan-based adsorbents, such as magnetic chitosan/graphene oxide composite [28], magnetic carboxymethyl chitosan aerogel [29], magnetic chitosan/quaternary ammonium salt graphene oxide composite [30], magnetic chitosan nanocomposites modified by graphene oxide and polyethyleneimine [31], pectin/chitosan magnetic sponge [32], alginate beads impregnated with magnetic chitosan@zeolite nanocomposite [33], magnetic chitosan with ARS as imprinted molecules [34], and ethylenediamine-modified magnetic chitosan nanoparticles [35], have been tested for the removal of various dye molecules. Furthermore, they have also been applied to remove persistent organic pollutants [36] and inorganic pollutants, such as fluoride [37], from water. However, to the best of our knowledge, the removal of ARS dye molecules by adsorption onto a magnetic chitosan core-shell network is yet to be investigated.

Therefore, in the current study, we developed a magnetic chitosan core-shell network (MCN) and examined its adsorption potential towards the removal of ARS dye from aqueous solution under different conditions. In addition, to explain the adsorption mechanism and kinetics, the adsorption data were fitted to different isotherm and kinetic models. Furthermore, the desorption of ARS dye molecules from the exhausted MCN was approached by both chemical and thermal treatment. Overall, the obtained results revealed that the MCN could potentially be used as an adsorbent for the effective removal of ARS dye with subsequent reusability.

Materials

Medium molecular weight (190-310 kDa) chitosan powder and glutaraldehyde (25% solution in water) were obtained from Sigma-Aldrich. Glacial acetic acid (99.8%) was also used. The MCN was obtained by a two-step process.

Step 1-Synthesis of magnetic (Fe3O4) nanoparticles: FeCl3·6H2O and FeSO4·7H2O salts were dissolved in 250 mL of reverse osmosis (RO) (Osmose 190, Dennerle, Germany; <0.2 mg/L dissolved organic carbon; conductivity < 20 µS/cm) quality water such that the molar ratio of Fe3+ to Fe2+ was 2 to 1. This salt mixture was then added to a three-necked round-bottom flask under a nitrogen atmosphere, heated to 70 °C, and stirred constantly at 300 revolutions per minute (rpm). After 20 min, 2 M NaOH (100 mL) solution was added dropwise, and the resulting black precipitate was stirred constantly for 30 min, cooled down to room temperature, filtered, and washed with RO water several times until neutral pH was reached. Eventually, the Fe3O4 nanoparticles were dried in a hot air oven at 110 °C for 5 h and used in step 2.

Step 2-MCN preparation: A co-precipitation method was chosen for preparing the MCN, as shown in Scheme 1. In this method, 2 g chitosan was first dissolved in 3% acetic acid solution.
Then, 2 g Fe3O4 nanoparticles were dispersed in the chitosan solution and stirred continually at 300 rpm at room temperature. After 1 h, 1 M NaOH was added dropwise to this mixture, yielding the magnetic chitosan precipitate. To this precipitate, 3% glutaraldehyde was added, and stirring was continued overnight. The resulting MCN was washed with RO water several times and then dried for 14 h in a hot-air oven at 110 °C. Finally, the dried MCN was milled using a planetary ball mill (Fritsch, PULVERISETTE 7) and used for the characterization and adsorption experiments.

Scheme 1. Schematic illustration of the MCN synthesis.
Characterization

The MCN was characterized using environmental scanning electron microscopy (ESEM) coupled with an energy dispersive X-ray analyzer (EDS) (Quanta 400 FEG, FEI, Munich, Germany). Samples were coated with gold before the SEM-EDS analysis. Attenuated total reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) (Bruker, ALPHA-Platinum, Ettlingen, Germany) was used to identify the functional groups present in the chitosan, the Fe3O4 nanoparticles, and the MCN. The pH of zero-point charge (pHzpc) was determined using the solid addition method reported elsewhere [38]. The pH of the solution was measured using a pH electrode, and different solution pH values were realized by adding 100 mM NaOH/HCl.

Batch Adsorption Studies

Adsorption of ARS dye onto the MCN was studied in synthetic model water (SMW, pH 6.8 ± 0.1), composed of 10 mM ARS dye, 30 mM CaCl2, 50 mM NaHCO3, and 20 mM MgSO4. Batch experiments were performed by varying the contact time (0-120 min), MCN dosage (0-50 mg), and pH conditions (pH 3-10). All the adsorption experiments were carried out in duplicate at room temperature and the average was reported. In a typical adsorption process, a known amount of MCN was added to a glass flask containing 50 mL SMW solution and shaken with a mechanical shaker (Laboshake, Gerhardt, Germany) at 150 rpm. After a period of time, the MCN was magnetically separated, and the solution was filtered through a 0.45 µm cellulose acetate membrane (Ahlstrom GmbH, Germany) to obtain a dust-free filtrate. The filtrate was then analyzed for residual ARS dye at the maximum adsorption wavelength of 260 nm using a UV-Vis spectrophotometer (PerkinElmer, Lambda 20, Germany). The adsorption capacity and the removal efficiency were determined according to the following equations:

q = (C0 − Ce) · VL / mA

Removal (%) = (C0 − Ce) / C0 × 100

where q is the adsorption capacity (mg/g); C0 and Ce are the initial and equilibrium ARS dye concentrations (mg/L); VL is the volume of the ARS dye solution (L); and mA is the mass of the adsorbent (g).

To identify the suitable adsorption isotherm and kinetic model, adsorption experiments were performed with different initial ARS dye concentrations (10, 20, 40, 80, and 100 mg/L) and with different contact times (0-60 min) at room temperature. The adsorption data obtained were fitted to the Langmuir [39] and Freundlich [40] isotherm models. The non-linear forms of the isotherm models are presented below.

Langmuir non-linear isotherm:

qe = qm · KL · Ce / (1 + KL · Ce)

where qm is the maximum adsorption capacity (mg/g), KL is the Langmuir isotherm constant (L/mg), qe is the equilibrium adsorption capacity (mg/g), and Ce is the equilibrium concentration (mg/L).

Freundlich non-linear isotherm:

qe = KF · Ce^(1/n)

where n is the adsorption intensity and KF is the Freundlich isotherm constant (mg/g(L/mg)^(1/n)).

Furthermore, the adsorption data were fitted to the following non-linear Lagergren pseudo-first order [41] and pseudo-second order [42] kinetic models.

Pseudo-first order:

qt = qe · (1 − exp(−K1 · t))

where qt is the adsorption capacity (mg/g) at time t, K1 is the pseudo-first order rate constant (1/min), and t is time (min).

Pseudo-second order:

qt = K2 · qe^2 · t / (1 + K2 · qe · t)

where K2 is the pseudo-second order rate constant (g/(mg·min)).
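For illustration, the following Python sketch shows how such non-linear fits could be carried out with scipy.optimize.curve_fit; the (Ce, qe) and (t, qt) arrays and the initial guesses are hypothetical placeholders, not the measured data of this study.

```python
# Sketch of non-linear fitting of the Langmuir isotherm and pseudo-second order
# kinetics with SciPy; the data arrays below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    """qe = qm*KL*Ce / (1 + KL*Ce)"""
    return qm * kl * ce / (1.0 + kl * ce)

def pseudo_second_order(t, qe, k2):
    """qt = K2*qe^2*t / (1 + K2*qe*t)"""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

# Hypothetical equilibrium data (mg/L, mg/g) and kinetic data (min, mg/g)
ce = np.array([2.0, 5.0, 12.0, 30.0, 45.0])
qe = np.array([40.0, 75.0, 110.0, 140.0, 150.0])
t  = np.array([5.0, 10.0, 20.0, 40.0, 60.0])
qt = np.array([9.0, 12.0, 14.5, 16.0, 16.8])

(qm_fit, kl_fit), _ = curve_fit(langmuir, ce, qe, p0=[150.0, 0.1])
(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t, qt, p0=[17.0, 0.01])

print(f"Langmuir: qm = {qm_fit:.1f} mg/g, KL = {kl_fit:.3f} L/mg")
print(f"PSO: qe = {qe_fit:.1f} mg/g, K2 = {k2_fit:.4f} g/(mg*min)")
```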
To study the influence of dissolved organic matter (DOM) on ARS dye adsorption onto the MCN, a flower soil extract that has been used in previous works [43] was chosen as a model surrogate for DOM. A series of stock solutions with different DOC (dissolved organic carbon) content was prepared in SMW, treated using MCN, and filtered through a 0.45 µm cellulose acetate membrane. Then, the filtrate was analyzed to determine the ARS dye removal by UV-Vis spectrophotometry (λ = 260 nm), DOC, and UV254. DOC measurements were performed using a total organic carbon analyzer (Shimadzu, TOC-L). From the DOC and UV254 values, the specific ultraviolet absorbance (SUVA) was also determined using the following equation:

SUVA (L/(mg·m)) = UV254 (1/cm) / DOC (mg/L) × 100 cm/m

SUVA analysis can be used to estimate the amounts of hydrophilic and hydrophobic compounds in the water. If the SUVA value of the filtrate is ≥4 L/(mg·m), this indicates the presence of hydrophobic, aromatic, and high molecular weight DOM fractions; if it is ≤3 L/(mg·m), this indicates the presence of non-humic, hydrophilic, and low molecular weight DOM fractions [44].

Desorption and Reuse Experiment

Experiments to desorb the ARS dye from the spent MCN were performed using a batch method. Owing to electrostatic repulsion forces, an alkaline medium has been found to be very effective in desorbing anionic dyes from adsorbent surfaces [34,45]. In this study, 0.1 M NaOH was therefore chosen as an eluent to desorb the ARS dye from the spent MCN [34]. After adsorption equilibrium was achieved, the MCN was magnetically separated, treated with 0.1 M NaOH solution for 60 min, washed with RO water several times until pH 7 was reached, and subsequently dried in a hot-air oven at 110 °C for 3 h. The dried MCN was then ready for use in the next cycle of adsorption and desorption experiments as described above. This procedure was carried out five times; ARS dye removal was determined after each cycle.

Thermal Treatment of the Spent MCN

After the fifth reuse cycle, the ARS dye-loaded MCN was air-dried, placed in a boat-shaped crucible, and heated from 22 °C to 700 °C in a rotary kiln furnace at a ramping rate of 6.3 °C/min under a nitrogen atmosphere. Subsequently, the obtained N-doped magnetic carbon material was used as an adsorbent for the removal of various dyes in aqueous solutions.

In the ATR-FTIR spectrum of the Fe3O4 nanoparticles, characteristic peaks were assigned to the Fe-O stretching mode and to δ-OH and γ-OH stretching vibrations, which confirm the presence of the most thermodynamically stable iron oxide (goethite) [46]. The characteristic peaks corresponding to both chitosan and the Fe3O4 nanoparticles are also seen in the FTIR spectrum of the MCN, but with a slight peak shift. In particular, the peak corresponding to the stretching vibration of Fe-O is shifted from 546 cm−1 to 557 cm−1 and the peak corresponding to the stretching vibration of N-H is shifted from 1418 cm−1 to 1456 cm−1, demonstrating the significant interactions (e.g., metal coordination and hydrogen bonding) between the Fe3O4 nanoparticles and chitosan during MCN preparation.
MCN Characterization

It is obvious from Figure 2a,a',b,b' that the surface morphologies of raw chitosan and Fe3O4 nanoparticles are quite different from each other. Chitosan powder has a rough, flat, and smooth film-like surface structure [47], while Fe3O4 nanoparticles are spherical in shape [48]. It is evident from Figure 2c,c' that after encapsulating the Fe3O4 nanoparticles into the chitosan matrix, the spherical shape of the Fe3O4 nanoparticles is diminished and nearly a core-shell network is formed [49]. The EDS spectrum of the MCN (see Figure S1, supporting information) shows distinctive peaks at 0.28, 0.39, 0.52, and 0.75 keV that correspond to C, N, O, and Fe, respectively, confirming the successful formation of the MCN.

Removal of ARS Dye by Adsorption onto MCN

The removal of ARS dye by the MCN and the Fe3O4 nanoparticles was carried out as a function of contact time and the results are compared in Figure 3A. It is apparent that the MCN could rapidly achieve 62% ARS dye removal within the first five minutes, whereas the Fe3O4 nanoparticles resulted in only 16% ARS dye removal. This might be due to the availability of more adsorption sites for the ARS dye at the beginning of the adsorption process. However, the ARS dye removal by the MCN did not increase greatly after 60 min. Therefore, 60 min was fixed as the contact time for the subsequent experiments. In the case of the Fe3O4 nanoparticles, the ARS dye removal increased with an increase in contact time up to 60 min and then decreased with a further increase in contact time, which might have been due to desorption of the ARS dye molecules at longer contact times. Similar results have already been reported for different dyes [50]. Compared to the Fe3O4 nanoparticles, the MCN shows excellent performance for ARS dye removal due to the presence of additional functional groups in the MCN.
The pH of the dye solution influences the adsorption process by changing the zeta potential of the adsorbent [51]. Thus, ARS dye removal by the MCN was examined when changing the pH of the SMW over the range from pH 3 to 10; the obtained results are shown in Figure 3B. It is obvious that the removal of ARS dye by the MCN is significantly affected by the solution pH, and the removal increased from 52% (q = 10.4 mg/g) to 84% (q = 17 mg/g) with decreasing pH, from pH 10 to pH 4. The removal of ARS dye remains almost the same (83-84%) between pH 4 and pH 6. With a further decrease in pH from pH 4 to pH 3, the ARS dye removal decreased. It is believed that under strongly acidic conditions, the sulfonate groups (-SO3−) of the ARS dye combine with H+ ions and form -SO3H, resulting in a lower adsorption performance. A similar trend was also observed by Fan et al. [34]. Moreover, the pHzpc of the MCN was determined to be 6.6, suggesting that the MCN surface was positively charged below pH 6.6 and negatively charged above pH 6.6. The ARS dye is negatively charged in aqueous solution; hence, the ARS dye molecules are attracted electrostatically below pH 6.6 and repelled electrostatically above pH 6.6. Further, under alkaline pH conditions, the ARS dye molecules may compete with OH− ions for the same adsorption sites, resulting in lower ARS dye removal. Nevertheless, the MCN retains 50% ARS dye removal efficiency even at pH 10, which can be attributed to the involvement of other driving forces, such as van der Waals interactions and hydrogen bonding [5].

As shown in Figure 3C, the ARS dye removal increased from 47% to 78% with an increase in MCN dosage from 0.005 g/L to 0.025 g/L, but then remained more or less constant up to the highest dosage of 0.05 g/L; even a small reduction in the ARS dye removal from 78% to 74% could be observed. This might be due to the increase in the number of adsorption sites available for the ARS dye molecules at increasing dosages, which at the same time led to the formation of agglomerates at higher dosages, decreasing the surface area available for the ARS dye [52]. Thus, the two effects oppose each other, resulting in optimal removal at a medium dosage.

The removal of the ARS dye was examined as a function of the initial ARS dye concentration (10, 20, 40, 80, and 100 mg/L) and varied contact time (0-60 min); the obtained results are shown in Figure 3D. As expected, for the higher initial ARS dye concentrations and longer contact times, higher ARS loadings were achieved due to the greater adsorption of ARS dye molecules onto the MCN. The adsorption data obtained from this experiment were successfully fitted to the Langmuir and Freundlich non-linear isotherm models (see Figure 4A,B) and the Lagergren pseudo-first order and pseudo-second order non-linear kinetic models (see Figure 4C,D).
The determined values of the isotherm parameters are given in Table 1. Based on the correlation coefficient (R2) values shown in Table 1, the experimental data fit better to the Langmuir isotherm model than to the Freundlich isotherm model. This implies that the ARS dye adsorbed onto the MCN surface as a homogeneous monolayer. The determined Langmuir maximum adsorption capacity of the MCN for the ARS dye was 166.4 mg/g.

Table 1. Langmuir isotherm (qmax (mg/g), KL (L/mg), R2) and Freundlich isotherm (n, KF (mg/g(L/mg)^(1/n)), R2) parameters.

To assess the performance, the maximum adsorption capacity of the MCN was compared with that of other adsorbents for ARS dye removal, as presented in Table 2. In addition to facile synthesis, eco-friendly properties, and easy separation, it is evident from the comparison that the prepared MCN is superior to many of the adsorbents listed in Table 2.

Table 2. Comparison of maximum adsorption capacity (qmax) for ARS dye removal with other adsorbents.
- Poly(catechol-tetraethylenepentaminecyanuric chloride)@hydrocellulose: 284.1 mg/g [53]
- Activated carbon engrafted with Ag nanoparticles: 232.6 mg/g [54]
- Gold nanoparticles loaded on activated carbon: 123.4 mg/g [55]
- Activated carbon: 85.

Table 3 provides the obtained values of the different kinetic parameters. It is apparent from the R2 values that the pseudo-second order kinetic model fits the adsorption data better than the Lagergren pseudo-first order model (see Table 2), confirming that the adsorption of the ARS dye onto the MCN can be considered a chemisorption process [34].

Table 3. Lagergren pseudo-first order and pseudo-second order kinetic parameters for the ARS dye adsorption onto the MCN.

Influence of DOM (as DOC) Concentration on ARS Dye Removal

Wastewaters contain not only the ARS dye but also DOM, which results in unavoidable interference during the adsorption process. Therefore, it is important to investigate the ARS dye adsorption onto the MCN in the presence of DOM. It can be seen from the results (see Figure 5A) that the presence of DOM in SMW has a negative impact on the ARS dye removal. After 60 min of contact time, the ARS dye removal decreased by 45% (from 73% to 40%) with an increase in DOC concentration from practically 0 to 11 mg/L, revealing the significant influence of DOM on ARS dye removal.
The same effect was reported in another context by Guillossou et al., who studied the competition between DOM and 12 organic micropollutants (OMPs) during adsorption onto powdered activated carbon dosed in either ultra-pure water or wastewater effluent (DOC 7.3 mg/L) for contact times of 30 min and 72 h [61]. They found that the OMP removal decreased on average by 73% after a contact time of 30 min and by 30% after 72 h, and concluded that the lower removal could be attributed to competition between DOM and OMPs for the same adsorption sites and to hindrance of OMP diffusion due to pore blockage. In addition, the electrostatic repulsive forces between the DOM previously adsorbed onto the MCN and the ARS dye (i.e., both are negatively charged) might result in lower ARS dye removal [61].

This was further confirmed by comparing DOC and SUVA values before and after treatment. The DOC removal ranged from 19% to 8% for DOC background concentrations from 4 to 11 mg/L, demonstrating the occupation of adsorption sites by DOM compounds. The SUVA values determined for the SMW with different DOM content before and after treatment are shown in Figure S2 and ranged between 16.7 and 10.3 L/(mg·m) and from 6.2 to 7.1 L/(mg·m), respectively, indicating (≥4 L/(mg·m)) a high content of hydrophobic, aromatic, and high molecular weight DOM fractions [44]. Notably, the SUVA values obtained for the treated water indicate lower contents of hydrophobic, aromatic, and high molecular weight DOM fractions than before treatment, confirming the preferential adsorption of hydrophobic, aromatic, and high molecular weight DOM fractions onto the MCN.
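As a small illustration of the SUVA calculation and the interpretation used here, the sketch below implements the equation and the ≥4 / ≤3 L/(mg·m) thresholds cited from reference [44]; the UV254 and DOC sample values are hypothetical.

```python
# Sketch of the SUVA calculation and DOM-character interpretation described above.
def suva(uv254_per_cm: float, doc_mg_per_l: float) -> float:
    """SUVA (L/(mg*m)) = UV254 (1/cm) / DOC (mg/L) * 100 (cm/m)."""
    return uv254_per_cm / doc_mg_per_l * 100.0

def dom_character(suva_value: float) -> str:
    """Classify DOM fractions using the thresholds cited in the text [44]."""
    if suva_value >= 4.0:
        return "hydrophobic, aromatic, high molecular weight DOM"
    if suva_value <= 3.0:
        return "non-humic, hydrophilic, low molecular weight DOM"
    return "intermediate character"

# Hypothetical samples:
print(dom_character(suva(0.45, 4.0)))   # SUVA ~11 -> hydrophobic fractions
print(dom_character(suva(0.10, 4.0)))   # SUVA ~2.5 -> hydrophilic fractions
```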
Adsorption of Various Dyes onto MCN

To investigate the capability of the MCN to adsorb various kinds of dyes, the adsorption of indigo carmine (IC), Alcian blue (AB), and methylene blue (MB) dyes onto the MCN was examined in single-dye systems and the results were compared with those for ARS dye removal. According to the results shown in Figure 5B, the MCN had similar performance for the anionic dyes ARS (73%), IC (70%), and AB (81%), but showed poor performance for the cationic dye MB (11%). These results clearly show that the developed MCN is better at removing anionic dyes than cationic dyes. On the other hand, the prepared MCN showed a distinctly lower ARS dye removal (55%) from the mixture of ARS+IC+AB+MB (ARS (mix)) dye solutions compared to 73% in the single-dye system. This might be because of the competition between the different dyes for the same adsorption sites.

Regeneration and Reuse of Spent MCN

The sustainable reuse of an adsorbent is very important for its widespread application. Therefore, the regeneration and reuse of the spent MCN were examined for five consecutive cycles and the results are shown in Figure 6A. The ARS dye removal efficiency decreased as the number of reuse cycles increased; however, the MCN retained almost 73% of its original efficiency at the end of the fifth cycle. ATR-FTIR analysis was therefore performed to identify either incomplete ARS desorption or a loss of stability of the MCN after five consecutive desorption and reuse processes (i.e., after the fifth cycle). In addition to noticeable changes in peak intensity, it was apparent from the ATR-FTIR spectra of the regenerated MCN (see Figure 6B) that there was a shift in peak position compared to the virgin MCN, suggesting the occurrence of intermolecular interactions during the regeneration process. At the same time, no peaks corresponding to the ARS dye molecule were observed, indicating complete desorption of the ARS dye molecules. The decrease in removal efficiency might thus be a result of the alteration of the MCN surface properties during the adsorption-desorption processes.
Thermal Treatment of Spent MCN

N-doped carbon materials have been produced successfully from chitosan via a pyrolysis process and used in various applications [62,63]. The aim of this experiment was therefore to produce an N-doped magnetic carbon material from the spent MCN. In this context, after the fifth reuse cycle, the ARS dye-loaded MCN was pyrolyzed under the given experimental conditions to produce N-doped magnetic carbon (see Figure 6C), which was subsequently used as an adsorbent for the removal of ARS, IC, and MB dyes in aqueous solution. It is obvious from the inset image in Figure 6C that the magnetic property of the material was retained even after the pyrolysis process; hence, it can be easily separated by an external magnetic field. Further, it can be seen from Figure 6D that the produced N-doped magnetic carbon was able to remove 10%, 13%, and 13% of ARS, IC, and MB dyes, respectively. Though the produced N-doped magnetic carbon is less efficient for dye removal than the MCN, its adsorption properties could most likely be enhanced by performing the pyrolysis under other experimental conditions. Further characterization studies are also needed to identify the physicochemical properties of the N-doped magnetic carbon.

Conclusions

In this work, a magnetic chitosan core-shell network (MCN) was successfully synthesized for the sustainable removal of alizarin red S (ARS) dye from aqueous solution. It was deduced from the batch experiments that the ARS dye adsorption onto the MCN was quite fast, increased with increasing contact time, and reached equilibrium within 60 min. Noticeable ARS dye removal occurred under acidic conditions due to electrostatic interactions. The adsorption process was well described by the Langmuir isotherm and followed pseudo-second order kinetics. The Langmuir maximum monolayer adsorption capacity of the MCN for the ARS dye was determined to be 166.4 mg/g, which was higher than that of many other adsorbents. The prepared MCN showed almost similar adsorption behavior toward other anionic dyes such as indigo carmine and Alcian blue. In the presence of 11 mg/L dissolved organic carbon (DOC) and a contact time of 60 min, the removal of the ARS dye decreased from 73% to 40%. This may have been due to a combination of pore blockage, competition between the ARS dye and dissolved organic matter (DOM) for the same adsorption sites, and electrostatic repulsive forces between the ARS dye in solution and the DOM adsorbed onto the MCN, or to these factors individually. Furthermore, regeneration and reuse experiments showed that more than 70% of the original removal efficiency was retained, even after five consecutive cycles. Most importantly, the exhausted MCN was pyrolytically converted into N-doped magnetic carbon and explored as an adsorbent for different dyes. Nevertheless, further studies are required to optimize the pyrolysis process and to improve the physicochemical properties of the N-doped magnetic carbon.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ma14247701/s1, Figure S1: EDS spectra of MCN, Figure S2: Comparison of SUVA values for waters with different DOC concentration (before and after treatment). Data Availability Statement: The data presented in this study are available on request from the corresponding author.
2021-12-16T16:38:47.759Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "80b3cf137edb592d464589305a8a8c1046832507", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/14/24/7701/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5b2869e02a78f1da605add651f1a1c108f876941", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
18426035
pes2o/s2orc
v3-fos-license
Generalized Mechanism of Field Emission from Nanostructured Semiconductor Film Cathodes

Considering the effect of both the buffer layer and the substrate, a series of ultrathin multilayered structure cathodes (UTMC) is constructed to simulate the field emission (FE) process of nanostructured semiconductor film cathodes (NSFCs). We find a generalized FE mechanism of the NSFCs, in which there are three distinct FE modes as the applied field changes. Our results clearly show significant differences in FE between conventional emitters and nanofilm emitters: non-Fowler-Nordheim characteristics and resonant FE are inevitable for NSFCs. Moreover, controllable FE can be realized by fine-tuning the quantum structure of NSFCs. The generalized mechanism of NSFCs presented here may be particularly useful for designing high-speed and high-frequency vacuum nano-electronic devices.

Group-III nitrides exhibit both a spontaneous polarization P_SP and a piezoelectric polarization P_PZ. The total polarization P_tot = P_SP + P_PZ differs between nitrides, thus giving rise to the accumulation of interfacial polarization charge in nitride-based heterostructures. The built-in polarization causes a strong deformation of the quantum wells accompanied by a strong electrostatic field [22]. As group-III nitrides, AlxGa1−xN (AlGaN) alloys allow us to control both the spontaneous and piezoelectric polarizations by choosing different aluminum compositions. Moreover, by appropriately controlling the aluminum composition in AlGaN, the potential well depth can be modulated over a wide range. Under this circumstance, the band shape of the quantum structure can be effectively controlled. Therefore, in order to realize various quantum structures of NSFCs, AlGaN is selected as the nanostructured semiconductor film layer in the present theoretical model.

Since the AlGaN is strained to the buffer layer and the band structure can be affected by the substrate, both the buffer layer and the substrate should be taken into account to simulate the band structure of a real FE cathode. On the basis of the aforementioned analysis, we assume that the quantum well cathodes are grown on an n-type (1 × 10^18 cm−3) Si substrate that is 1.0 μm in thickness. On top of this Si substrate is a 200 nm-thick n-type (1 × 10^18 cm−3) GaN (000-1) buffer layer, followed by a 4 nm-thick AlGaN (000-1) potential barrier layer. Finally, a 2 nm-thick GaN (000-1) layer is grown as the well layer.

Generally, the FE current can be given by [23]

J = ∫ J(E_x) dE_x,  (1)

where N(E_x) = (4π m k_B T / h^3) ln[1 + exp(−(E_x − E_F)/(k_B T))] is the supply function, k_B is Boltzmann's constant, T is the temperature, h is Planck's constant, and E_F is the Fermi energy. J(E_x) is the normal-energy distribution, written as

J(E_x) = e N(E_x) D(E_x),  (2)

which is made up of the transmission coefficient D(E_x) and the supply function N(E_x). The transmission coefficient D(E_x) can be calculated by the transfer matrix (TM) method of our previous work [24]. In the TM method for computing D(E_x), the potential barrier shape is a key parameter that dramatically affects the transmission coefficient. Herein, a more complicated and realistic image potential involving the image potential shift was introduced. The parameters are the same as in our previous work [24].
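As a rough numerical illustration of Eqs. (1)-(2) (not the transfer-matrix and image-potential calculation used in this work), the following Python sketch integrates the free-electron supply function against a simple WKB transmission coefficient for a triangular surface barrier; the work function, field, and temperature are arbitrary illustrative values.

```python
# Rough numerical sketch of Eqs. (1)-(2): supply function integrated against a WKB
# transmission coefficient for a triangular surface barrier. The work function,
# field, and temperature are illustrative values, not the AlGaN/GaN parameters.
import numpy as np

q    = 1.602176634e-19      # elementary charge (C)
m_e  = 9.1093837015e-31     # electron mass (kg)
h    = 6.62607015e-34       # Planck constant (J*s)
hbar = h / (2.0 * np.pi)
k_B  = 1.380649e-23         # Boltzmann constant (J/K)

def supply(E_x, E_F, T):
    """N(E_x) = (4*pi*m*kB*T/h^3) * ln(1 + exp(-(E_x - E_F)/(kB*T)))"""
    return (4.0 * np.pi * m_e * k_B * T / h**3) * np.log1p(np.exp(-(E_x - E_F) / (k_B * T)))

def wkb_transmission(E_x, barrier_top, field):
    """WKB estimate for a triangular barrier of height (barrier_top - E_x) at field F (V/m)."""
    height = np.clip(barrier_top - E_x, 0.0, None)            # J
    return np.exp(-4.0 * np.sqrt(2.0 * m_e) * height**1.5 / (3.0 * hbar * q * field))

def current_density(phi_eV=4.5, field_V_per_nm=3.0, T=300.0):
    """J = e * integral of N(E_x) * D(E_x) dE_x, in A/m^2 (E_F taken as 0)."""
    E_F = 0.0
    barrier_top = phi_eV * q
    F = field_V_per_nm * 1e9                                   # V/m
    E_x = np.linspace(E_F - 1.0 * q, barrier_top, 4000)        # integrate around E_F
    integrand = supply(E_x, E_F, T) * wkb_transmission(E_x, barrier_top, F)
    dE = E_x[1] - E_x[0]
    return q * np.sum(integrand) * dE

print(f"J ~ {current_density():.3e} A/m^2 at 3 V/nm")
```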
As a result, Al 0.64 Ga 0.36 N ternary nitride is used as QB layer. The energy band diagrams of the Al 0.64 Ga 0.36 N/GaN (4 nm/2 nm) quantum-well are shown in Fig. 1(a). It can be found that the conduction bands are strongly deformed when the built-in polarization is considered. It is noted that the energy barrier height created by Al 0.64 Ga 0.36 N is substantially increased by the high density of positive polarization charges C p (3.5 × 10 13 cm −2 ) at the interface between the GaN buffer layer and the Al 0.64 Ga 0.36 N QB layer. Under this condition, the electrons are attracted by a Coulomb force and accumulate at this interface (AlGaN/GaN (well/ barrier)), which may leads to a strong band bending. Figure 1(b) shows the characteristic of the FE current density variation versus the applied field for the Al 0.64 Ga 0.36 N/GaN quantum-barrier/well. The semilog plot shows three characteristics regions for different field. In the region A, which is at the beginning of the electron emission, there are many FE oscillation peaks, which is similar to a resonant FE behavior. In the region B, it shows an increase in the current which is distinct different from that of the region A. In the region C, a very steep increase of the current has been found. Such three type of FE are similar to the I-V characteristics obtained by UTSCs 6 . However, the relative FE mechanism is not clear, especially for the different structure of FE cathode with different field, a generalized FE mechanism of USFCs is further needed to develop. Here, field emission energy distribution (FEED) was firstly used to explore the novel FE mechanisms. Figure 2 shows the calculated variation in the electron transmission and FEED for Al 0.64 Ga 0.36 N/GaN. It is obvious that there are two distinct peaks when electrons tunnel through the dual-barrier potential well, indicating that two quantum energy-levels localize in the USFCs. The electrons are emitted by a FE mechanism from the quantized subbands ins the quantum-well. With increasing of the fields, the entire transmission increases by orders and shift toward low energy sides, which origins from strong band bending. Base on the calculated variation in the J-E and FEED characteristics, we presented three field electron emission modes. Resonant-tunneling-type field emission (RT-FE). At a field below 2.5 V/nm, electrons origins from 2# quantum energy-level near the Fermi level E F and resonant tunnel through the vacuum barrier, there are some resonant FE peaks. When the external electric field ranging from 2.5 to 3.0 V/nm, the 1# quantum energy-level, which is far from E F in the low-field, gradually shift toward E F . Due to the dramatic increase of the electron supply when the energy shift from above E F to E F , the electrons origins from both 1# and 2# quantum energy levels when the field is at 2.5-3.0 V/nm. Thus FE current increases remarkably, corresponding to the region A in Fig. 1(b). Saturated Fowler-Nordheim field emission (S-FN). As the applied field is increased above 3.0 V/ nm, 1# quantum energy-level shift from E F to below the E F . Since the electron transmission of 1# quantum energy-level decrease with the increase of the applied field as shown in Fig. 2, considering the effect of electron transmission with electron supply, FEED peak of 1# quantum energy-level are increased tardily rather than that of the sharp increase at the lower field (from 2.5 to 3.0 V/nm), leading to the FE current increase slowly as shown in region B in the corresponding J-E curve. T-FN). 
When further increased external electric field beyond 4.0 V/nm, due to the vacuum-level fall to the E F , the FE electrons can directly tunnel through the single AlGaN barrier. As a result, the electron transmissions are dramatically promoted. Moreover, since the electrons cannot be effectively confined in the QW, and the quantized subbands disappeared, as well as the FEED peaks in the Fig 2. Therefore, FE modes changing from S-FN to T-FN, corresponding to the region C in Fig. 1(b). If we regard AlGaN QB as vacuum barrier, such mode is similar to mixed thermal and FN field electron emission. Structure Effects For The Multilayered Cathodes In order to confirm that such three field electron emission modes are universal for all USFCs, the effects of quantum-barrier/well width, quantum structure band shape, and quantum-well depth on the USFCs were also investigated, due to the limit paper, some important and necessary results were presented in the following part. For more results, it can also be found in the support information (Figs S1-S15). Quantum-barrier/well width effects. Figure 3 shows the characteristic of the emission current density versus the applied field from the Al 0.64 Ga 0.36 N/GaN quantum-barrier/well with different layer thickness proportion, where the total thickness of the AlGaN/GaN films are kept at 6 nm. It is clear that, three general FE modes have been observed with the change of the field. Calculated results shown that, with the change of the quantum structure, the I-V characteristic is distinctly changed. Moreover, Fig. 4 reveals the electron transmission coefficient and FEED of 6 nm Al 0.64 Ga 0.36 N/GaN quantum-barrier/well with the change in layer thickness proportion. It is visually obvious that the electron transmission coefficient is dramatically affected by the width of the QW. In addition, not only the magnitude of the transmission coefficients but also the positions of FEED peaks can be changed tremendously when the individual layer thicknesses of the FE structure are modulated. It is found that the difference in the neighbor quantum energy-levels Δ E s increase remarkably as the width of QW decreases. By comparing the FEED of these three FE configurations, it is easily found field electron emission changing from single-energy-level to multi-energy-levels. It can be approved clearly in Fig. 4 that there are larger Δ E s in the QW for having the lower QW width, and the larger Δ E s lead to the less quantum energy-level. In the case of the structure of 4 nm/2 nm, near the Fermi level (at zero position), there has only one quantum energy-level, which plays a major contribution to the FE current. On the other hand, the larger QW width leads to the decrease of Δ E s in the QW, and the decreasing Δ E s lead to the more quantum energy-level near the Fermi level, which may supply more FE electron. The above phenomenon was also supported by the experimental observations from Johnson 19 and Kildemo 25 et al. In addition, compared with single-energy-level electron emission, multi-energy-levels electron emission is sluggish response to external electric field, thus few resonant FE current peaks can be found from the FE J-E curves. Such distinct resonant FE current peaks act as negative differential conductance, as can be seen from Fig. 3(c), and similar experimental factss have been found 13,18,19 . Band structure shape effects. In order to investigate the band structure effect, the band shape or the well depth should be modulated individually. 
Recently, an approach to control the electrostatic fields by using quaternary Al x In y Ga 1−x−y N (AlInGaN) layers could be an attractive alternative for c-plane GaN-based heterostructures since the introduction of quaternary AlInGaN layers would allow us to control both spontaneous and piezoelectric polarizations by choosing different aluminum and indium compositions 8,26 . Therefore, by appropriately controlling the aluminum and indium compositions in AlInGaN, the built-in charge density at the interface between GaN and AlInGaN barrier can be adjusted. Under this circumstance, the quantum structure band shape can be effectively modulated. Base on the structure of 2 nm/4 nm quantum-barrier/well that mentioned above, by fine-tuning different aluminum and indium compositions of the quaternary AlInGaN layers, the band structure shape were effectively controlled by the way of built-in charge density modulating at the interface. And at the same time the QW depth was not changed. Seeing energy band of the quantum structure for different QB quaternary composition in Fig. 5(a), it is obvious that the AlInGaN barrier is evidently lowered by increasing Al composition. So in Fig. 5(b), we calculated the FE characteristic with the change of the applied field. It can be seen clearly that the FE properties is greatly improved with increasing applied field. The results also indicate that the three general FE modes were not influenced by the change of band structure shape. However, it is obvious that the resonant FE current peaks can shift toward low-field when the shape of quantum structure band is modulated. Quantum-well depth effects. In order to better understand the effect of quantum-well depth, the AlInGaN layer was used to control the potential well depth by the method of compositions modulation and keeping the built-in charge density in the interface unchanged (C p = 1.6 × 10 13 cm −2 ). Base on the calculated variation in the J-E [ Fig. 6(b)] and their FEED characteristics (not shown here), it is indicates that the three general FE modes are not influenced by the change of quantum-well depth. And the FE J-E curves show that, although the FE current density were reduced with increasing the depth of the QW, the number of resonant FE current peaks were remarkably reduced and the intensity of resonant FE current peaks were dramatically promoted, and single resonant FE current peak is distinctly observed. As it is well known, the operational speed of solid state microelectronic devices is hampered by the saturation velocity of electrons 27 . Combined with the band shape and QW depth modulation, the prominent NDC characteristic of NSFCs in the low-field are extremely promising for operation high-speed electronics, making the generalized mechanism presented here particularly useful for design high-speed and high-frequency vacuum microelectronic devices. In particular, the electron velocity in vacuum can approach the speed of light, which are far faster nearly three orders of magnitude than that of solid state electronic devices, making the FE-based devices suitable for high-speed and high-frequency applications. Conclusions In summary, considering the effect of both the buffer layer and substrate, we present a generalized FE mechanism for the NSFCs. This generalized FE mechanism can be divided into three electron emission modes, including resonant-tunneling-type field emission in the low-field, saturated Fowler-Nordheim field emission, and Mixed Thermal-FN field emission in the high-field. 
Moreover, when the quantum-barrier/well width, band structure shape, and quantum-well depth of NSFCs are modulated, the three general FE modes persist. In particular, NSFCs are a special case of ultrathin multilayered structure cathodes. Therefore, this mechanism can be regarded as describing the novel physical properties of FE from NSFCs. It is also found that the electron emission characteristics of these NSFCs can be modulated by engineering the quantum structure. In addition, by fine-tuning the shape and depth of the quantum structure at a proper QB/QW width, a single resonant FE current peak can be achieved at low fields, making NSFCs suitable for high-speed and high-frequency vacuum microelectronic devices.
2018-05-08T18:33:36.811Z
2017-03-08T00:00:00.000
{ "year": 2017, "sha1": "5bc33ab2cd980d225bb7aee17456bc248d145675", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/srep43625.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5bc33ab2cd980d225bb7aee17456bc248d145675", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
53746841
pes2o/s2orc
v3-fos-license
Deeper Interpretability of Deep Networks

Deep Convolutional Neural Networks (CNNs) have been one of the most influential recent developments in computer vision, particularly for categorization. There is an increasing demand for explainable AI as these systems are deployed in the real world. However, understanding the information represented and processed in CNNs remains in most cases challenging. Within this paper, we explore the use of new information theoretic techniques developed in the field of neuroscience to enable a novel understanding of how a CNN represents information. We trained a 10-layer ResNet architecture to identify 2,000 face identities from 26M images generated using a rigorously controlled 3D face rendering model that produced variations of intrinsic (i.e. face morphology, gender, age, expression and ethnicity) and extrinsic factors (i.e. 3D pose, illumination, scale and 2D translation). With our methodology, we demonstrate that, unlike humans, the network overgeneralizes face identities even under extreme changes of face shape, but is more sensitive to changes of texture. To understand the processing of information underlying these counterintuitive properties, we visualize the features of shape and texture that the network processes to identify faces. Then, we shed light on the inner workings of the black box and reveal how hidden layers represent these features and whether the representations are invariant to pose. We hope that our methodology will provide an additional valuable tool for the interpretability of CNNs.

Introduction

Hierarchical CNNs and their multiple nonlinear projections of visual inputs have become prime intuition pumps to model visual categorization in relation to the hierarchical occipito-ventral pathway in the brain ([2,4,17,35,29]). However, understanding the information represented and processed in CNNs is a cornerstone of the research agenda whose resolution would enable more effective network designs (e.g. by using CNNs as modular building blocks that can perform specific functions), more robust practical applications (e.g. by predicting adversarial attacks) and broader usage (e.g. as information processing models of the brain). Here, we developed a new methodology to address the deeper interpretability of the information processing mechanisms of CNNs and tested its applicability in a case study. A starting point for understanding information processing in CNNs (and the brain) is to identify the features represented across their respective computational hierarchies. In CNNs, multi-layered deconvolution techniques (deconvnet) [39] can identify features of increasing complexity and receptive field size, from those represented in the lower convolution layers to the mid- and higher-level layers. In the brain, reverse correlation has been successfully applied to visualize the receptive fields of different brain regions along the occipito-ventral hierarchy [25,31,28,11,41]. An open question remains whether a well-constrained CNN (i.e. constrained by architecture, time, representation, function and so forth) could learn the mid-to-high-level features that flexibly represent task-dependent visual categories in the human visual hierarchy. To evaluate the usefulness of CNNs as models of brain computations, researchers can quantify their predictive power for neural responses (i.e. how accurately hidden layers predict the activity of specific brain regions), and also assess the algorithmic understanding they enable (i.e.
the information processing light they shed on computations in brain networks). To quantify predictive power, researchers can compute the similarity between the activity of CNNs' hidden layers and that of brain regions in response to the same stimulus categories (e.g. [2,4]). However, a deeper similarity of computation is necessary to use CNNs as understandable models of the underlying visual categorization mechanisms. Without such a deeper understanding of information processing, all that CNNs offer are layered silicon black boxes to predict the performance of the layered wet ones, not to explain how brains achieve these categorizations across the occipito-ventral hierarchy. In both CNNs and the brain, we need to model how stimulus information is transformed across hidden layers and brain regions to produce task-dependent responses in the hierarchy that ultimately lead to a categorization response. Our main contribution to this huge challenge is to propose a new psychophysical methodology based on information theory with which we could understand how the brain reduces the high dimensional visual input to the low dimensional features that support distinct behaviors [41]. Its key feature is better control of stimulus variation, to understand the stimulus information underlying CNN categorization responses and its transformations across the layers. Thus, rather than using an existing database of varied images from multiple natural categories and benchmarking CNN performance, we rigorously controlled the factors of image generation using a single stimulus category and task, i.e. faces and their identification. Our approach enables a deeper understanding of the information processing within CNNs, which in turn enables their usage as understandable information processing models of the brain [16,41].

Related Work

Face categorization is an important benchmark in human and machine vision research, because it is a well-constrained stimulus class that nevertheless conveys a wealth of different social signals that can be mathematically modelled for real world applications [24,15]. In human vision, the challenge is to understand where, when and how information processing mechanisms in the brain realize face identification, given extensive image variations such as those presented in Figure 1 (plus translation and scaling). Psychophysical reverse correlation techniques (e.g. Bubbles, [7]) enabled reconstruction of the stimulus information underlying various face recognition tasks, using either behavioral or brain measures [33,14,41]. In particular, new methods can represent the features relevant to different categorization tasks and isolate the brain regions where, and when, these features are combined to achieve behavior [41]. Representational Similarity Analysis (RSA, [18]) is another popular method that compares the responses of different architectures (e.g. human behavior, computational models and brain activity) to the same input stimulus categories. In its current applications, it does not isolate the stimulus features responsible for the responses and thus does not reveal the deeper similarities of information processing that cause the responses. In computer vision, the challenge has been to increase categorization performance using deep learning methods. The approach is to use large datasets of images (e.g.
Deep-Face [34], FaceNet [30], face++ [43], Labeled Faces in the Wild database (LFW) [10], Youtube Faces DB [36]) and demonstrate that a well-designed and trained deep neural network can outperform humans [24]. However, understanding the information processing underlying their high performance levels remains a challenge that must be resolved to address the shortcomings revealed by adversarial testing. There is therefore a strong focus on better understanding CNNs. For example, Zeiler and Fergus [39] famously used deconvolutional networks to identify the image patches responsible for patterns of activation. Relatedly, Simonyan et al. [32]'s visualization technique based on gradient ascent can generate a synthetic image that maximally activates a deep network unit. The Class Activation Maps (CAM) of Zhou et al. [42] can highlight the regions of the image the network uses to discriminate [24]. [27] built a locally interpretable model around a particular stimulus, to determine the parts of the image (or words of a document) that are driving the model's classification.

(Figure 2 caption) Generalized Linear Model of Face Identity - Random Identity Generation. A. A given 3D face identity comprises random multivariate shape and texture information dimensions. B. We constructed a generative model by applying a Generalized Linear Model, independently for shape and texture, to a database of 3D scanned faces. We extracted the variance associated with the intrinsic factors of face age, sex, ethnicity and their interactions, leaving out identity residuals for each scanned face (illustrated only for shape). We applied Principal Components Analysis to the residuals. In generative mode, to produce one random face identity, we multiplied a random vector defining each random identity by the principal components of identity residuals, to create random identity residuals which were then added to the categorical average.

Here, to develop an understandable AI of CNNs and understand their inner information processing, we examined the relationships between three classes of variables: stimulus feature dimensions, hidden layer responses and output responses [41]. This is a different approach to typical CNN research because we aim to: (1) isolate and control the main factors of stimulus variance to (2) precisely measure the layer-by-layer co-variations of these factors that influence network output responses. Such tight psychophysical control is difficult to achieve with the large datasets of unconstrained 2D images.

Generative Model for 3D Faces

To achieve these goals, we used a generative model of the face information that controls and tests the effect of each factor of face variance (i.e. the objectively available information) on CNNs' performance. Though Generative Adversarial Networks (GANs) [6] provide image-to-image translation (e.g. CycleGAN [44] and StarGAN [3]) with reasonable quality, and so can be treated as generative models, they do not explicitly characterize the generic generative parameters of the translated image (e.g. parameters for 3D face shape). Our Generative Model of 3D Faces (GMF) [38,40] mixed explicitly defined and latent generative parameters to generate 2,000 face identities with intrinsic variance factors of 500 random face variants × 2 genders × 2 ethnicities × 3 ages (25, 45 and 65 years) × 7 emotions (i.e. "happy", "surprise", "fear", "disgust", "anger", "sad" and "neutral").
Each of these combinations was further varied according to extrinsic factors of rotation and illumination (both ranges from -30°to +30°by increments of 15°) along the X and Y axes to produce a controlled database of a total of 26M images. Figure 1 illustrates the extrinsic and intrinsic variations for one example of face morphology. Figure 2 illustrates the image generation using 3D face shape (face texture is separately and similarly handled). Briefly, (see Zhan et al. [40] for details), a Generalized Linear Model (GLM) extracted the explicit factors of age, ethnicity, gender and their interactions from a database of 872 scanned real 3D faces while the remaining unexplained part of each identity was controlled by the principal components of the GLM residual matrix. To generate 2,000 random identities, we inverted the model, and multiplied the principal components of identity residuals with 500 random coefficients vectors to produce a distinct residual identity vector that defines each face identity which was then added to all permutations of the GLM factors (for a total of 2,000 identity vectors computed separately for shape, as shown in Figure 2, and texture, not shown). In Section 5 below, we added noise to the shape and texture vectors defining each identity and generated face images to test network performance. CNN: 10-layer ResNet A 10-layer ResNet learned to associate the face images with their identity. We chose ResNet because it is a state-ofthe-art architecture that achieved high classification performance on various datasets. We used only 10 layers (ResNet-10) to keep network complexity relatively low for the analysis of hidden layers detailed later. We applied the training and testing regime of [37]. For training, we randomly selected 60% of the generated face images, for a total of 15,750,000 images. At training, we applied data augmentation to increase data complexity and to alleviate overfitting by randomly scaling (between 1× and 2×) and translating images in the 2D plane (between 0 and 0.3 of the total image width and height). At testing, we used the remaining 40% images, for a total of 10,500,000 images. Generalization and Adversarial Testing ResNet correctly identified the testing image set with very high accuracy (i.e. over 99.9% across all variations of intrinsic and extrinsic factors, which would easily outperform humans using the same dataset [24]), though it was sensitive to the similarity of training data [37]. Generalization Testing A powerful methodology to test the boundaries of human categorization performance is to add or multiply the original stimulus with noise [22,15]. We applied a similar approach with ResNet, by adding noise directly into the generative model. Specifically, we added a random vector of multivariate Gaussian noise with diagonal covariance (separately for 3D shape and 2D texture) to the vector defining the identity of a face in the generative model. This produced stimulus variations in shape (and texture) around this identity. We kept the noise level at a 0.8 proportion of the coefficients defining the identity. In Figure 4A, the left (vs. right) column of images illustrates the top face identity with added shape (vs. texture) noise, while keeping its texture (vs. shape) constant. Note that whereas variations in shape (left panel) look like slightly different face identities to human observers, variations in texture (right panel) do not apparently change the identity of the face [40,23]. 
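The "invert the model" step and the noise procedure described above can be sketched in a few lines. The code below is a minimal illustration with toy arrays standing in for the 872 scanned faces and the GLM design; the array sizes, the number of retained components, and the way the 0.8 noise proportion is applied to the identity coefficients are our assumptions rather than the authors' exact implementation (texture is handled analogously to shape).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the scanned-face database: one flattened 3D-shape vector per
# scan, plus a design matrix coding age, sex, ethnicity and their interactions.
n_faces, n_dims, n_factors, n_components = 872, 3 * 4000, 12, 355   # sizes assumed
shapes = rng.normal(size=(n_faces, n_dims))          # flattened vertex coordinates
design = rng.normal(size=(n_faces, n_factors))       # GLM predictors

# 1) GLM: regress shape on the explicit factors; the residuals carry identity.
beta, *_ = np.linalg.lstsq(design, shapes, rcond=None)
residuals = shapes - design @ beta

# 2) PCA of the identity residuals (via SVD of the mean-centred residual matrix).
mean_resid = residuals.mean(axis=0)
_, _, components = np.linalg.svd(residuals - mean_resid, full_matrices=False)

# 3) Generative mode: a random identity = categorical average + random coefficient
#    vector projected through the principal components of the identity residuals.
def random_identity_coeffs():
    return rng.normal(size=n_components)

def render_shape(category_average, identity_coeffs):
    return category_average + identity_coeffs @ components[:n_components]

# 4) Noisy variations around one identity (generalization testing): add Gaussian
#    noise with diagonal covariance to the identity-defining coefficients; here the
#    noise SD is taken as 0.8 of each coefficient's magnitude, one possible reading
#    of the "0.8 proportion" in the text.
def noisy_variation(category_average, identity_coeffs, noise_prop=0.8):
    noise = rng.normal(scale=noise_prop * np.abs(identity_coeffs) + 1e-12)
    return render_shape(category_average, identity_coeffs + noise)

category_average = shapes.mean(axis=0)                # stand-in for a GLM category mean
coeffs = random_identity_coeffs()
variant = noisy_variation(category_average, coeffs)   # one noisy test exemplar
```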
We generated 10,000 such noisy variations of shape and texture for two example face identities and rendered them as testing images.

Adversarial Testing

To illustrate this counterintuitive performance, we adversarially tested ResNet with a 3D shape noise level 5 times higher than that defining each random identity, while leaving texture unchanged. Using 1,000 such adversarial faces for each identity, ResNet nevertheless overgeneralized them as the target identity (at 97% and 94% performance, respectively). Figure 4B reveals several examples of over-generalized grotesque faces (see green tick signs; red cross signs represent a rejection of the distorted face as an exemplar of the target identity). Adversarial testing compellingly illustrates that 3D face shape information is less important to ResNet than texture. It also demonstrates that our network would fail face identification tasks where immunity to adversarial exemplars is critical.

(Figure 4 caption) A. We added multivariate noise (shape, left, or texture, right panel) while keeping texture (left) vs. shape (right panel) constant and measured network identification accuracy in response to both. B. Adversarial testing with the addition of extreme shape noise nevertheless revealed high identification accuracy, but with generalization to grotesque faces for both identities.

Information Representation for ResNet Decision

Using again the multivariate noise procedure, we derived a deeper interpretability of the layers of ResNet, starting with its top decision layer, i.e. its categorization behavior. Across testing trials, noise introduces variations in the 3D location of each shape vertex and in the RGB values of each 2D texture pixel. ResNet responds to these variations both in its hidden layers and on its output layer. We first explain how we visualized the shape and texture features that modulated output unit responses. Following this, we apply a similar analysis to the hidden layers. The input variations due to noise produced real-valued variations of the output unit that responds maximally to the targeted identity, i.e. before the Argmax calculated across the 2,000 units of the decision layer. To visualize the shape vertices and texture pixels that modulate the ResNet output response, we computed with Mutual Information (MI) the relationship between stimulus variation (S) and output unit response (R), using a semi-parametric lower bound estimator (Gaussian-Copula Mutual Information, GCMI [13]). GCMI identifies vertices and pixels that affect the response for a given identity (i.e. the diagnostic vertices and pixels). GCMI therefore reveals the stimulus features the network must necessarily process between the input faces and their identification on the decision layer. In our methodology, we illustrate diagnostic information as the cyan set intersection between input information samples (the blue set) and the corresponding output decision responses (the green set, see Figure 5). Figure 5 shows the cyan diagnostic information reported on the two example identities. That is, the shape and texture features that support the network decisions for identity 1 (e.g. 3D vertices around the jaw line, mouth and forehead; 2D RGB pixels around the mouth) and for identity 2 (i.e. 3D vertices forming the cheeks and the forehead texture). We repeated this analysis across the 5 face viewpoints the network was trained on and found usage of the same diagnostic face features across viewpoints, i.e. viewpoint-invariance of the diagnostic features.
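A minimal sketch of the GCMI computation described above: each variable is rank-transformed, mapped through the inverse normal CDF, and the Gaussian MI of the resulting copula-normalized variables gives a lower-bound MI estimate. For simplicity the sketch treats one scalar per coordinate and omits the bias correction of the published estimator; the variable names and toy data are placeholders, not the paper's code.

```python
import numpy as np
from scipy.special import ndtri    # inverse standard-normal CDF

def copula_normalize(x):
    """Rank-transform x to the open unit interval, then map through the inverse
    normal CDF, giving a standard-normal variable with the same rank order."""
    ranks = np.argsort(np.argsort(x))
    return ndtri((ranks + 1.0) / (len(x) + 1.0))

def gcmi_1d(s, r):
    """Gaussian-copula lower-bound MI estimate (bits) between two scalar variables,
    e.g. the noise applied to one shape coordinate (S) and the real-valued response
    of the target identity's output unit (R). Bias correction is omitted."""
    zs, zr = copula_normalize(s), copula_normalize(r)
    rho = np.corrcoef(zs, zr)[0, 1]
    return -0.5 * np.log2(1.0 - rho ** 2)

# Toy example: an MI map over shape coordinates for one identity and viewpoint.
rng = np.random.default_rng(1)
n_trials, n_coords = 10_000, 2_000                    # placeholder sizes
coord_noise = rng.normal(size=(n_trials, n_coords))   # per-trial, per-coordinate noise
unit_response = 0.8 * coord_noise[:, 100] + rng.normal(size=n_trials)   # toy dependence

mi_map = np.array([gcmi_1d(coord_noise[:, v], unit_response) for v in range(n_coords)])
# Coordinates whose MI exceeds a chance threshold (e.g. from a permutation test)
# are the "diagnostic" vertices/pixels referred to in the text.
print(int(mi_map.argmax()), float(mi_map.max()))      # should single out coordinate 100
```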
Knowing what features ResNet uses to identify the two faces, we now track the organization of feature representation (diagnostic and not) in the hidden layers.

(Figure 5 caption) For each 3D shape vertex and 2D RGB pixel and each face orientation of the two face identities, we computed the Mutual Information between their variations due to multivariate noise and the real-valued identity response of ResNet on its top layer. This analysis informed the diagnostic information that the network must process in its hidden layers between stimulus and response. Hidden Layer Representation (magenta intersection and face features): for shape and texture information, mutual information reveals, for the first 6 principal components (PC) of ResNet layer 9.5 activity, the face features represented. Hidden Layer Representation for Decision (white intersection and face features): for shape and texture information, mutual information reveals, for the first 6 principal components (PC) of ResNet layer 9.5 activity, the subset of face features represented for decision.

(Figure 6 caption) Dissimilarity analysis of hidden layer 9.5 in ResNet. We organized the PC score responses of the layer to the face identity inputs by their orientation in depth (5 orientations, from -30° to +30° in 15° increments, with 10,000 noisy face exemplars per orientation and identity). The two identities demonstrate viewpoint-dependent responses of layer 9.5 to shape, and viewpoint-invariant responses to texture.

Information Representation in the Hidden Layers

We analyzed hidden layer representations, starting one layer down from the response layer (i.e. the average pooling layer after layer 9, henceforth called "layer 9.5"). First, we computed the multivariate activation of layer 9.5 by feeding ResNet with the 10,000 shape and 10,000 texture variation images for each identity and viewpoint used earlier. We reduced the dimensionality of the multivariate activation with a randomized Principal Components Analysis (PCA) algorithm [8,20], computed separately for each combination of identity and their shape and texture variations. This stage produced 4 matrices of 50,000 PC score vectors (5 viewpoints x 10,000 variations), one for each combination of identity and their shape and texture variations.

Property of Representations in the Hidden Layers

Remember that the output layer of ResNet responds to the same shape and texture face features across viewpoints (cf. Section 6). Here, we asked whether the activation of layer 9.5 represents viewpoint. To this end, we ordered the 4 matrices of PC score vectors by the 5 face viewpoints (i.e. -30° to +30°, in 15° increments). We computed a dissimilarity matrix [18] by cross-correlating the 50,000 PC score vectors, using the dissimilarity measure (1 - Pearson correlation) between any pair of score vectors. Figure 6 presents the results. For each identity, face shape elicited viewpoint-dependent activations on layer 9.5. The dissimilarity matrices reflect such viewpoint representations with a blocked structure across the diagonal, which demonstrates that the blocks of 10,000 face images at the same orientation are represented more similarly on the hidden layer than face images at any other orientation. In contrast, face texture elicited viewpoint-invariant activations on layer 9.5. Thus, the activity of layer 9.5 represents face orientation, but only for shape. For texture, the network has reduced this varying input dimension in the layers underneath 9.5.
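The randomized PCA and the (1 - Pearson) dissimilarity analysis just described can be sketched as follows; the activation dimensionality, the per-viewpoint subsampling, and the random placeholder data are assumptions made only for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Placeholder layer-9.5 activations: 5 viewpoints x 10,000 noisy exemplars of one
# identity/variation type, each row a flattened activation (512 units assumed).
n_views, n_trials, n_units = 5, 10_000, 512
activations = rng.normal(size=(n_views * n_trials, n_units))

# Randomized PCA of the multivariate layer activity, keeping the first 6 PCs.
pca = PCA(n_components=6, svd_solver="randomized", random_state=0)
scores = pca.fit_transform(activations)               # (50,000, 6) PC score vectors

# Dissimilarity = 1 - Pearson correlation between pairs of PC score vectors,
# with rows ordered by viewpoint (-30 to +30 degrees in 15-degree steps).
def dissimilarity_matrix(score_vectors):
    return 1.0 - np.corrcoef(score_vectors)

# The full 50,000 x 50,000 matrix is unwieldy, so subsample per viewpoint.
idx = np.concatenate([v * n_trials + rng.choice(n_trials, 200, replace=False)
                      for v in range(n_views)])
rdm = dissimilarity_matrix(scores[idx])
# Viewpoint-dependent coding appears as 5 low-dissimilarity blocks along the
# diagonal (one per orientation); viewpoint-invariant coding shows no such blocks.
```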
Representations Viewpoint Dependent and Viewpoint Invariant Features in the Hidden Layers We now know from the dissimilarity analysis of layer 9.5 activations that it differently represents face shape and texture. However, we still do not know which specific face shape and face texture features represented on layer 9.5 underlie the reported viewpoint-dependent/invariant performance. To derive such a deeper understanding of the information processed, we focussed our analysis on the first 6 principal components of activation of layer 9.5 that explain (26% -30%) of this layer's activation variance. We used again GCMI to quantify the relationship between the 10,000 input variations (in shape and texture) and the corresponding variable activations of the 6 PCs, separately at each of the five orientations (i.e. 10,000 trials per orientation). This analysis reveals all the shape and texture face features represented on layer 9.5. However, a subset of these features (the diagnostic features) are used by layer 10 for the final classification output. To dissect the diagnostic from the nondiagnostic features represented on layer 9.5, we repeated the analysis, substituting GCMI with information theoretic redundancy. Redundancy quantifies how the samples (S) (i.e. variations of each 3D shape vertex and RGB texture pixels) are co-represented (i.e. redundantly represented) in layer 9.5 activity (L) and output response (R). Formally, redundancy (Red) is the intersection of two mutual information quantities as shown below [21,1,12]: (2) We compared the feature representations derived with GCMI and redundancy on layer 9.5 to understand how this layer selects and inherits shape and texture for final decision on output layer 10. In Figure 6, the magenta GCMI faces demonstrate that the layer represents many different shape and texture features on its PCs. In contrast, the white faces computed with redundancy from the same PCs directly visualize the subset of shape and texture features represented for decision. In Figure 5, we can now compare the three critical classes of features derived in our framework (they represented as three colored set intersections): Namely, the cyan features of the top decision layer, the magenta GCMI features of layer 9.5 and the redundant white features of features. They reveal that only a subset of the shape and texture features represented in layer 9.5 (see magenta faces) are used by ResNet for final decision (see white and cyan faces, respectivey): for Identity One: primarily the white PC2 (shape) and PC1 (texture): for Identity Two primarily white PC2 (shape) and PC3 (texture). In sum, Figure 5 demonstrates how mutual information and redundancy methods can assist the interpretation of the hidden layers of deep networks, by separating information represented on a given layer that affects categorization response from that which does not (see [41] for a similar dissociation in brain representations). The methodology can be also extended to other layers to understand the information flow within the network. Conclusion and Discussion We trained a deep network on a controlled set of face images and found that it behaved dramatically differently to human face perception: performance was almost invariant to shape deformations, while being extremely sensitive to variations of texture. We achieved a deeper interpretation of the network with a methodology that tightly controls the generative dimensions of the tested visual category. 
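The redundancy comparison described in the preceding section can be operationalized in several ways; the sketch below uses the co-information form Red(S; L; R) = I(S; L) + I(S; R) - I(S; (L, R)), estimated with Gaussian-copula MI for each term. Reading "the intersection of two mutual information quantities" this way is our assumption and not necessarily the exact estimator used here; names and toy data are placeholders.

```python
import numpy as np
from scipy.special import ndtri

def copula_normalize_cols(x):
    """Columnwise rank transform followed by the inverse normal CDF."""
    x = x.reshape(len(x), -1)
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    return ndtri((ranks + 1.0) / (len(x) + 1.0))

def gaussian_mi(x, y):
    """Parametric Gaussian MI (bits) between the columns of x and the columns of y,
    computed on copula-normalized data."""
    x, y = copula_normalize_cols(x), copula_normalize_cols(y)
    logdet = lambda m: np.linalg.slogdet(np.atleast_2d(np.cov(m, rowvar=False)))[1]
    return 0.5 * (logdet(x) + logdet(y) - logdet(np.hstack([x, y]))) / np.log(2.0)

def redundancy(s, layer, response):
    """Co-information form: Red(S; L; R) = I(S;L) + I(S;R) - I(S; (L, R))."""
    joint = np.hstack([layer.reshape(len(s), -1), response.reshape(len(s), -1)])
    return gaussian_mi(s, layer) + gaussian_mi(s, response) - gaussian_mi(s, joint)

# Toy usage: S = noise on one shape coordinate, L = one PC of layer 9.5, R = output unit.
rng = np.random.default_rng(3)
s = rng.normal(size=10_000)
layer_pc = 0.7 * s + rng.normal(size=10_000)
out_unit = 0.6 * layer_pc + rng.normal(size=10_000)
print(f"Red = {redundancy(s, layer_pc, out_unit):.3f} bits")
```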
Following learning of varying but controlled face identity images, we used psychophysical testing with targeted multivariate noise (i.e. noise on the generative dimensions defining the face identity). We applied information theoretic measures to the triple samples; hidden layer Response; Decision and made several important new findings. First, we visualized the specific diagnostic shape and texture features the network uses to identify faces. Second, using redundancy we tracked the representation of diagnostic features in a hidden layer, separating it from other represented features. Finally, we dissociated properties of viewpoint-dependent representation of shape features from viewpoint-invariant representation of texture features, on the same hidden layer. We believe such deeper understanding of information processing in deep networks is now necessary to start establishing their algorithmic similarities to other architectures (e.g. brains or other networks). Our methodology can be extended to measure the relationship between input information samples and its representation in the layers of architecture 1 and architecture 2 as Red Samples; layer architecture 1; layer architecture 2 . It could also be fruitfully applied to better understand the information causes of adversarial attacks and, with further developments, to build CNN modules that perform specific functions on their inputs (e.g. a face identifier, pose identifier and so forth). Building from our work, the main challenge to further a deeper information processing understanding of CNNs is to better control the information they learn so we can test how it is represented and transformed in the network for various output responses. This can be achieved with two main approaches: First, by directly engineering new generative models of face, object and scene categories that faithfully reflect the statistics of real-world faces, object and scenes [26]. Second, by indirectly modelling (e.g. with CNNs) the latent generative factors of very large databases of face, object and scene images [5]. As with understanding information processing in the brain, we will only get out of CNNs what we put in.
2018-11-20T09:43:21.000Z
2018-11-19T00:00:00.000
{ "year": 2018, "sha1": "4b7cd8c7a9dd28f0c1959c36dc7dfeab46be1caa", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f756b8bc46d0390232ec689362b416e00214ba9a", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
224812876
pes2o/s2orc
v3-fos-license
Molecular and Biochemical Differences in Leaf Explants and the Implication for Regeneration Ability in Rorippa aquatica (Brassicaceae) Plants have a high regeneration capacity and some plant species can regenerate clone plants, called plantlets, from detached vegetative organs. We previously outlined the molecular mechanisms underlying plantlet regeneration from Rorippa aquatica (Brassicaceae) leaf explants. However, the fundamental difference between the plant species that can and cannot regenerate plantlets from vegetative organs remains unclear. Here, we hypothesized that the viability of leaf explants is a key factor affecting the regeneration capacity of R. aquatica. To test this hypothesis, the viability of R. aquatica and Arabidopsis thaliana leaf explants were compared, with respect to the maintenance of photosynthetic activity, senescence, and immune response. Time-course analyses of photosynthetic activity revealed that R. aquatica leaf explants can survive longer than those of A. thaliana. Endogenous abscisic acid (ABA) and jasmonic acid (JA) were found at low levels in leaf explant of R. aquatica. Time-course transcriptome analysis of R. aquatica and A. thaliana leaf explants suggested that senescence was suppressed at the transcriptional level in R. aquatica. Application of exogenous ABA reduced the efficiency of plantlet regeneration. Overall, our results propose that in nature, plant species that can regenerate in nature can survive for a long time. Introduction The ability of plants to regenerate into plantlets has been employed in "cutting," which is an agricultural and horticultural technique for plant propagation and may include root cutting, leaf cutting, and stem cutting [1]. Rorippa aquatica (Brassicaceae) propagates asexually by plantlet regeneration from leaf explants in nature [2,3]. R. aquatica is an amphibious plant belonging to the Brassicaceae family Leaf Explants of R. aquatica Are Highly Viable To test whether leaf explants of R. aquatica survive longer than those of A. thaliana, leaf explants 7 days after cutting were compared ( Figure 1A). Leaf explants of R. aquatica remained green ( Figure 1A), whereas those of A. thaliana exhibited etiolation at the margin and cut surface, indicating senescence ( Figure 1A). These results suggest that R. aquatica leaf explants are more viable than those of A. thaliana. To further assess the physiological ability of leaf explants to survive, their photosynthetic activity was measured from 0 to 14 days after leaf cutting. A. thaliana leaf explants cultured for more than seven days could not be used for photosynthetic analysis due to plant death; hence, the photosynthetic activity of these explants was measured from 0 to 7 days after cutting. The maximum quantum yield of photosystem II (Fv/Fm) of R. aquatica remained at approximately 0.8 up to day 7, after which it exhibited a gradual decrease ( Figure 1B). In A. thaliana, the Fv/Fm value began to decrease gradually from day 2 ( Figure 1B). The Fv/Fm value of A. thaliana leaf explants on day 7 (0.732) was nearly equal to that of R. aquatica on day 14 (0.731) ( Figure 1B). The electron transport rate (ETR) of R. aquatica leaf explants exhibited a gradual decrease from day 0 to 7 and then to day 14 ( Figure 1C). The ETR of A. thaliana on day 7 was lower than that on day 0 ( Figure 1C). The ETR of A. thaliana on day 7 was lower than that of R. aquatica on day 14 ( Figure 1C). These results indicate that R. 
aquatica leaf explants maintained their photosynthetic ability 14 days after cutting, when plantlets start to emerge.

(Figure 1 caption, panel C) Time-course measurements of the light-intensity-dependent electron transport rate (ETR). The vertical and horizontal axes indicate the relative photosynthetic electron transport rate (rel. ETR) and the time points, respectively. As A. thaliana explants cultured for more than seven days could not be assessed, measurements were only recorded from 0 to 7 days in A. thaliana leaf explants. Dotted and solid lines indicate the expression levels at the distal and proximal sides, respectively. Data are presented as mean ± standard error (SE) (n = 5).

Photosynthesis Is Required for the Survival of Leaf Explants and Plantlet Regeneration in R. aquatica

Photosynthetic activity measurements showed that the leaf fragments of R. aquatica could maintain their photosynthetic ability for a longer time. To assess whether the photosynthetic activity of leaf explants supports the ability of plantlet regeneration, R. aquatica leaf explants were cultured in dark conditions. After 23 days, almost all leaf explants were etiolated (Figure 2A). However, a few plantlets were regenerated, and these were also etiolated (Figure 2A). Under dark conditions, shoot regeneration was decreased while root regeneration was increased (Figure 2B). Furthermore, leaf explants of R. aquatica were cultured on agar medium containing DCMU, a photosynthesis inhibitor that blocks electron transfer between the primary (QA) and secondary (QB) quinone electron acceptors on the reducing side of PSII [6]. DCMU exerted two types of effects on the leaf explants, i.e., strict and permissive effects. Almost all leaf explants exhibited discoloration followed by death (strict effect) (Figure 2C); 75% of leaf fragments showed the strict effect. A few leaf explants remained green and regenerated shoots (permissive effect) (Figure 2C,D). These results indicate that maintenance of photosynthesis might be important for plantlet regeneration.

Phytohormones Are Regulated Differently in R. aquatica and A. thaliana Leaf Explants

To assess whether phytohormone levels trigger senescence in leaf explants, endogenous ABA and JA levels in R. aquatica and A. thaliana leaf explants were quantified over time, separately for the distal and proximal sides. In R. aquatica leaf explants, the ABA level increased at both the distal and proximal sides for three days, following which it decreased (Figure 3A). The ABA levels in A. thaliana leaf explants also increased up to day 3, at both the distal and the proximal sides (Figure 3A). However, the levels at later time points could not be analyzed due to the death of these explants. JA levels in R. aquatica decreased gradually, at both the distal and proximal sides (Figure 3B). In A. thaliana, JA levels increased rapidly after cutting (i.e., 1 h), and then decreased on day 1 (Figure 3B). Overall, both ABA and JA levels decreased in R. aquatica, at both the distal and proximal sides of the explants (Figure 3A,B). Additionally, to test the differences in the immune response of the two explants at the phytohormone level, endogenous SA levels were also quantified in R. aquatica and A. thaliana leaf explants. The SA level in R. aquatica was considerably lower than that in A. thaliana, and it did not change over time (Figure 3C).

Senescence and Immune Responses Are Regulated at the Transcriptional Level in R. aquatica Leaf Explants

Phytohormone quantification revealed changes in ABA and JA levels during plantlet regeneration in R. aquatica (Figure 3A,B). To analyze how these hormones were regulated at the transcriptional level, RNA-seq transcriptome data of R. aquatica and A. thaliana leaf explants were used. NAC-like, activated by APETALA3/PISTILLATA (RaNAP), an ortholog of an important positive regulator of leaf senescence in A. thaliana and Oryza sativa [7][8][9][10], was upregulated only at the distal side of the explant at later time points (Figure 4B). AtNAP was rapidly upregulated at both the distal and proximal sides after leaf cutting in A. thaliana (Figure 4B), indicating earlier activation of the ABA response in A. thaliana. SENESCENCE ASSOCIATED GENE 113 (RaSAG113), an ortholog of a negative regulator of ABA signaling in A. thaliana [11], was not changed until day 6 in R. aquatica, whereas it was upregulated on day 1 in A. thaliana leaf explants (Figure 4A). Additionally, upregulation of RaSAG113 was observed only at the proximal side of the explant (Figure 4A). Although orthologs of the JA biosynthesis genes LIPOXYGENASE (LOX) and OXOPHYTODIENOATE-REDUCTASE (OPR) were upregulated at both the distal and the proximal sides at earlier time points, their expression levels remained low at later time points in R. aquatica (Figure 4C). LOX2 was upregulated in A. thaliana after 1 h (Figure 4C). OPR1 was upregulated after 1 h in A. thaliana, whereas no such change could be detected in R. aquatica (Figure 4D). Orthologous genes of TEOSINTE BRANCHED/CYCLOIDEA/PCF (TCP2), TCP4, and TCP10, transcription factors related to the biosynthesis of JA [12], were downregulated in R. aquatica (Figure 4E-G). These results further support the persistence of R. aquatica leaf explants from a transcriptional aspect. RaMYC2, an ortholog of a gene which activates JA signaling during saprobe infection in A. thaliana [13], was downregulated following an initial upregulation, whereas it exhibited rapid downregulation in A. thaliana (Figure 5A). It was noteworthy that both RaMYC3 and RaMYC4 were upregulated, whereas they were downregulated in A. thaliana (Figure 5B,C).

Exogenous ABA and JA Affect the Efficiency of Plantlet Regeneration

To examine the effect of ABA and JA on the efficiency of plantlet regeneration, R. aquatica leaf explants were cultured on agar media containing either ABA or MeJA. On the ABA-containing medium, the color of the leaf explants changed to red and yellow (Figure 6A), and the number of regenerated organs decreased significantly (Figure 6B). The leaf explants also changed color to red on the MeJA-containing medium (Figure 6C). However, the number of regenerated organs was increased (Figure 6D).

Internal Leaf Structure May Affect the Viability of Leaf Explants

Next, to investigate whether there are any other factors affecting the viability of leaf explants of R. aquatica, the intercellular structure of rosette leaves of R. aquatica and A. thaliana was compared. The intercellular space was narrower in R. aquatica than in A. thaliana leaves (Figure 7A). Furthermore, to explore the differences between plant species that can and cannot regenerate plantlets from leaf explants in nature, the amounts of endogenous auxins and the expression profiles of auxin-related genes were analyzed in A. thaliana. Similar amounts of indole-3-acetic acid (IAA) and its metabolites were observed at both the distal and proximal sides of A. thaliana leaf explants (Figure 7B). AtCYP79B2, one of the auxin biosynthesis genes, was upregulated after day 1 (Figure 7C). Remarkably, YUCCA (YUC), an auxin biosynthesis gene, was almost unchanged (Figure 7D,E). These results suggest that the increased endogenous IAA was synthesized via the CYP79B2 pathway. The expression profile of the PIN-FORMED1 (AtPIN1) gene, encoding an auxin transport protein, was upregulated at the distal side (Figure 7F).
AUXIN RESPONSIVE FACTOR7 (ARF7) and ARF19, which are auxin responsive genes, were also downregulated in A. thaliana ( Figure 7G,K), however, expression of RaARF7 and RaARF19 were not downregulated like A. thaliana ( Figure 7H-J,L-N). Discussion We studied the effect of the viability of detached leaves on the efficiency of plantlet regeneration in terms of photosynthetic activity, leaf senescence, and immune response in R. aquatica and A. thaliana. The measurements of photosynthetic activity and the inhibition thereof suggested that the ability of plantlets to regenerate from leaf explants depends on the photosynthetic activity of the explant ( Figure 1B,C and Figure 2B,D). With regard to leaf senescence, although ABA was increased at the proximal side on day 3, it exhibited a decrease from day 8 in R. aquatica ( Figure 3A). This suggests a mechanism to reduce endogenous ABA levels, even if they increase initially, suppressing senescence in R. aquatica leaf explants. JA levels remained low in R. aquatica ( Figure 3B). It is possible that the leaf explants of R. aquatica suppress senescence by maintaining low levels of these phytohormones throughout the explant. On a molecular level, ABA signaling is upregulated during senescence in many plant species [14]. Some studies have reported that NAP is an important positive regulator of leaf senescence in A. thaliana and O. sativa [7][8][9][10]. Overexpression of AtNAP and OsNAP promotes leaf senescence, and knockdown mutants of these genes exhibit delayed senescence [8,9]. Furthermore, SAG113, a negative regulator of ABA signaling, controls water loss in aging A. thaliana leaves [11]. SAG113 is induced by ABA, and the loss-of-function mutant of this gene exhibits delayed leaf senescence [11]. AtSAG113 gene is a direct target gene of AtNAP transcription factor [15]. Orthologs of these genes in R. aquatica were upregulated at time points later than A. thaliana ( Figure 4A). JA is another factor promoting senescence [16]. Exogenous JA promotes leaf senescence in wild-type A. thaliana [17]. In senescent A. thaliana leaves, the transcriptional expression of JA biosynthesis genes, including LOX and OPR, increases gradually [17]. JA biosynthesis genes are regulated by miR319 (JAGGED AND WAVY (JAW)) and TCP transcription factors [12]. miR319 can repress the expression of LOX2 and reduce JA levels through degradation of TCP [12]. This results in delayed leaf senescence, which can be rescued by exogenous JA application [12]. RaTCP genes were downregulated during plantlet regeneration ( Figure 4E-G). Considering that the expression profiles of these orthologous genes are related to ABA and JA, it is possible that senescence is delayed at the transcriptional level in R. aquatica leaf explants. JA is also known to be involved in plant immune responses [18], and the biosynthesis of JA and SA has previously been reported to be upregulated in A. thaliana leaves infected with saprobes and parasites, respectively [19]. The SA level in R. aquatica did not change over time ( Figure 3C). This suggests that SA is not important for plantlet regeneration. On a molecular level, JA is biosynthesized and the JA receptor CORONATINE INSENSITIVE 1 (COI1) transmits downstream signals via binding to JASMONATE ZIM DOMAIN (JAZ) in leaves infected with saprobes [20,21]. In normal leaves, JAZ binds to JA signaling activators, i.e., the helix-loop-helix (bHLH) transcription factors MYC2, MYC3, and MYC4, and suppresses JA signaling [13]. 
In conditions inducing JA production, such as saprobe infection, COI1 degrades JAZ via the proteasome system [13], resulting in the binding of MYC transcription factors to the recognition sequence (G-box; CACGTG) and inducing the expression of JA responsive genes [13]. RaMYC3 and RaMYC4 were upregulated ( Figure 5B,C). These results suggest that the immune response is induced by JA, rather than SA, in R. aquatica leaf explants. Additionally, exogenous ABA changed the color of leaf explants and reduced the efficiency of plantlet regeneration ( Figure 6A,B). This might result from the promotion of senescence of the leaf explant. This proposes that the suppression of ABA-dependent senescence in leaf explants is important for plantlet regeneration. Reddening of leaf explants on the application of exogenous JA ( Figure 6C) might indicate stress from an upregulated immune response rather than from senescence because exogenous JA did not reduce the efficiency of plantlet regeneration. This result proposes that JA has little involvement in senescence. Overall, these results propose the possibility that senescence in R. aquatica leaf explants is delayed by the suppression of ABA levels and a delayed ABA response. In a previous study, we sectioned R. aquatica leaf explants [2] and observed narrow intercellular spaces and tightly packed cells, in both palisade and spongy tissues. Therefore, in the present study, we hypothesized that the internal structure of R. aquatica leaves is different from that of A. thaliana, and that the packed cells allow leaf explants to survive longer by retaining water, a high photosynthetic efficiency, and efficient auxin transport across the membrane. As expected, the intercellular space of R. aquatica was narrow ( Figure 7A). Additionally, we have previously reported a greater accumulation of auxin and its metabolites at the proximal side of R. aquatica leaf explants, when compared with the distal side, on day 1 [2]. To confirm if A. thaliana leaf explants accumulate auxins similarly, endogenous auxin levels at the proximal and distal sides of A. thaliana leaf explants on day 0 and day 1 after leaf cutting were compared. In A. thaliana, IAA and its metabolites were increased at both the distal and proximal sides on day 1 ( Figure 7B). To examine where increased endogenous IAA after day 1 was synthesized, expression profiles of genes related to IAA biosynthesis were analyzed using transcriptome data from A. thaliana. AtYUC was not upregulated in this study using leaf explants from aged leaves of A. thaliana ( Figure 7D,E). A previous study has reported that YUC genes are required for de novo root organogenesis from young leaves of A. thaliana [22]. This suggests that upregulation of YUC genes depends on aging on an individual level in A. thaliana. In addition, our previous study has reported that RaYUC2 were upregulated at the distal side of leaf explants of R. aquatica during plantlet regeneration [2]. R. aquatica leaves may have the mechanism to upregulate orthologs of YUC, even if the individuals are aged, and this mechanism may be a major contribution to plantlet regeneration. AtARF7 was temporarily upregulated ( Figure 7G), and AtARF19 was immediately downregulated ( Figure 7K). This suggests that the difficulty in regenerating plantlets from leaf explants of A. thaliana is attributed to the lack of continuous upregulation of auxin responsive genes. Taken together, the wide intercellular spaces of A. 
thaliana leaves may be related to retaining water and a high photosynthetic efficiency rather than efficient auxin transport. In this study, we focused not only on molecular mechanisms but also on leaf internal structure as factors affecting plantlet regeneration from leaf explants in nature. Some plant species that can regenerate plantlets (for example, Sedum, Saintpaulia, and Peperomia [1]) develop thick leaves, which may possess packed cells and narrow intercellular spaces, and thus remain viable for longer, similar to R. aquatica. Molecular and morphological analyses may effectively explain the viability of leaf explants during plantlet regeneration.

Conclusions

Leaf explants of R. aquatica can survive for a long time. This viability may depend on maintaining photosynthetic activity. ABA may also be related to the viability of R. aquatica leaf explants by delaying senescence at the transcriptional level. Exogenous JA, a phytohormone that is inferred to upregulate the immune response in R. aquatica leaf explants, increased the efficiency of regeneration. Because these results regarding phytohormones only indicate correlations between the viability of leaf explants and the regeneration ability of the plant, further studies will be needed to reveal the underlying causation. Moreover, our findings, including the difference in internal structure between R. aquatica and A. thaliana, can be used as a starting point for future studies to understand the fundamental differences between plant species that can or cannot regenerate in nature.

Plant Materials and Plantlet Regeneration

Rorippa aquatica (Accession "N" [23]) plants were grown and propagated as previously described [2]. In brief, R. aquatica plants were grown for over 50 days in a growth chamber at 30 °C under continuous light at 50 µmol m−2 s−1 provided by fluorescent lamps. Plantlet regeneration was allowed to proceed for approximately 2 weeks at 23 °C under continuous light. In addition, Arabidopsis thaliana "Col-0" plants were grown at 23 °C for 40 days under continuous light. Leaf explants of R. aquatica and A. thaliana were placed on wet paper towels in a plastic tray and covered with plastic wrap.

Photosynthetic Activity Measurement

Photosynthetic activity was evaluated by the maximum quantum yield of photosystem II (Fv/Fm) and the electron transport rate (ETR) using a Mini-PAM (pulse amplitude modulation) portable chlorophyll fluorometer (Walz, Effeltrich, Germany). Minimum fluorescence (Fo) was obtained from the open photosystem II reaction centers in the dark-adapted state using low-intensity light (650 nm, 0.05-0.1 µmol photons m−2 s−1). A saturating pulse of white light (800 ms, 3000 µmol photons m−2 s−1) was applied to determine the maximum fluorescence with closed photosystem II centers in the dark-adapted state (Fm) and during illumination with actinic light (Fm'). The steady-state fluorescence level (Fs) was recorded during actinic light illumination. Fv/Fm and the photosystem II quantum yield (ΦPSII) were calculated as (Fm − Fo)/Fm and (Fm' − Fs)/Fm', respectively [24]. The relative ETR was calculated as ΦPSII × light intensity (µmol photons m−2 s−1); a worked numerical example of these formulas is given at the end of the Methods. Five leaf explants of R. aquatica were placed in dark conditions for 30 min to stabilize the photoresponse before the measurement of photosynthetic activity at each time point. For regenerating R. aquatica plantlets in the dark, leaves of R. aquatica were cut into 30 explants and placed on wet paper towels in a plastic tray.
The plastic tray was then covered with plastic wrap and aluminum foil and maintained at 30 °C for 22 days under continuous light conditions. The plastic tray for the light condition was covered with plastic wrap only. The regenerated roots and shoots were counted separately and presented as box plots.

Quantification of Phytohormones

To assess whether phytohormone levels trigger senescence in leaf explants, endogenous ABA and JA levels in R. aquatica and A. thaliana leaf explants were quantified over time. Furthermore, as R. aquatica plantlets are regenerated only at the proximal side of the leaf explant, we hypothesized that the phytohormone levels differ between the distal and proximal sides, and that senescence is more readily triggered at the distal side than at the proximal side. Therefore, the phytohormone levels at the distal and proximal sides were quantified separately. The distal and proximal sides of five R. aquatica and A. thaliana leaf explants were collected separately at different time points, following a previous report [2]. ABA, JA, and SA were extracted following previously described methods [25,26]. For the quantification of IAA and its metabolites, the distal and proximal sides of R. aquatica and A. thaliana leaf explants were collected separately at different time points as previously described [2]. These were then extracted and detected following a previous report [27].

Analysis of Gene Expression Profiles Using Transcriptome Data

RNA-seq of R. aquatica has previously been performed using the NextSeq500 sequencing platform (Illumina, CA, USA) [2]. Transcriptome data of R. aquatica have previously been reported [2]. RNA-seq of A. thaliana leaf explants was performed at 0 h, 1 h, 3 [2]. RNA was extracted from approximately 3-5 mm square pieces of leaf explant of A. thaliana. Extracted RNA samples (n = 3) were assessed for quality (integrity number ≥ 0.8) using the Agilent RNA6000 Nano assay (Agilent, CA, USA). Libraries were prepared using the Illumina TruSeq® Stranded RNA LT Kit (Illumina, CA, USA) and quantified using the QuantiFluor® dsDNA System (Promega, WI, USA). The quality of the libraries was checked using the Agilent High Sensitivity DNA Assay (Agilent, CA, USA). Libraries were pooled and sequenced on the NextSeq500 sequencing platform (Illumina, CA, USA). The obtained 75-bp single-end reads were mapped to the genome sequence of A. thaliana. Gene expression profiles based on the A. thaliana genome sequence (TAIR10) were plotted using the 'ggplot2' and 'gridExtra' packages in R.

Observation of the Internal Structure of Rosette Leaves

To observe the internal structures of leaves, R. aquatica and A. thaliana were grown for 50 days and 40 days, respectively. To prepare transverse sections, R. aquatica and A. thaliana rosette leaves were embedded in Technovit 7100 resin (Kulzer, Hanau, Germany), and sectioned as previously described [3,28]. Sectioned samples were stained with 0.1% (w/v) toluidine blue (n = 3).

Statistical Analysis

The experimental design was completely randomized. For the measurement of photosynthetic activity and the quantification of endogenous phytohormones, data from at least four independent experiments were averaged. To quantify the regenerated roots and shoots, data from at least 30 leaf explants were analyzed using Student's t-test, where p < 0.05 indicates significance. Exact p values are indicated in the graphs.

Sequence Data

The RNA-seq data of R. aquatica are available from the DNA Data Bank of Japan Sequenced Read Archive (DRA006777).
The RNA-seq data of A. thaliana are available from the DNA Data Bank of Japan Sequenced Read Archive (DRA010684).
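As a minimal numerical illustration of the chlorophyll-fluorescence quantities defined in the Photosynthetic Activity Measurement section above (Fv/Fm, ΦPSII, and relative ETR), the short Python sketch below applies the published formulas to hypothetical fluorescence readings; the values and the helper functions are illustrative assumptions, not data or code from this study.

def fv_fm(fo, fm):
    # Maximum quantum yield of PSII in the dark-adapted state: Fv/Fm = (Fm - Fo) / Fm
    return (fm - fo) / fm

def phi_psii(fm_prime, fs):
    # Effective PSII quantum yield under actinic light: (Fm' - Fs) / Fm'
    return (fm_prime - fs) / fm_prime

def relative_etr(fm_prime, fs, ppfd):
    # Relative electron transport rate: PhiPSII x incident light intensity (umol photons m-2 s-1)
    return phi_psii(fm_prime, fs) * ppfd

# Hypothetical readings for a single leaf explant (arbitrary fluorescence units)
fo, fm = 300.0, 1500.0        # dark-adapted minimum and maximum fluorescence
fm_prime, fs = 900.0, 600.0   # maximum and steady-state fluorescence under actinic light
ppfd = 50.0                   # light intensity, umol photons m-2 s-1

print(f"Fv/Fm        = {fv_fm(fo, fm):.2f}")                     # 0.80
print(f"Phi_PSII     = {phi_psii(fm_prime, fs):.2f}")            # 0.33
print(f"relative ETR = {relative_etr(fm_prime, fs, ppfd):.1f}")  # about 16.7

In practice the Mini-PAM instrument reports these quantities directly; the sketch only makes the arithmetic behind them explicit.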
2020-10-21T13:06:18.925Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "3773e832b6a49f1ef2157312585d5a96b8d61ba3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/9/10/1372/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "46a9290b502d77c5303aff91e003a47f21f90717", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
261337173
pes2o/s2orc
v3-fos-license
Cytotoxic peripheral T-cell lymphomas and EBV-positive T/NK-cell lymphoproliferative diseases: emerging concepts, recent advances, and the putative role of clonal hematopoiesis. A report of the 2022 EA4HP/SH lymphoma workshop Cytotoxic peripheral T-cell lymphomas and EBV-positive T/NK-cell lymphoproliferative diseases were discussed at the 2022 European Association for Haematopathology/Society for Hematopathology lymphoma workshop held in Florence, Italy. This session focused on (i) primary nodal EBV-positive T and NK-cell lymphomas (primary nodal-EBV-TNKL), (ii) extranodal EBV-positive T/NK lymphoproliferative diseases (LPD) in children and adults, (iii) cytotoxic peripheral T-cell lymphomas, NOS (cPTCL-NOS), EBV-negative, and (iv) miscellaneous cases. Primary nodal-EBV-TNKL is a newly recognized entity which is rare, aggressive, and associated with underlying immune deficiency/immune dysregulation. All cases presented with lymphadenopathy but some demonstrated involvement of tonsil/Waldeyer’s ring and extranodal sites. The majority of tumors are of T-cell lineage, and the most frequent mutations involve the epigenetic modifier genes, such as TET2 and DNMT3A, and JAK-STAT genes. A spectrum of EBV-positive T/NK LPD involving extranodal sites were discussed and highlight the diagnostic challenge with primary nodal-EBV-TNKL when these extranodal EBV-positive T/NK LPD cases demonstrate predominant nodal disease either at presentation or during disease progression from chronic active EBV disease. The majority of cPTCL-NOS demonstrated the TBX21 phenotype. Some cases had a background of immunosuppression or immune dysregulation. Interestingly, an unexpected association of cPTCL-NOS, EBV-positive and negative, with TFH lymphomas/LPDs was observed in the workshop cases. Similar to a published literature, the genetic landscape of cPTCL-NOS from the workshop showed frequent mutations in epigenetic modifiers, including TET2 and DNMT3A, suggesting a role of clonal hematopoiesis in the disease pathogenesis. Supplementary information The online version contains supplementary material available at 10.1007/s00428-023-03616-4. Introduction Cytotoxic T lymphocytes (CTLs) and natural killer (NK) cells represent subsets of immune cells with a major role in host epithelial immune surveillance.Both populations display a similar approach of target cell "killing" by releasing granule-associated cytotoxic proteins into the immunological synapse.However, their mechanism of target recognition is distinct, allowing for their complementarity in ensuring host defense [1].The CTLs are mostly represented by alpha/ beta CD8+ T cells and in minor proportion by gamma/delta T cells and CD4+ T cells [2,3]. 
Mature T-and NK-cell neoplasms displaying cytotoxic phenotype are uncommon and highly heterogeneous, with 21 entities described by the 5th edition of the WHO classification and the 2022 International Consensus Classification (ICC) [4,5].This diversity likely reflects the array of normal cytotoxic cells, their functional plasticity and differentiation state, disease localization, and association with pathogens [e.g., Epstein-Barr virus (EBV)].They are commonly extranodal, following the normal distribution of cytotoxic lymphocytes with few entities presenting primarily in the Extended author information available on the last page of the article / Published online: 30 August 2023 Virchows Archiv (2023) 483:333-348 lymph node (LN).The improved knowledge of T-cell ontogeny combined with integrated genomic and transcriptomic approaches have led to a better understanding of the putative cell-of-origin for some of these entities.Although most cytotoxic T-cell lymphomas are highly aggressive, broadly speaking the cytotoxic phenotype alone cannot accurately predict a patient's outcome.Some T-and NK-cell lymphoproliferative disorders follow an indolent clinical course and a subset of them may progress/transform to aggressive diseases [e.g., localized/indolent forms of chronic active EBV disease (CAEBVD) and T-cell large granular lymphocytic leukemia (T-LGL)] [6][7][8][9]. Session 4 of the Lymphoma Workshop (LYWS) organized by the European Association for Haematopathology and Society for Hematopathology (EA4HP-SH) in Florence September 2022 was dedicated to cytotoxic T-cell lymphoma (excluding skin) and EBV-positive nodal T/NKcell lymphoma (excluding extranodal NK/T cell lymphoma, nasal type).A total of 35 cases were reviewed by the expert panel members, and in this paper, they were grouped into the following categories: (1) Primary nodal EBV-positive T/ NK-cell lymphoma, (2) extranodal EBV-positive T/NK lymphoproliferative disorders (LPD) in childhood and adults, (3) cytotoxic T-cell lymphoma, EBV-negative (4) miscellaneous cytotoxic PTCL, and (5) findings from the workshop. Based on the submitted cases, this workshop report aims to discuss and summarize the distinctive features of primary nodal EBV+ T/NK-cell lymphoma and the role played by the underlying immune deficiency/impairment and clonal hematopoiesis (CH) in cytotoxic TCL biology and their potential progression from an indolent T-LPD.Furthermore, the novel and unusual association of cytotoxic TCL with TFH LPDs/lymphomas will be addressed. 
EBV-positive nodal T-and NK-cell lymphoma (primary nodal EBV+ T/NK-cell lymphoma) Nodal EBV-positive T-and NK-cell lymphoma (primary nodal-EBV-TNKL) is now recognized as a distinct entity in 5th edition of the WHO lymphoma classification; previously, it was subsumed as a subtype under the entity of PTCL-NOS [4].In the 2022 ICC, it is listed as a provisional entity and termed "primary nodal EBV-positive T/NK-cell lymphoma" to highlight the primary nodal disease origin and to distinguish them from other T/NK EBV+ LPDs that may infiltrate predominantly lymph nodes [5].This disease is rare and occurs mostly in older adults from East Asia [10][11][12][13][14][15].The tumor is mostly of T-cell lineage and is characterized by frequent loss of 14q11.2, and upregulation of immune pathways, NFκB and PD-L1 [15,16].Most cases show type II EBV latency pattern.The disease has an aggressive behavior with median overall survival ranging 2.5-8.0 months [12][13][14][15]17].Despite its aggressiveness, the tumor demonstrates lower genomic instability compared to extranodal NK/T-cell lymphoma (ENKTL), nasal type, and PTCL-NOS [16]. A total of 8 cases, 5 females and 3 males were submitted to the workshop (Supplementary Table 1).The age ranged from 46 to 81 years (median 54.5 years).Five patients were Asians, and the remaining 3 were Caucasians.Consistent with the literature, an association with underlying immune deficiency or conditions which may impair immune responses was present in some cases, including HIV (n = 1), hepatitis B (n = 2), and prior history of angioimmunoblastic T-cell lymphoma (AITL) (n = 2, cases LYWS-1190 and LYWS-1396) [17][18][19].Case LYWS-1190 from RKH Au-Yeung was a typical example of primary nodal-EBV-TNKL occuring in a 46-year-old female who had a prior history of AITL 2 years ago.Case LYWS-1396 submitted by Wang L occurred in a patient with AITL diagnosed in 2012 and subsequently developed primary nodal-EBV-TNKL in 2019 which was clonally unrelated to the AITL (see section 6.1 for further discussion). All the cases presented with lymphadenopathy, but some cases additionally demonstrated other sites of involvement, including spleen (n = 1), tonsil (n = 1), and extranodal sites such as pleural effusion (n = 1), skin (n = 1), and lacrimal glands (n = 1).Notably, nasal disease was not detected.The involvement of the tonsil/Waldeyer's ring in rare cases of primary nodal-EBV-TNKL may be interpreted as upper aerodigestive tract involvement and be mistaken for ENKTL.This is illustrated by LYWS-1207 submitted by K. Ofori, a 67-year-old Caucasian female with extensive lymphadenopathy and involvement of the spleen, tonsil, and lacrimal glands.The tumor demonstrated monoclonal TR gene rearrangement and mutations of TET2 and DNMT3A, which are uncommon in ENKTL.In the study from Wai CMM et al., 4 of 25 cases of primary nodal-EBV-TNKL presented primarily with nodal disease and also displayed tonsil/Waldeyer's ring involvement [16].In all 4 cases, the tumors were of T-cell origin and 3 demonstrated TET2 and/or DNMT3A mutations (supplementary table 1).Interestingly, Nicolae et al. 
described 7 cases of EBV-positive cytotoxic PTCL and 3 cases had "Ear-Nose-Throat" involvement, although it was uncertain if the nasal site was involved by tumor [20].These 3 cases revealed both TET2 and DNMT3A mutations.It is worth noting that the tonsils, Waldeyer's ring, and spleen are considered nodal tissue, not extranodal tissues, according to the Lugano classification and staging of lymphoma [21].Therefore, the involvement of the tonsil/Waldeyer's ring does not necessarily exclude the diagnosis of primary nodal-EBV-TNKL.Similarly, primary nodal-EBV-TNKL should also be distinguished from ENKTL with nodal involvement.In such cases, a thorough assessment of clinical, histopathological, and molecular features is necessary to distinguish primary nodal-EBV-TNKL involving extranodal sites and tonsil/Waldeyer's ring from ENKTL with nodal involvement.The presence of primary nodal disease with tumors involving mainly lymph nodes, T-cell lineage, the absence of nasal disease, and the presence of epimutations would favor the diagnosis of primary nodal-EBV-TNKL over ENKTL. Six of the 8 cases submitted demonstrated medium to large tumor cells, and one case had a mixed small to large cell morphology.In the case LYWS-1138 submitted by L Goh, the tumor cells were large and showed CD8+/ CD56−/TCRgamma+ phenotype (Fig. 1A-H).LYWS-1190 submitted by RKH Au-Yeung demonstrated a tumor composed of small cells with abundant histiocytes in the background, resembling lymphoepithelioid (Lennert) lymphoma, and a low Ki-67 proliferation index (Fig. 1I-M).A rich inflammatory background was present in three cases and necrosis is seen in two cases.Unlike ENKTL, an angiocentric growth was only present in 1 out of 8 cases.Case LYWS-1176, submitted by Y Zhang, represented an unusual example of a composite primary nodal-EBV-TNKL and classic Hodgkin lymphoma (CHL). Phenotypically, all eight cases were positive for CD3 and/or CD2 and demonstrated an activated cytotoxic [15].Therefore, the expression of CD8 and CD56 can provide a clue to the T vs NK lineage especially when clonality testing is not available. Based on a combination of positive expression of TCR alpha/beta and/or TCR gamma/delta using immunohistochemistry (IHC) and/or monoclonal TR gene rearrangement, 6 out of the 7 cases analyzed show T-cell lineage.NK-cell origin is defined as the absence of IHC expression of TCR alpha/beta and TCR gamma/delta, absence of clonal TR gene rearrangement, and frequent expression of CD56.Only one case (14%), LYWS-1227 from C Bárcena, was likely of NK-cell lineage as the tumor revealed negative expression for CD4, CD8, CD56, TCR alpha/beta, and TCR gamma/delta and was also polyclonal for TR gene rearrangement.Of the 6 cases tested by IHC, 3 expressed TCR alpha/beta, one expressed TCR gamma/ delta, and the remaining were silent for TCR alpha/beta and TCR gamma/delta. An issue raised during the workshop but remains unresolved was the biologic relationship between primary nodal-EBV-TNKL and EBV-negative cytotoxic PTCL NOS (cPTCL-NOS).Preliminary data suggests that primary nodal-EBV-TNKL has worse outcome and lower genomic instability compared to EBV-negative cPTCL-NOS [16].However, their mutational profiles are similar with frequent mutations of epigenetic modifier genes.Further studies are needed to clarify if these 2 entities are indeed related. 
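The T- versus NK-lineage criteria applied to the workshop cases above can be condensed into a simple rule: T-cell lineage is supported by TCR alpha/beta or gamma/delta expression on IHC or by a monoclonal TR gene rearrangement, whereas NK-cell lineage is favored when both TCR stains are negative, the TR rearrangement is polyclonal, and (frequently) CD56 is expressed. The Python sketch below is only an illustrative restatement of that rule, not a validated diagnostic algorithm, and the dictionary keys are hypothetical.

def assign_lineage(case):
    # Condensed restatement of the lineage criteria described in the text (illustrative only).
    t_cell_evidence = (case.get("tcr_alpha_beta_ihc") == "positive"
                       or case.get("tcr_gamma_delta_ihc") == "positive"
                       or case.get("tr_rearrangement") == "monoclonal")
    if t_cell_evidence:
        return "T-cell lineage"
    nk_evidence = (case.get("tcr_alpha_beta_ihc") == "negative"
                   and case.get("tcr_gamma_delta_ihc") == "negative"
                   and case.get("tr_rearrangement") == "polyclonal")
    if nk_evidence:
        return "likely NK-cell lineage (CD56 often, but not always, positive)"
    return "indeterminate - requires full clinicopathological correlation"

# Example loosely modeled on case LYWS-1227 as described above
example = {"tcr_alpha_beta_ihc": "negative", "tcr_gamma_delta_ihc": "negative",
           "tr_rearrangement": "polyclonal", "cd56": "negative"}
print(assign_lineage(example))  # likely NK-cell lineage (CD56 often, but not always, positive)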
There is currently limited data on the treatment of this rare and aggressive disease.The outcome of patients treated with etoposide and chemotherapy regimens with and without anthracycline is poor [14,22,23].It remains uncertain if patients with nodal-EBV-TKNL will show a similar favorable response to L-asparaginase-based regimens, such as SMILE (steroid, methotrexate, ifosfamide, l-asparaginase, and etoposide) as observed in patients with advanced ENKTL [24].Overexpression of PD-L1 has been reported in ENKTL [25], and anti-PD1 immunotherapy has been shown to be effective in patients with relapsed and refractory ENKTL [26].Interestingly, PD-L1 protein is significantly overexpressed in both tumor and non-tumor cells in primary nodal-EBV-TNKL compared to ENKTL, and this PD-L1 upregulation in primary nodal-EBV-TNKL may have potential therapeutic implications for anti-PD1 treatment [16]. In conclusion, the key take-home messages for primary nodal-EBV-TNKL from the workshop are summarized below: 1. Commonly primary nodal-EBV-TNKL is associated with underlying immune deficiency or conditions which may impair immune responses.2. Primary nodal-EBV-TNKL shows frequent mutations of epigenetic modifier genes, including TET2 and DNMT3A, suggesting a possible role of CH. 3. Primary nodal-EBV-TNKL can involve extranodal sites and, less commonly, the Waldeyer's ring and should be distinguished from ENKTL with nodal disease.In such cases, it is essential to perform a thorough assessment of clinical, histopathological, and molecular features to distinguish them from ENKTL with nodal involvement.The primary nodal presentation with tumor mainly involving lymph nodes, T-cell origin, lack of nasal involvement, and presence of mutations in TET2 and DNMT3A supports the diagnosis of primary nodal-EBV-TNKL. EBV-positive extranodal T/NK-cell lymphoproliferations A total of 9 cases of EBV-positive T/NK-cell LPD involving extranodal sites in children and adults were submitted to the workshop (Supplementary Table 2).Three cases demonstrated prominent nodal involvement either at disease presentation or during transformation and illustrate the potential diagnostic difficulty with primary [30,31].Although reported in the literature, the term CAEBVD should not be used in the context of immunodeficiency, and therefore, a descriptive term such as systemic EBV+ T-cell LPD is preferable in this case.There were episodes of EBV reactivation, but the patient was otherwise clinically well at the last follow up 10 years later. 
Two cases submitted in this group illustrated the diagnostic challenge between CAEBVD and a self-limiting EBV+ LPD or infectious mononucleosis (IM) with a protracted clinical course [32]. One case occurred in an elderly patient (LYWS-1126 from X Huang). Based on the elevated EBV-IgM and IgG levels, it is likely that the patient initially had acute EBV infection or EBV reactivation [33], which subsequently developed a protracted clinical course. Since a double stain for EBER/CD79a and EBER/CD3 was not performed to confirm the lineage of the EBV-infected cells, the panel acknowledged the submitter's diagnosis that a mild form of CAEBVD cannot be excluded due to the prolonged clinical course. A second case of a self-limiting EBV-associated LPD was also submitted by L Zhang (LYWS-1348). This occurred in a 21-year-old man with lymphadenopathy and splenomegaly. A cytotoxic and polyclonal CD8+ T-cell proliferation was present in the lymph node and spleen. The patient's symptoms resolved spontaneously after 4 months without treatment, which is unusual for CAEBVD. There were no manifestations of HLH, which makes the diagnosis of EBV-associated HLH unlikely. The panel acknowledged the differential diagnosis of IM and CAEBVD at the time of initial diagnosis. To help distinguish between the two, a double stain for EBER with CD3 and CD20 or CD79A would have been needed, as EBV typically infects B cells and rarely CD8+ T cells in IM [34], and T or NK cells in CAEBVD. The T cells in CAEBVD are more often CD4-positive, while those in IM are often CD8-positive [32,35]. Additionally, the positive expression of LMP1 and EBNA2 supports the diagnosis of IM over CAEBVD. However, due to the limited material available for further workup and the small number of EBER-positive cells present in this case, the panel favored a self-limiting EBV+ LPD, most likely IM. During the workshop, the panel discussed the differential diagnosis of EBV-positive T/NK LPD (Table 1) and reached several important conclusions: 1. Depending on the T- or NK-cell lineage, CAEBVD can progress to a more aggressive EBV+ T/NK-cell lymphoma or leukemia, such as SEBVTCL, ENKTL, or ANKL; these aggressive diseases should not be diagnosed as primary nodal-EBV-TNKL. 2. Both SEBVTCL and ANKL can have prominent lymph node involvement, either at presentation or following transformation/progression from localized/indolent forms of CAEBVD, and should not be mistaken for primary nodal-EBV-TNKL. The presence of systemic (leukemic) disease and HLH distinguishes SEBVTCL and ANKL from primary nodal-EBV-TNKL. In addition, an NK origin, leukemic disease and/or BM involvement, and a complex karyotype will favor ANKL over primary nodal-EBV-TNKL. 3.
The majority of IM is self-limiting, and patients usually recover without complications. These cases do not pose diagnostic challenges with CAEBVD. Less commonly, IM can develop a protracted course lasting more than 3 months and may be mistaken for CAEBVD. In this context, it is important to determine the lineage of the EBV-infected cells.

Cytotoxic PTCL-NOS, EBV-negative

As described in a recent paper [20], cytotoxic PTCL-NOS (cPTCL-NOS) is defined by the expression of at least one cytotoxic molecule in more than 50% of tumor cells, and it frequently presents in a background of impaired immunity, including malignancies, autoimmune diseases, and other immune disorders. Cytotoxic PTCL-NOS is associated with mutations in epigenetic modifier genes and signaling pathways [36] and shows an activated cytotoxic phenotype [37]. These tumors often fit into the PTCL-TBX21 subgroup and have a poor prognosis [38]. In the workshop, 10 cases were submitted with the diagnosis of cytotoxic T-cell lymphoma. Nine cases represented examples of cPTCL-NOS (supplementary table 3). All 9 cases were EBV-negative. There was a slight predominance of male patients (M:F 5:4) with a median age of 57 years (range 13-78 years). Three patients had a background of immune dysregulation. Histologically, the cases showed effacement of the nodal architecture, and the neoplastic cells were predominantly medium to large. In contrast to the other cases, the cells in case LYWS-1367 submitted by H Shao were small and without atypia. In addition, the Ki67 proliferation index was low, suggesting that it may represent a form of indolent PTCL [39]. This case also displayed aberrant positivity for CD20, a well-recognized phenomenon in mature T-cell malignancies [40]. Immunophenotypically, 5 cases were CD8+, 3 cases were CD4+/CD8+, and 1 case was CD4+. Of the cases tested, 3 expressed TCR alpha/beta, and 2 cases were TCR silent. Five out of 8 cases had an activated cytotoxic phenotype, and all 3 cases with available data were classified into the PTCL-TBX21 subtype. CD56 expression was rare, with only 1 case positive. CD30 positivity was present in 5 out of 8 cases. In agreement with the literature, NGS identified mutations in epigenetic modifier genes in all 5 cases with material available for molecular studies [20] (supplementary table 3). Information regarding the outcome was available in 6 of 9 cases. Case LYWS-1235 achieved complete remission, and three cases were in partial response after 8, 12, and 60 months of follow-up. Two patients died of lymphoma, LYWS-1200 after 6 months and case LYWS-1213 after 19 months from diagnosis. The case LYWS-1416, submitted by F Gutierrez-Llamas, provides a remarkable illustration of large granular lymphocytic leukemia (LGL) transformation, shedding light on the possibility of cPTCL-NOS progressing from an indolent T-cell leukemia. This phenomenon has been sparsely documented in the existing literature, with only a handful of cases reported [6,7]. The case highlights the clonal relationship between the two LPDs, and whole exome sequencing analysis further suggested a hierarchical multihit evolution, with early epigenetic events possibly playing a significant role (Fig. 4). This case LYWS-1416 and another case of LGLL with transformation from the French LGLL registry have recently been published, and the authors discussed the pathogenesis of LGLL transformation [9].
The case LYWS-1200, submitted by G Frigola, is a unique example of a cPTCL-NOS, where the tumor cells co-expressed cytotoxic (granzyme B, TIA1, and perforin) and TFH (PD1, CXCL13 and BCL6) markers within the same cells.This can be clearly observed in Fig. 4, where the double stain for PD1 and TIA1 highlights this colocalization.The exact origin of these cells is unclear, but it is possible that they may represent distinct subtypes of TFH cells with cytotoxic activity [41][42][43][44]. The last case (LYWS-1462 submitted by D Dueñas) was a good example of T-prolymphocytic leukemia based on the history of the patient (a white blood cell count of 56.9 × 10 9 /L), morphology, and TCL-1 expression by IHC. The most important messages gleaned from cPTCL-NOS cases in the workshop and recent literature are summarized as follow: 1. cPTCL-NOS, EBV negative, frequently present in a background of impaired immunity, similar to primary nodal-EBV-TNKL.2. The majority of cases are subclassified into PTCL-TBX21 subtype based on CXCR3, TBX21, CCR4, and GATA3 expression pattern.3. The mutational landscape of cPTCL-NOS, EBV negative, is similar to primary nodal-EBV-TNKL and is characterized by mutation of epigenetic modifiers, suggesting a potential role of CH in the lymphoma pathogenesis. Miscellaneous cytotoxic PTCL The workshop received 4 cases of miscellaneous cytotoxic T-cell proliferations (supplementary table 4).Case LYWS-1131 from A Tzankov illustrated an unusual example of primary cutaneous gamma delta T-cell lymphoma that presented with a solitary cutaneous lesion.Case LYWS-1418 from J Coviello was a case of CD30+ large cell lymphoma with features of ALK-negative anaplastic large cell lymphoma (ALCL) and possible PAX5 expression in rare CD30+ tumor cells, raising the differential diagnosis of CHL with expression of T-cell markers.TR gene rearrangement performed by the panel revealed monoclonal rearrangement of TRB and TRG, confirming the diagnosis of ALK-negative ALCL.Case LYWS-1293 submitted by J Gao was a good example of EBV-negative ANKL.The case presented with HLH, systemic disease, NK-origin, and complex karyotype [45].As reported in the literature, these cases are indistinguishable clinically and pathologically from EBV-positive ANKL.Case LYWS-1467 from E.I. Cytotoxic PTCL NOS associated with TFH lymphoproliferations One of the interesting findings in this workshop was the remarkable, and unexpected, association of cPTCL-NOS with TFH lymphomas/LPDs.The workshop received six cases that illustrated the association of a TFH proliferation with a cytotoxic PTCL, both EBV-positive and negative (supplementary table 5 and supplementary table 5-mutations).Interestingly, three of them had a history of immune dysregulation (methotrexate treatment, hepatitis B, and rituximab-fludarabine cyclophosphamide for chronic lymphocytic leukemia (CLL)).cPTCL-NOS often occurs in patients who are immunocompromised, and this immune dysfunction likely contributes to lymphomagenesis [16,20,47].In three of the cases, the initial LPD was the TFH lymphoma while in the other three cases, the LPD that presented first was the cPTCL-NOS.In five cases with available material for molecular studies, the TR gene rearrangement showed that the TFH and the cytotoxic proliferations were clonally unrelated. 
Interestingly, two of the six TFH LPDs were clonal but the infiltration was focal without effacement of the nodal architecture.In these two cases, the abnormal clone was originally identified by flow cytometry analysis.One example of these 2 cases is case LYWS-1402 submitted by M Klimkowska.The patient had a background of immune dysregulation, given the history of CLL, and developed enlargement of an axillary lymph node in 2014.Flow cytometry analysis, IHC, and TR rearrangement were performed, and a TFH clonal proliferation was detected; the patient received corticosteroids and the symptoms improved.The panel agreed with the submitter that the morphological changes were insufficient to render a diagnosis of TFH lymphoma.In 2017, the patient presented with generalized lymphadenopathy and the excised lymph node displayed a diffuse infiltrate of large pleomorphic cells partially effacing the nodal architecture.The T-cell proliferation was positive for CD4, CD56, and perforin, and monoclonal for TR gene rearrangement.This case underscores the importance of flow cytometry, IHC, and molecular testing (TR and NGS) in identifying and characterizing a small population of TFH cells for accurate diagnosis.TFH proliferations can be challenging to diagnose as they may represent either a smoldering or an early manifestation of AITL [48].Furthermore, expansions of reactive TFH cells can be seen in reactive lymphadenopathies and B-cell lymphomas, such as nodal and extranodal marginal zone lymphomas [49].Unfortunately, material was not submitted for case LYWS-1402 (2014 lymph node) for additional NGS testing.The panel would support the diagnosis of TFH lymphoma for the clonal TFH proliferation in the 2014 lymph node biopsy if typical mutations of TFH lymphoma, such as RHOA mutation, could be demonstrated. Case LYWS-1396 submitted by L Wang represented a case of primary nodal-EBV-TNKL occurring in the context of an untreated TFH lymphoma of angioimmunoblastic-type (AITL) with an indolent course spanning 6 years.This case highlights the potential for primary nodal-EBV-TNKL to develop presumably in the presence of immune dysfunction related to TFH proliferation/lymphoma and possibly aggravated by EBV reactivation, even in the absence of treatment [50].In this case, the two lymphomas were clonally unrelated.Mutations in TET2, RHOA, and IDH2 were present in the AITL.However, NGS did not detect mutations in the primary nodal-EBV-TNKL confirming further that these two neoplasms were not related [20] (Fig. 5). TBX21/GATA3 subtypes The definition of PTCL-NOS as a mature T-cell lymphoma not meeting the criteria for other specific entities remains unchanged in the 2022 ICC [5] and the 5th WHO classification [4].Gene expression profiling studies have identified two major molecular subgroups: one overexpressing TBX21 and the other overexpressing GATA3 [51].Compared to PTCL-TBX21, PTCL-GATA3 has a worse prognosis, with higher genomic complexity, including 17p del (TP53), 9p del (CDKN2A), and 10p del (PTEN), and gains of STAT3 and MYC.PTCL-TBX21 has a better prognosis, less genomic complexity, and a higher frequency of mutations involving epigenetic modifying genes [37].An immunohistochemical algorithm using antibodies to TBX21, CXCR3, GATA3, and CCR4 has been proposed to stratify PTCL-NOS into TBX21 and GATA3 subgroups [38]. The panel performed the IHC algorithm [38] in 9 of 21 cases of cytotoxic PTCL, both EBV+ and EBV−, AITL submitted to the workshop with available material.The results are detailed in Table 2. 
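The four-marker immunohistochemical stratification applied above (TBX21, CXCR3, GATA3, and CCR4 [38]) can be thought of as scoring a TBX21 arm against a GATA3 arm. The exact decision steps and cut-offs of the published algorithm are not reproduced in this report, so the following Python sketch is only a hypothetical simplification for illustration, not the algorithm of reference [38].

def ihc_subtype(tbx21, cxcr3, gata3, ccr4):
    # Illustrative two-arm scoring; each argument is True for a positive stain.
    tbx21_arm = int(tbx21) + int(cxcr3)
    gata3_arm = int(gata3) + int(ccr4)
    if tbx21_arm > gata3_arm:
        return "PTCL-TBX21"
    if gata3_arm > tbx21_arm:
        return "PTCL-GATA3"
    return "unclassifiable"

# The DNMT3A R882P case described below was TBET-, CXCR3-, GATA3+, CCR4-
print(ihc_subtype(tbx21=False, cxcr3=False, gata3=True, ccr4=False))  # PTCL-GATA3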
Seven out of the 9 cases analyzed were classified into the TBX21-subtype, one case corresponded to the GATA3-subtype, and one case was unclassifiable.Our findings are in line with previous reports describing the expression of cytotoxic markers to be more frequently associated with PTCL-TBX21 compared with PTCL-GATA3 [37,38]. The mutational profile of PTCL-NOS is enriched in TET2 and DNMT3A mutations.Mutations in these two genes cooccurred, suggesting an oncogenic cooperation, as observed in TFH lymphomas [52].The loss of 5-hydroxymethylcytosine due to TET2 mutation and DNA hypomethylation because of DNMT3A loss in critical target genes may act synergistically in promoting lymphomagenesis [53,54].While TET2 mutation occurs at near-equal frequencies in PTCL-GATA3 and PTCL-TBX21, DNMT3A, TET,1 and TET3 mutations were more commonly detected in PTCL-TBX21 [37]. Herek TA et al. recently reported the association of DNMT3A mutations with PTCL-TBX21 subtype and demonstrated that the R882 variant particularly correlated with cytotoxic differentiation and inferior clinical outcome [55].One of 3 cases of primary nodal-EBV-TNKL with DNMT3A mutation, LYWS-1207, submitted by K Ofori, demonstrated the DNMT3A R882P variant but displayed the PTCL-GATA3 phenotype based on IHC (TBET−, CXCR3−, GATA3+, CCR4−).The distinct prevalence of the DNMT3A R882H/C variant in PTCL-NOS compared to AITL is intriguing and requires further investigations [55]. Role of clonal hematopoiesis (CH) CH is common among patients with lymphoma, and its frequency increases with age [56].TFH lymphomas frequently harbor TET2 and DNMT3A mutations, and identical mutations have been identified in both the malignant T-cells and the myeloid component of patients, suggesting a common ancestral clone with subsequent divergent evolution [57].Notably, our analysis of 61 cytotoxic PTCL cases, EBV+ and EBV−, from the workshop and 2 recent studies [16,20] revealed 44 out of 61 cases (72%) with mutations of epigenetic modifier genes.Co-occurrence of TET2 and DNMT3A mutations were present in 23%.These findings suggest a potential role of CH in the pathogenesis of cytotoxic PTCL.In addition, one workshop case (LYWS-1094 submitted by A Vogelsberg) nicely illustrated the role of CH in the development of PTCLs with different phenotypes and origin from a common progenitor.This case provided evidence for a divergent evolution of two clonally-unrelated T-cell lymphomas (cPTCL-NOS and AITL) originating from a common progenitor, which shared the same mutations in TET2 and DNMT3A (Fig. 6).These mutations were detected in the BM biopsies, which were morphologically and molecularly negative for lymphoma, suggesting that CH is not only a precursor of AITL but also a precursor of cPTCL-NOS.Notably, a recent case report by Attygalle et al. described two cases showing parallel evolution of two distinct and neoplastic lymphoid proliferations from a common TET2-DNMT3A mutated hematopoietic progenitor cell population [50].It remains uncertain if the other 5 workshop cases showing the association between TFH lymphoma/LPD and cPTCL-NOS are derived from a common progenitor (Supplementary table 5-mutations). 
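The pooled mutation frequencies quoted above (44 of 61 cases, 72%, carrying an epigenetic-modifier mutation, and TET2/DNMT3A co-mutation in 23%) are straightforward tabulations over the combined case set. A minimal sketch of that bookkeeping is shown below; the case-by-gene table is a hypothetical placeholder, not the actual workshop or literature data.

# Hypothetical mutation calls per case; the real per-case data are in the supplementary tables.
cases = [
    {"TET2": True,  "DNMT3A": True,  "STAT3": False},
    {"TET2": True,  "DNMT3A": False, "STAT3": True},
    {"TET2": False, "DNMT3A": False, "STAT3": False},
    {"TET2": True,  "DNMT3A": True,  "STAT3": False},
]
epigenetic_genes = ("TET2", "DNMT3A")

n_any = sum(any(c[g] for g in epigenetic_genes) for c in cases)
n_both = sum(all(c[g] for g in epigenetic_genes) for c in cases)

print(f"epigenetic-modifier mutation: {n_any}/{len(cases)} ({100 * n_any / len(cases):.0f}%)")
print(f"TET2 + DNMT3A co-mutation:   {n_both}/{len(cases)} ({100 * n_both / len(cases):.0f}%)")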
Conclusion

Primary nodal-EBV-TNKL is a rare and aggressive lymphoma characterized by T-cell lineage, lack of nasal involvement, low genomic instability, frequent loss of 14q11.2, and upregulation of immune pathways, NFκB, and PD-L1. Primary nodal-EBV-TNKL can occasionally involve the tonsils/Waldeyer's ring and be misdiagnosed as upper aerodigestive tract involvement by ENKTL. In addition, EBV+ T/NK LPD involving extranodal sites in children and adults, such as SEBVTCL and ANKL, can occasionally show prominent LN involvement either at disease presentation or following transformation from CAEBV disease and should not be diagnosed as primary nodal-EBV-TNKL, which affects elderly patients and is often associated with immunosuppression. Based on the workshop cases and recent literature, the mutational landscape of primary nodal-EBV-TNKL is similar to that of cPTCL-NOS and is characterized by frequent mutation of epigenetic modifiers, such as TET2 and DNMT3A, and of JAK/STAT pathway genes, suggesting a potential role of CH and the JAK/STAT pathway in the pathogenesis of cytotoxic PTCL. Whether primary nodal-EBV-TNKL represents an EBV-positive counterpart of cPTCL-NOS or a distinct entity requires further study. cPTCL-NOS are associated with settings of immune dysregulation, derive from mature T lymphocytes, commonly alpha/beta, display an activated cytotoxic phenotype, and the majority are of the PTCL-TBX21 subtype. These cases show frequent mutations in epigenetic modifier genes. The cases submitted to the workshop underline the need for close examination of T-cell proliferations to identify and better characterize the TFH and cytotoxic proliferations. The workshop cases have not only identified a novel association between TFH LPD/lymphoma and cPTCL-NOS, but also highlighted the potential role of CH in the development of neoplastic proliferations of different phenotypes. More cases and studies are warranted to further understand this important observation.

Fig. 1 Histologic features of nodal EBV-positive T and NK-cell lymphoma. a-h Case LYWS-1138, courtesy of L. Goh. a The tumor shows areas of necrosis and diffuse sheets of neoplastic cells. b The tumor cells are large with irregular vesicular nuclei, coarse chromatin, and prominent nucleoli. Immunohistochemistry reveals positive expression for c CD3, d CD8, f TIA1, g TCRgamma, and h EBER and negativity for e CD56. i-m Case LYWS-1190, courtesy of R.K.H. Au-Yeung. i The tumor reveals predominantly small cells with abundant histiocytes in the background, resembling lymphoepithelioid (Lennert) lymphoma. j Tumor cells display small monotonous nuclei with mild nuclear atypia and indistinct nucleoli. They are positive for k TCRβF1. l The Ki67 proliferation index is low. m EBER/CD8 double stain demonstrates that the neoplastic cells are positive for CD8 and EBER

Fig. 2 Mutational landscape of nodal cytotoxic peripheral T-cell lymphomas (PTCL) based on workshop cases and cases from the recent literature with next generation sequencing data (Wai CMM, Haematologica (2022) 107(8):1864; Nicolae A, Modern Pathology (2022) 35:1126-1136). Histogram plot (a) and heatmap (b) illustrate the frequency of mutations in all cases of cytotoxic PTCL, both EBV-positive and EBV-negative. A total of 61 cases analyzed revealed that the most common mutations involve epigenetic modifiers, such as TET2 and DNMT3A, JAK/STAT pathway genes, and TCR signaling genes

Fig. 4, Fig. 5 LGL transformation case (LYWS-1416 submitted by F. Gutierrez-Llamas).
a The BM biopsy in this 78-year-old male shows an interstitial infiltrate of small lymphocytes b expressing CD3. The lymphocytes are also positive for CD8 and Granzyme B. c The lymph node shows an atypical infiltrate of large cells (H&E) that are positive for d CD8, CD56, CD30, Granzyme B, and p53. Clonal analysis revealed the same TR gamma rearrangement in both samples. e NGS confirms the presence of the same TET2 mutations in both sites, without the STAT3 mutation, and the presence of new mutations (JAK3 and KRAS) in the transformation sample (f and g). Case LYWS-1200 submitted by G Frigola displays the co-expression of cytotoxic and TFH markers in the same cells [double stain PD1 (brown), TIA-1 (red)]

Fig. 6 Case LYWS-1094, presented by A. Vogelsberg, describes the divergent evolution of two clonally unrelated T-cell lymphomas from clonal hematopoiesis. A 71-year-old woman presented with enlarged cervical LNs. The biopsy shows diffuse infiltration of large cells (a). Immunohistochemistry illustrates positivity for CD3 (b), TIA-1 (c), CD8 (not shown), and βF1 (not shown). The patient received CHOP therapy and achieved complete remission. Five months later, the patient relapsed, and the lymph node reveals a mixed infiltration of lymphoid cells (d) and proliferation of high endothelial venules (e). TR beta sequencing of both lymphomas confirms the presence of distinctly different rearrangements (f and g)

Table 1 Differential diagnosis of EBV-positive T and NK-cell lymphoproliferative diseases

Table 2 Phenotypic features of cytotoxic PTCL-NOS based on the workshop cases and cases published in the recent literature (Nicolae et al. and Wai et al.) [20,16]
2023-08-31T06:18:32.067Z
2023-08-30T00:00:00.000
{ "year": 2023, "sha1": "3919a9678e6a8f12dc05252d53baac952f0e5994", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00428-023-03616-4.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "6e071fd22a7c5e7fac1c982e6c5cc28a0562bdcb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
224854827
pes2o/s2orc
v3-fos-license
The effects of cutting environment on surface roughness and tool life in milling of AISI 4340

In this work, AISI 4340 alloy was machined using PVD multilayer TiAlN/AlCrN-coated tungsten carbide inserts under two different cutting environments: cryogenic cooling using liquid nitrogen (LN) and dry cutting. The experiments were performed at varying cutting parameters: cutting speed of 200-300 m/min, feed rate of 0.15-0.30 mm/tooth, axial depth of cut of 0.3-0.5 mm, and radial depth of cut of 0.2-0.5 mm. Nine parameter combinations were tested, and the resulting tool life and surface roughness were recorded when machining AISI 4340 at 32 HRC. The analysis covers tool life and surface roughness as well as the relationship between them. Cryogenic LN showed a significant improvement in tool life, up to a maximum of 41.7% relative to dry cutting. The experiments also showed that the cryogenic application was able to reduce the surface roughness by up to 43.9% compared to dry machining. Thus, the application of cryogenic cooling reduces tool wear and cutting temperature and produces good surface quality, which is believed to be the main factor behind the improvement over dry cutting.

Introduction

Surface roughness is used to predict the functional performance of engineering parts, as surface abnormalities may lead to the development of fractures or oxidation. The work material, geometrical factors, vibrations, and machine tool factors affect the surface roughness of a machined face [1]. The influence of machining parameters on the surface roughness of a machined surface has posed challenges for engineers and researchers. Therefore, methods for predicting the surface roughness of a product before machining are needed to verify that the chosen machining parameters are suitable for the required surface roughness. Eq. 1 (not reproduced in this text; a reconstructed form is given in the note below) shows that higher feed rates result in a rougher machined surface. This theoretical relation also shows that, with the same feed, a larger nose radius leads to a superior surface finish, as a larger nose radius makes the feed marks less prominent. The effect of cutting parameters on temperature and surface roughness was explored by [2]. The hardness of the work material was found to be the most significant determining factor. The researchers found that, compared to dry machining, a high-pressure coolant jet reduced the roughness values (12.9%), cutting temperature (10.8%), and tool wear (29.4%), respectively. Other than surface roughness, another subject, i.e. tool life, was explored by several researchers. To measure the performance of coated ceramic tools on AISI 4340 steel, Panda et al. [3] studied surface roughness, tool life, and commercial viability. The investigation revealed that a coated ceramic tool life of 47 min was attained under optimum cutting conditions. The reduction in downtime and the longer tool life also lowered the total machining cost per piece to only USD 0.29. Al-Ghamdi et al. [4] reported an empirical investigation of the machining of AISI 4340 under a cryogenic cooling environment using CO2 snow as coolant. Feed rate and cutting speed were the predominant factors for achieving longer tool life. Lower combinations of the machining parameters were found to influence machining performance and tool life.
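Note: the relation cited above as Eq. 1 is not reproduced in this text. It is presumably the classical ideal (geometric) roughness model for a tool with a nose radius, which in its usual form reads

R_a ≈ f^2 / (32 · r_ε)

where f is the feed (per tooth in milling) and r_ε is the tool nose radius. This reconstruction is offered only as a hedged assumption: it matches the qualitative statements above (roughness grows roughly with the square of the feed and decreases as the nose radius increases), but the exact expression and symbols used by the original authors may differ.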
In the present work, the influence of machining conditions on surface roughness and tool life was evaluated when machining AISI 4340 steel (32 HRC) under cryogenic and dry conditions using multilayer-coated carbide (TiAlN/AlCrN) inserts. These findings may be utilized by the machining industry with the aims of improving surface roughness and prolonging tool life, which are related to achieving a sustainable environment.

Experimental procedures

Bars of AISI 4340 steel were end milled under cryogenic and dry machining conditions on a DMG-ECO vertical milling machine. The Taguchi L9 orthogonal array was adopted to design the experiments for specific cutting scenarios based on the control factors (Table 1). The experimental tests were carried out using PVD multi-coated TiAlN and AlCrN cemented carbide inserts. A new cutting tool was used for each test. Liquid nitrogen (LN) at -197°C was applied between the freshly machined surface and the tool flank face during the machining process, using a flexible hose and a copper pipe connected to a cylindrical liquid nitrogen (LN) tank, as presented in Figure 1(a and b). A tool holder of 20 mm diameter was used to affix the insert, and a spray distance of 50 mm and a spray angle of 45° were applied. A portable surface roughness profile meter (Fig. 2a) was used to quantify the arithmetic average roughness value (Ra), in micrometers (µm), of the first machining path. Measurements were captured three times at three specific points: one in the middle and the other two on the edges. The average of these values was then taken to represent the surface roughness. The machining test was paused at various intervals, at which point the tool was removed and the tool wear of the insert was measured using a Mitutoyo Toolmaker's microscope (Fig. 2b). Then, the tool was returned to the machine and the machining experiment was continued until the following tool wear measurement. In this experiment, the tool life criterion was reached when the average flank wear achieved VBavg = 0.3 mm, as specified in ISO 8688-2 (1989) [5]. Thus, machining time was used to represent tool life.

Results and Discussion

The surface roughness for both conditions is shown in Table 2. A main effects analysis was used to study the trend of the effect of each factor. The main effects plots are shown in Figures 3 and 4 for dry and cryogenic cooling, respectively. The figures reveal that the feed rate significantly affects the surface roughness under both conditions. It was observed that the surface roughness increases with an increase in feed rate from 0.15 mm/tooth to 0.3 mm/tooth. This is in line with Bashir et al. [6], who reported that an increase in feed rate resulted in an increase in surface roughness. It was also observed that the axial depth of cut is an insignificant factor for surface roughness under dry conditions, while cutting speed is an insignificant factor under cryogenic conditions. The surface roughness values attained were in the range of 0.114-0.319 µm for both cutting conditions (Figure 5). This makes the process comparable to manual grinding, as the surface roughness values did not exceed 1.6 μm. The surface roughness values were lower when machining with cryogenic coolant than those obtained under dry conditions. The lowest surface roughness, of about 0.114 µm, was observed for Experiment no. 1 under cryogenic conditions. Meanwhile, the maximum surface roughness of 0.319 µm occurred in Experiment no. 2 under dry conditions.
From Figure 5, the values show that cryogenic conditions produced a surface finish better by as much as 43.9% compared with dry machining. This is attributed to the application of LN, which lowered the surface roughness because it reduces the coefficient of friction (CoF) [7][8]. According to Hong [8], LN generates a lubricating hydrodynamic film between the workpiece material and the cutting tool, which yields a lower coefficient of friction. Natasha et al. [9] showed that the use of LN lowers the CoF by up to 73% at the boundary of the chip and tool, which leads to lower surface roughness than that observed in dry machining. Furthermore, effective penetration by the LN removes the chips from the cutting zone, so the just-machined surface is not subjected to friction, which in turn leaves no marks on the surface. Nonetheless, as the cryogen is highly evaporable, the most appropriate approach is for the cryogen to seep into the cutting zone to improve the surface finish. Therefore, cryogenic machining generates better surface characteristics than those obtained by dry cutting. The flank face wear value exceeded the 0.3 mm criterion for both cutting conditions. The tool life for both conditions is shown in Table 3. Figures 6 and 7 show the mean S/N ratio plots for tool life obtained under both conditions. The analysis indicated that a lower feed rate (0.15 mm/tooth) is more favorable when tool life is of interest. The figures reveal that an increase in cutting speed leads to lower tool life. By increasing the cutting speed, the friction between the cutting edge and the workpiece surface increases, which may cause higher temperatures at the tool-workpiece interface. Therefore, the tool flank wear increases correspondingly. According to the work of Krahmer et al. [10], the cutting speed plays an important role in the cutting tool life of free-cutting steels (SAE 1212, SAE 12L14, and SAE 1215). Figure 8 shows that the tool life obtained was in the range of 12.3-55.3 min for both cutting conditions. A minimum tool life of about 12.3 min was observed for Experiment no. 1 under dry conditions. Meanwhile, the maximum tool life of 55.3 min was achieved in Experiment no. 1 under dry conditions. The results indicate that, in comparison to dry cutting, the use of cryogenic coolant greatly prolongs the lifespan of a tool, by a maximum of 41.7%, which is in line with the results in the literature [11][12][13]. This happens because cryogenic temperatures increase tool hardness and consequently lower the rate of wear. Therefore, cryogenic cooling substantially improved the cutting tool performance, and the use of higher cutting speeds is suitable for machining AISI 4340 steel at 32 HRC with coated carbide tools. Yet the tool life increases with cutting speeds from 200 to 300 m/min without altering the surface roughness. This argument is supported by the study of Halim et al. [14], which shows that cryogenic CO2 is able to reduce flank wear rates, thus enhancing the life of a cutting tool. The application of a cryogen can improve the removal of chips that have adhered in the tool-workpiece-chip area, which may then lead to lower wear and surface roughness. A good chip-breaking mechanism at the tool-chip interface also leads to lower surface roughness.
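The main-effects and S/N-ratio analyses referred to above follow the standard Taguchi treatment of an L9 array, with surface roughness as a smaller-the-better response and tool life as a larger-the-better response. The Python sketch below shows how such S/N ratios and the per-level means behind a main effects plot are typically computed; the run data and factor levels are hypothetical placeholders, not the values from Tables 2 and 3.

import math

def sn_smaller_is_better(values):
    # Taguchi S/N ratio for a smaller-the-better response (e.g., Ra):
    # S/N = -10 * log10( mean(y^2) )
    return -10 * math.log10(sum(v * v for v in values) / len(values))

def sn_larger_is_better(values):
    # Taguchi S/N ratio for a larger-the-better response (e.g., tool life):
    # S/N = -10 * log10( mean(1 / y^2) )
    return -10 * math.log10(sum(1.0 / (v * v) for v in values) / len(values))

# Hypothetical L9 results: (feed-rate level, Ra in um, tool life in min) per run
runs = [
    ("f1", 0.12, 40.0), ("f1", 0.15, 35.0), ("f1", 0.14, 45.0),
    ("f2", 0.20, 28.0), ("f2", 0.22, 25.0), ("f2", 0.19, 30.0),
    ("f3", 0.30, 15.0), ("f3", 0.28, 18.0), ("f3", 0.31, 13.0),
]

# Mean S/N ratio per feed-rate level: the quantity plotted in a main effects plot
for level in ("f1", "f2", "f3"):
    ra = [r[1] for r in runs if r[0] == level]
    life = [r[2] for r in runs if r[0] == level]
    print(level,
          f"S/N(Ra) = {sn_smaller_is_better(ra):6.2f} dB,",
          f"S/N(tool life) = {sn_larger_is_better(life):6.2f} dB")

The factor level with the highest mean S/N ratio is the preferred setting for that factor, which is how the favorable low feed rate is read off plots such as Figures 3, 4, 6, and 7.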
Panda et al. [3] reported a direct relationship between tool flank wear and the surface roughness produced. A likely explanation is that a larger flank wear land increases friction between the tool and the workpiece, generating more heat and ultimately leaving a rougher surface. This is in line with Kumar et al. [15], who observed that flank wear and surface roughness increase together. Across the reported studies, the prolonged tool life and improved surface roughness are attributed to the reduction of tool flank wear when a cryogen is properly applied. The application of cryogenics therefore yields a good surface finish by reducing the cutting temperature and the tool wear.

Conclusion

This paper has discussed the effects of LN application on the end milling of AISI 4340 alloy steel. Tool life and surface roughness under dry and cryogenic cutting conditions were compared experimentally. The following can be concluded:
- Cryogenic cooling can be employed as an effective alternative to dry milling, given its reasonably good performance within the range of parameters selected in this study.
- Under cryogenic conditions, the surface finish improved by as much as 43.9% compared with dry machining.
- In terms of tool life, LN is more effective when machining at high cutting speeds. Tool life is extended by up to 41.7% by cryogenic cutting compared with dry cutting.
- Further studies are needed to verify whether the tool and surface hardness affect the surface roughness and tool life.
2020-10-19T18:13:03.535Z
2020-09-12T00:00:00.000
{ "year": 2020, "sha1": "348dc4980fd576e5c056c08eefa27ff3adc99aca", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/912/3/032087", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "61429f30877eee2ead83e55860a9c2df7e2a7e89", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
55057814
pes2o/s2orc
v3-fos-license
Aerosol light absorption from attenuation measurements of PTFE-membrane filter samples: implications for particulate matter monitoring networks Aerosol light absorption from attenuation measurements of PTFE-membrane filter samples: implications for particulate matter monitoring networks Apoorva Pandey1, Nishit J. Shetty1, and Rajan K. Chakrabarty1,2 1Center for Aerosol Science and Engineering, Department of Energy, Environmental and Chemical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA 2McDonnell Center for the Space Sciences, Washington University in St. Louis, St. Louis, MO 63130, USA Introduction Aerosol light absorption affects the radiative balance of the Earth's atmosphere through direct and indirect mechanisms (Bond et al., 2013;Kanakidou et al., 2005;Ramanathan et al., 2001).The light absorption metric relevant to climate modelers-mass absorption cross-section (MAC)-depends on the size, shape and composition of the aerosols (Andreae and Gelencsér, 2006;Bond and Bergstrom, 2006;Moosmüller et al., 2009).This property has a complex dependency on the emission source, especially for carbonaceous aerosols (Andreae and Gelencsér, 2006;Bond and Bergstrom, 2006;Chakrabarty et al., 2010). A first-principle method of measuring contact-free aerosol light absorption is using photoacoustic spectroscopy, which employs lasers at selected wavelengths to heat the aerosols, thereby producing a detectable pressure signal (Arnott et al., 1999).Absorption can also be estimated as the difference between in-situ measurements of extinction and scattering (Schnaiter et al., 2005;Sheridan et al., 2005).Alternatively, a commonly adopted technique for estimating light absorption uses measurements of transmittance and/or reflectance for aerosol particles collected on a filter substrate.Instruments developed based on this technique, including the aethalometer (Hansen et al., 1984) and the Particle Soot Absorption Photometer or PSAP (Virkkula et al., 2005), facilitate semi-continuous sampling of particles and produce time-averaged bulk absorption measurements.Particles may also be collected on quartz fiber or Teflon filters and analyzed for their absorption using standalone spectrophotometers (Pandey et al., 2016;White et al., 2016;Zhong and Jang, 2011). 
Filter-based measurements are attractive because of their ease of deployment in field settings and low cost, but they suffer from several artifacts.Particles embedded in a multiple-scattering medium experience a larger optical path length than particles in their native suspended state, leading to the appearance of enhanced light absorption (Bond et al., 1999;Clarke, 1982;Gorbunov et al., 2002).This is referred to the as the multiple scattering artifact, and it depends on the choice of filter medium.A higher loading of absorbing aerosols can diminish the effect of multiple scattering, inducing an aerosol dependent loading artifact (Arnott et al., 2005;Weingartner et al., 2003).Highly scattering aerosols could enhance multiple scattering and lead to increased backscatter, which leads to an overestimation of absorption (Lack et al., 2008;Weingartner et al., 2003).These artifacts have been evaluated for several commonly used filter-based instruments, such as those aforementioned, by comparing their measurements with contact-free aerosol light absorption measurements or using reference materials with known optical properties.Typically, correction algorithms for these artifacts are formulated as functions of some combination of filter and aerosol properties (Arnott et al., 2005;Collaud Coen et al., 2010;Virkkula, 2010;Weingartner et al., 2003) and are specific to a given measurement system. In various field settings, aerosol samples are collected on polytetrafluoroethylene (PTFE) membrane filters (commonly known as Teflon filters) for inferring ambient or near-source particulate mass concentrations using gravimetric analysis (Koistinen et al., 1999).Major aerosol monitoring networks, such as the Interagency Monitoring of PROtected Visual Environments (IMPROVE) network (Chow et al., 2010;Solomon et al., 2014), the Chemical Speciation Network (CSN) (Solomon et al., 2014) and the Surface PARTiculate mAtter Network (SPARTAN) (Snider et al., 2015), collect particle samples on Teflon filters for gravimetric and elemental measurements.PTFE filters are chemically inert and unlike quartz fiber filters, present a very low surface area for organic vapor adsorption (Kirchstetter et al., 2001;Vecchi et al., 2014). Correction schemes developed for instruments that use fiber filters (like the PSAP and aethalometer) cannot be applied to infer aerosol light absorption properties using measurements of transmittance and/or reflectance on PTFE filters.A previous study on the artifacts associated with this estimation used a reference material and provided a constant multiple scattering correction factor for optical loadings smaller than a certain threshold (Zhong and Jang, 2011).Another recent study (White et al., 2016) proposed a theory-based model to calibrate attenuation measurements for Teflon filter samples and applied this new model to a historical dataset from IMPROVE network.They found that the reevaluated absorption values for the PTFE samples were well-correlated with thermo-optical elemental carbon (EC) measurements for co-located quartz fiber filters. 
In this work, we generated carbonaceous aerosols with varying physicochemical properties from the combustion of biomass fuels and kerosene.Combustion conditions were varied to yield a range of intrinsic aerosol optical properties.Kerosene combustion was used as a surrogate for fossil fuel burning, which is linked with soot or EC emissions (Andreae and Gelencsér, 2006;Bond et al., 2013).The combustion of wildland-and fuel-biomass is implicated in emissions of EC as well as light absorbing organic carbon (LAOC) (Andreae and Gelencsér, 2006;Chakrabarty et al., 2010;Chen and Bond, 2010). EC is known to absorb light throughout the visible and UV wavelengths, while LAOC absorbs preferentially in the near-UV and UV regions (Andreae and Gelencsér, 2006;Bond and Bergstrom, 2006;Kirchstetter et al., 2004;Sun et al., 2007). Therefore, we measured in-situ and contact-free aerosol light absorption and scattering coefficients using integrated photoacoustic-nephelometer (IPN) spectrometers operated at three wavelengths -375, 405 and 532 nm.Co-located with these measurements was a sampling system to collect particles onto Teflon membrane filters.Subsequent measurements of light attenuation, using ultraviolet-visible (UV-vis) spectrophotometer, were performed on the filter samples.Observed empirical relationships between particle light absorption and filter attenuation were established in conjunction with predictions from a one-dimensional (1-D) two-stream radiation transfer model. Experiments Diverse biomass fuels including wood and needles from pine, fir and sage trees, grass, peat and cattle dung were burned in a 21 m 3 stainless steel combustion chamber located at Washington University (Sumlin et al. (2017); Sumlin et al. (2018)). Flaming, smoldering and mixed combustion phases were employed (see Supplement) to generate a range of intrinsic aerosol properties: single scattering albedo (SSA) values at 375, 405 and 532 nm ranged 0.25-0.99 and Absorption Ångström Exponents (AÅE) for 375-532 nm ranged 1.2-6.8.A kerosene lamp was used to generate soot particles, with an SSA of 0.3 and AǺE within 0.70-1.1.A schematic of the experimental setup is shown in Figure 1.Approximately 10-50 g of a given type of woody biomass/grass/dung was placed in a stainless-steel pan and ignited using a flame.It was either allowed to continue flaming or brought to a smoldering phase by starving the flame with a lid.In the same type of pan, 5-15 g of peat was smoldered by using a ring heater to raise its temperature to 200 ⁰C.In one set of experiments, smoke from the chamber was directly sampled, while in another set, a hood placed over the pan was used for sampling the aerosols.The chamber exhaust was closed during the burns.The outlet from the hood or chamber was passed through a diffusion dryer and a semivolatile organic compound (SVOC) denuder into a mixing volume, from which aerosols were continuously sampled by the four IPNs. During each burn, optical (absorption and scattering) signals were monitored using IPNs until a steady state was reached. 
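The intrinsic optical properties quoted above (SSA and AÅE ranges for the biomass and kerosene burns) are derived from the IPN absorption and scattering coefficients. A short sketch of those derivations is given below; the b_abs and b_scat values are hypothetical readings for a single burn, since the per-burn data are not tabulated here.

```python
import numpy as np

wavelengths_nm = np.array([375.0, 405.0, 532.0])

# Hypothetical IPN readings (Mm^-1) for one burn; only the derived SSA (0.25-0.99)
# and AAE (1.2-6.8 for biomass, 0.70-1.1 for kerosene) ranges are reported above.
b_abs  = np.array([180.0, 140.0, 70.0])
b_scat = np.array([620.0, 560.0, 430.0])

# Single-scattering albedo at each wavelength.
ssa = b_scat / (b_scat + b_abs)

# Absorption Angstrom exponent from a power-law fit b_abs ~ lambda^(-AAE)
# across the three wavelengths (slope of the log-log regression).
aae_fit = -np.polyfit(np.log(wavelengths_nm), np.log(b_abs), 1)[0]

# Two-wavelength form for the 375-532 nm pair, for comparison.
aae_375_532 = -np.log(b_abs[0] / b_abs[2]) / np.log(wavelengths_nm[0] / wavelengths_nm[2])

print("SSA:", np.round(ssa, 3))
print("AAE (fit): %.2f   AAE (375/532 nm): %.2f" % (aae_fit, aae_375_532))
```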
During the steady state, particle samples were collected on 47 mm PTFE membrane (Pall) filters.The filter sampling flow rate was set to 5 liters per minute and the sampling durations were between 2 and 20 minutes.For each filter sample, an absorption optical depth (τa,s) of the deposited aerosols was calculated from the absorption coefficients measured using the IPNs: where b abs,av is the average absorption coefficient (in Mm -1 ) during the sampling duration t s (in min), Q is the flow rate (in liters per minute or lpm) through the filter and A s is the filter sample area (in m 2 ).Optical depth τ a,s for the samples in this study ranged between 0.01 and 0.68.The uncertainty in these estimates was predominantly from the standard deviation in b abs,av over the averaging interval, and was within 10% for all samples. Transmittance (T) and reflectance (R) for the filter samples were measured using a Perkin-Elmer LAMBDA 35 UV-vis spectrophotometer (described in Zhong and Jang (2011)).Attenuation (ATN) through the filter samples was calculated using (Bond et al., 1999;Campbell et al., 1995): When this equation is applied to blank filters, it results in ATN values between 0.01-0.03.A wavelength dependent "blank attenuation" was subtracted from the sample attenuation values.Replicate transmission and reflection measurements were used to estimate measurement error; these yielded an uncertainty of 5% in the calculated attenuation. A correction factor (C) that captures the net effect of multiple scattering and aerosol loading can be defined as: Two-stream radiative transfer model A 1-D two-stream radiative transfer framework for multiple scattering in absorbing media was developed in Bohren (1987) and subsequently discussed in relation to aerosol-filter systems in several studies (Arnott et al., 2005;Clarke, 1982;Gorbunov et al., 2002;Petzold and Schönlinner, 2004).Solving a radiation balance for an aerosol-laden filter medium yields the following expressions for transmittance (T l ) and reflectance (R l ), respectively: Here, ω l , g l and τ e,l denote the SSA, asymmetry parameter and extinction optical depth, respectively, of the composite layer. The parameter K is defined as: than unity and Eq.s (4A )and (4B) can be replaced by simplified approximations.In contrast, the Teflon filters used in this study are optically thin and constitute a weak multiple scattering medium: they transmit 70-80% of incident visible light. Therefore, the full equations for T l and R l were solved for the filter-particle system, using a range of values of τ a,s and SSA consistent with experimental observations.Two other required inputs could not be measured: the penetration of aerosols into the filter was assumed to be 10%, and the asymmetry parameter of the aerosols was fixed at 0.6, based on the typical values reported for biomass burning emissions (Martins et al., 1998;Reid et al., 2005).Transmittance and reflectance through a two layer system -the aerosol laden layer with properties T l and R l and a pristine filter layer with properties T f and R f -were calculated (Gorbunov et al., 2002): Using the above results, ATN was calculated (per Eq. ( 2)), and the results were used to examine the relationship between the properties of the aerosol deposits and the attenuation of light through the two-layer composite system.Further details of the calculations in this section are provided in the Supplement. Results and discussion Modeled and experimental values of light attenuation through filter samples are shown in Fig. 
2. Certain combinations of SSA and τ a,s (shaded region in the figure) were never observed in our experiments (black dots): high SSA aerosols are associated with lower absorption per unit mass, therefore very high mass loadings would be required to yield the upper range of the τ a,s in this study.For SSA<0.9, the modeled attenuation values show little spread with changing SSA.Like the model predictions, experimental data show a non-linear nature relationship between attenuation and aerosol absorbance.However, attenuation values calculated from measurements are slightly lower than model predictions.This may be due to differences between assumed parameters in our model and their deviation from real-world values. The well-constrained relationship between aerosol and filter measurements in Fig. 2 suggests that τ a,s of deposited particles could be directly estimated by measuring light attenuation.The best fit relationship (R 2 = 0.87) between both parameters is given by: , = 0.48 () 1.32 (7) In Fig. 3, we combined all experimental data corresponding to the three wavelengths since our measurements showed no clear stratification with varying wavelength.Also shown in the figure are estimated τ a,s using a constant correction factor C of 0.67 proposed by Zhong and Jang (2011) (black perforated line); this correction factor clearly overestimates τ a,s for most ATN values investigated in this study.We find our data to be better represented by an approximate C = 0.46 based on a linear least-squares fit (R 2 = 0.79).However, any constant C value does not capture the non-linearity of the interaction between aerosol properties and the multiple-scattering within the filter medium.It should be noted that C in Eq. ( 3) represents the net effects of all filter artefacts.There are measurement errors associated with both ATN and τ a,s , and therefore, C contains propagation of uncertainties from both parameters.There was no correlation between C and ATN (see Fig. S3).We observed an inverse relationship between C and SSA (Fig. 4), consistent with results the from the two-stream radiative transfer model.For a given value of τ a,s , measured ATN will always be higher for aerosols with higher SSA values. Consequently, we should expect C to decrease with increasing SSA; this decreasing relationship for our experimental data is given by: Values of C and SSA for individual samples (shown in supplemental Fig. S3B) were aggregated into five SSA bins to demonstrate the inapplicability of an empirical correction factor formulation to low SSA data points in this study.The large spread in C values for low SSA is likely due to noise amplification from dividing two small (τ a,s and ATN < 0.2) numbers. For SSA>0.6, the correction factor decreases linearly. Implications for aerosol monitoring networks Teflon filters are routinely used for gravimetric and elemental analysis across monitoring networks (Chow et al., 2010;Snider et al., 2015;Solomon et al., 2014), as well as field and laboratory source characterization studies.The τ a,s versus ATN relation from Eq. ( 7) was applied to IMPROVE network's attenuation dataset from the year 2010, representing samples collected at 223 sites (details in Supplement).Their measurements were carried out at 633 nm wavelength using a Hybrid Integrating Plate and Sphere (HIPS) method (Bond et al., 1999).A detailed description of the measurement technique can be found in White et al. 
(2016).At each of these sites, co-located quartz filters were used to measure EC mass concentrations using thermo-optical analysis (Chow et al., 2007).EC is considered to be a surrogate for light absorbing aerosol species (Chow et al., 2010) and is therefore expected to correlate with estimates of light absorption.The HIPS-measured ATN data, after being corrected using our formulation showed a slightly improved correlation with corresponding EC measurements than the uncorrected measurements (Fig. S4).The ratio of uncorrected to corrected HIPS filter absorption coefficients (Fig. 5) is inversely correlated with EC mass concentrations.This is consistent with model predictions (Fig. 1) and experimental data (Fig. S3) that show that the relative contribution of the filter medium to ATN is high when τa,s is small.Overall, Fig. 5 shows that for lightly loaded filters (EC concentration < 0.1 µgm -3 ), uncorrected filter ATN measurements could lead to a 4to 16-fold overestimation in the inferred absorption coefficients. Conclusions We evaluated the relationship between in-situ aerosol light absorption and attenuation of aerosol deposits on Teflon filters for combustion aerosols (encompassing 0.25 ≤ SSA ≤ 0.99), at 375, 405 and 532 nm wavelengths.An empirical non-linear relationship was found between the absorption optical depth of sampled aerosols and attenuation through filter samples; the nature of this function was consistent with predictions from a two-stream radiative transfer model of the filter-aerosol system.Following Eq. ( 7), we propose the estimation of Aerosol MAC (m 2 g -1 ) values from filter ATN measurements using: where A s is the filter sample area (in m 2 ) and m is the mass on deposited particles (in g).Additionally, aerosol absorption coefficients (b abs ; in Mm -1 ) can also be calculated using: The quantities Q and t s are as used in Eq. ( 1).Caution must be taken, as suggested by the two-stream model results, on the limits of applicability of the empirical relationships (equations 7-10)-significant errors could result from application of the relationships if the aerosol SSA>0.9 and ATN values are beyond the range of this work. Teflon filters are routinely used for gravimetric and elemental analysis, across aerosol monitoring networks, as well as field and laboratory source characterization studies.Therefore, we applied this Eq.( 7) to attenuation data from all IMPROVE sites for the year 2010 and found that there was a slightly improved correlation with independent measurements of EC mass concentration.For low aerosol concentrations, the measured attenuation coefficients may be 4-16 times larger than the aerosol absorption coefficient. Supplement Includes a schematic of the experimental setup and table of experiments (Text S1, Figure S1, Table S1), detailed equations for the two-stream model (Text S2, figures S2 and S3), correction factor plots (Text S3, figures S4 and S5), and addition information on the implications (Text S4 and Figure S6). ) Arnott et al. 
(2005) used the above model to derive the form of an approximate correction factor for the aethalometer. The aethalometer uses optically thick quartz fiber filters, which are strongly multiple scattering, transmitting only ~10% of light at visible wavelengths. A mathematical consequence of strong multiple scattering is that the term Kτ_e,l is much greater than unity, so the transmittance and reflectance expressions can be replaced by simplified approximations.

Figure 1: Schematic representation of the experimental setup. Inlet to the semi-volatile organic compound denuder was taken from either the chamber sampling port or the hood.
Figure 3: Relationship between absorption optical depth and attenuation through carbonaceous aerosol filter samples.
Figure 4: Correction factor C for filter artefacts as a function of single scattering albedo of the deposited aerosols.
Figure 5: Ratio of uncorrected filter absorption coefficients at 633 nm, measured from IMPROVE samples, to corrected absorption coefficients obtained using Eq. (7), as a function of EC mass concentration. Absorption coefficients are from attenuation measurements on Teflon filters and EC concentrations are from thermo-optical measurements on quartz filters; the IMPROVE network collects samples every 3 days at 223 sites.
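The bodies of Eqs. (1)-(3) in the methods and Eqs. (9)-(10) in the conclusions did not survive extraction above, but they can be reconstructed from the stated variable definitions, and Eq. (7) is given as τ_a,s = 0.48·ATN^1.32. The sketch below strings these together: the deposit optical depth from the averaged IPN absorption coefficient, an assumed attenuation form ATN = ln[(1 − R)/T] (consistent with the cited references and with the reported blank values of 0.01-0.03), the correction factor as C = τ_a,s/ATN, and the conversion of a network-style ATN reading into MAC and b_abs. All numerical sample values are hypothetical, and the equation forms other than Eq. (7) are reconstructions rather than the authors' stated formulas.

```python
import numpy as np

def tau_abs_sample(b_abs_avg_Mm, t_s_min, Q_lpm, A_s_m2):
    """Deposit absorption optical depth (reconstruction of Eq. 1): mean in-situ
    absorption coefficient times the sampled air column, divided by deposit area."""
    b_abs_per_m = b_abs_avg_Mm * 1e-6          # Mm^-1 -> m^-1
    volume_m3 = Q_lpm * 1e-3 * t_s_min         # litres -> m^3
    return b_abs_per_m * volume_m3 / A_s_m2    # dimensionless

def attenuation(T, R, T_blank=0.75, R_blank=0.24):
    """Filter attenuation (assumed form of Eq. 2, ATN = ln[(1 - R)/T]) minus the
    blank-filter attenuation; the blank transmittance/reflectance are hypothetical."""
    return np.log((1.0 - R) / T) - np.log((1.0 - R_blank) / T_blank)

def tau_from_atn(atn):
    """Empirical calibration, Eq. (7): tau_a,s = 0.48 * ATN^1.32."""
    return 0.48 * np.asarray(atn, dtype=float) ** 1.32

def mac_from_filter(atn, A_s_m2, mass_g):
    """Mass absorption cross-section (m^2 g^-1); assumed form of Eq. 9: tau * A_s / m."""
    return tau_from_atn(atn) * A_s_m2 / mass_g

def babs_from_filter(atn, A_s_m2, Q_lpm, t_s_min):
    """Ambient absorption coefficient (Mm^-1); assumed form of Eq. 10: tau * A_s / (Q t_s)."""
    volume_m3 = Q_lpm * 1e-3 * t_s_min
    return tau_from_atn(atn) * A_s_m2 / volume_m3 * 1e6    # m^-1 -> Mm^-1

# Laboratory-style sample: 10 min at 5 lpm onto a ~40 mm diameter deposit spot.
A_lab = np.pi * 0.020 ** 2
tau = tau_abs_sample(b_abs_avg_Mm=2000.0, t_s_min=10.0, Q_lpm=5.0, A_s_m2=A_lab)
atn = attenuation(T=0.62, R=0.22)
print(f"tau_a,s = {tau:.3f}, ATN = {atn:.3f}, C = tau/ATN = {tau / atn:.2f}")

# Network-style sample: hypothetical 24 h collection, 16.7 lpm, 11.3 cm^2 deposit, 20 ug mass.
A_net = 11.3e-4
print(f"MAC   = {mac_from_filter(0.15, A_net, 20e-6):.2f} m^2/g")
print(f"b_abs = {babs_from_filter(0.15, A_net, 16.7, 24 * 60):.2f} Mm^-1")
```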
2018-12-05T10:56:47.861Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "230b41a007da04c9af9716f4e08e3a32fa5c0fa5", "oa_license": "CCBY", "oa_url": "https://amt.copernicus.org/articles/12/1365/2019/amt-12-1365-2019.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "230b41a007da04c9af9716f4e08e3a32fa5c0fa5", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Materials Science" ] }
253510235
pes2o/s2orc
v3-fos-license
On automorphism group of a possible short algorithm for multiplication of $3\times3$ matrices Studying algorithms admitting nontrivial symmetries is a prospective way of constructing new short algorithms of matrix multiplication. The main result of the article is that if there exists an algorithm of multiplicative length $l\leq22$ for multuplication of $3\times3$ matrices then its automorphism group is isomorphic to a subgroup of $S_l\times S_3$. It is often denoted also by m, n, p , cf. [7,Section 14.2]. The question of main interest in the algebraic complexity theory is the rank of T (or at least the estimates for this rank), because of the following fact (see [7,Section 15.3]). Proposition 1. Assume that for some m, n, p ≥ 1 the tensor m, n, p has a decomposition of length l. Then there exists an algorithm for multiplication of N × N matrices over an arbitrary field of characteristic 0 of complexity O(N τ ) arithmetical operations, where τ = (3 ln l)/ ln mnp. One of prospective ways of search for short decompositions of m, n, p is to study decompositions admitting nontrivial symmetry groups. This approach was proposed by the author in preprints [8], [9] and independently by Landsberg and co-authors in [17], [18], [3]. Recall the definitions related to decompositions automorphisms. Let V = V 1 ⊗ . . . ⊗ V l be as above. To avoid long formulae, we consider only the case l = 3. Let S( V ) be the group of all nondegenerate linear transformation of V that preserve the tensor decomposition of this space, but possibly permuting the factors. For example, the transformations of the form where α : V 2 −→ V 1 and β : V 1 −→ V 2 are isomorphisms, and γ is a nondegenerate transformation of V 3 (note that in this case it is necessary to suppose that dim V 1 = dim V 2 ). Obviously, S( V ) preserves the set of decomposable tensors. (Sometimes the elements of S( V ) are called Segre automorphisms, because when acting on the projectivization ( V \{0})/C * they preserve the Segre variety (the image in the projective space of the set of all nonzero decomposable tensors)). The subgroup of elements of S( V ), corresponding to the trivial permutation of the factors, that is of the form For a tensor w ∈ V call the isotropy group of w, and the intersection Γ 0 (w) = Γ(w) ∩ S 0 ( V ) the small isotropy group. Let P = {w 1 , . . . , w l } be a decomposition for w. Consider P as a multiset, that is an unordered set some of whose elements may be equal. The automorphism group for P is the subgroup of all elements of S( V ) preserving P: The isotropy group of m, n, p can be easily described (but the proof is not trivial !). Such a description was obtained in [19], [6], and more accurately in [10]. Specifically, Γ 0 ( m, n, p ) consists of all transformations of M mn ⊗ M np ⊗ M pm of the form where a ∈ GL(m, C), b ∈ GL(n, C), and c ∈ GL(p, C) (and observe that T (λa, µb, νc) = T (a, b, c), where λ, µ, ν ∈ C * ). If m, n, p are pairwise distinct, then Γ( m, n, p ) coincides with Γ 0 , whereas if some of m, n, p are equal, then Γ is the semidirect product Γ( m, n, p ) = Γ 0 ( m, n, p ) ⋋ Q, where Q ∼ = Z 2 , if two of m, n,and p are equal, and Q ∼ = S 3 if m = n = p. In the latter case we can take Q = σ, ρ , where The following approach is rather natural: take a subgroup G ≤ Γ(T ), where T = 3, 3, 3 , and study the G-invariant decompositions of T , trying to find a decomposition of length ≤ 22. 
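To make the objects above concrete, the short sketch below builds the structure tensor ⟨n,n,n⟩ for n = 3, checks that as a trilinear form it evaluates to trace(XYZ), and verifies numerically the trace identity that underlies the invariance of ⟨n,n,n⟩ under the maps T(a, b, c). This is an independent numerical illustration, not part of the paper's argument.

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)

# The structure tensor <n,n,n>: T = sum_{i,j,k} e_ij (x) e_jk (x) e_ki,
# stored as an array indexed by the three matrix slots flattened to length n^2.
T = np.zeros((n * n, n * n, n * n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            T[i * n + j, j * n + k, k * n + i] = 1.0

def trilinear(T, X, Y, Z):
    """Evaluate the trilinear form T(X, Y, Z) on three n x n matrices."""
    return np.einsum("pqr,p,q,r->", T, X.ravel(), Y.ravel(), Z.ravel())

X, Y, Z = (rng.standard_normal((n, n)) for _ in range(3))

# <n,n,n> encodes matrix multiplication: T(X, Y, Z) = trace(XYZ).
assert np.isclose(trilinear(T, X, Y, Z), np.trace(X @ Y @ Z))

# The trace identity behind the fact that T(a, b, c): (X, Y, Z) ->
# (a X b^-1, b Y c^-1, c Z a^-1) preserves <n,n,n>: the value of the form
# is unchanged.  Random matrices are generically invertible.
a, b, c = (rng.standard_normal((n, n)) for _ in range(3))
Xg = a @ X @ np.linalg.inv(b)
Yg = b @ Y @ np.linalg.inv(c)
Zg = c @ Z @ np.linalg.inv(a)
assert np.isclose(trilinear(T, Xg, Yg, Zg), trilinear(T, X, Y, Z))
print("trilinear form equals trace(XYZ) and is invariant under T(a,b,c)")
```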
This was done in [11], [12] for a certain (rather large) subgroup, isomorphic to S 4 × S 3 , and in [16] and [3] for a certain subgroup ∼ = Z 3 . In the former case it was shown that a G-invariant decomposition of length ≤ 23 does not exist. In the latter case two G-invariant decompositions of length 25 were found (in [16]), and a new decomposition of length 23 in [3], whose full automorphism group happens to be isomorphic to Z 4 × Z 3 . Obviously, the group Γ(T ) contains infinitely many subgroups (even up to cojugacy). A natural question arises, can we restrict in advance the set of subgroups G that can appear, in principle, as an automorphism group of a possible decomposition of length ≤ 22 (if such a decomposition do exists ?). To obtain such a restricting condition is the aim of the present work. The following theorem is proved. Theorem 2. Let P = {w 1 , . . . , w l } be a decomposition for 3, 3, 3 of length l ≤ 22. Then Aut(P) is isomorphic to a subgroup of S l × S 3 . More precisely, the map g → (α(g), β(g)) is injective, where α(g) is the permutation of elements of P, induced by g ∈ Aut(P), and β(g) is the permutation of the factors of M ⊗3 33 , corresponding to g. 2. A direct sum of tensors. If V 1 , V 2 , and V 3 are three spaces and V ′ i ⊆ V i are their subspaces, then the tensor product It is easy to see that for any tensor w ∈ V ′ its rank with respect to the is the same as the rank with respect to V 1 ⊗ V 2 ⊗ V 3 . Now assume that the spaces V i are decomposed into direct sums: respectively. Then we can consider the tensor w = w ′ + w ′′ . In such a situation it is said that w is a direct sum of w ′ and w ′′ , w = w ′ ⊕ w ′′ . (Strictly speaking, we should say this is a inner direct sum, similarly to concept of inner direct sum of space or inner direct product of groups. We can also define an outer direct sum of tensors, in an evident way.) Obviously, rk (w) ≤ rk (w ′ ) + rk (w ′′ ). There was a long-standing conjecture (the Strassen direct sum conjecture) that the latter inequality is actually always an equality. It was however failed, see [29]. Nevertheless, the direct sum conjecture is true in a certain particular case. Proof. See [25]. Another proof is contained in [14]. We also need another two general concepts regarding tensors. Similarly, w and w ′ are similar up to a permutation of tensor factors, if there exist a permutation π ∈ S 3 and isomorphisms ϕ i : . Clearly, if two tensors are similar, or similar up to a permutation of factors, then they are of the same rank. 2) Notice that for any two subspaces A} is any family of tensors, the there exists the least subspace U ⊆ V 1 such that U ⊗ V 2 ⊗ V 3 contains all w α . This U will be called the (tensor) projection of the family {w α } to V 1 . The projections to other factors are defined similarly. It is easy to see that the projection of {w α } to V 1 is nothing else but the span of convolutions of the tensors w α with all tensors of the form l 2 ⊗ l 3 , where l 2 ∈ V * 2 , l 3 ∈ V * 3 . So it is easy to find the projection of T to each of the three factors M: it is the whole M. 3. Some identifications. Let V = C 3 be the space of all column vectors of height 3. Its dual space V * may be identified with the space of rows of length 3, so that l, v = l(v) = lv, where l is a row, v a column (note that lv is a 1 × 1 matrix, that is, a number). The group GL(V ) = GL(3, C) acts (on the left) on V as usually, i.e., g(v) = gv. This group acts (on the left) also on V * by g(l) = lg −1 . 
This actions are compatible in the sense that always g(l), Next, the space V ⊗ V * can be identified with M = M 3 (C) by the rule v ⊗ l → vl (note that the product of column by a row is a 3 × 3 matrix). The matrix corresponding to the tensor e i ⊗ e j under this identification is the matrix unit e ij (here e i and e i are the column and the row, respectively, that have 1 in i-th position, and 0 in other places). The When V ⊗ V * identifies with M, the corresponding action on M is Let V = V 1 ⊕ . . . ⊕ V k be a decomposition into a direct sum. Let L i be the subspace in the row space V * , consisting of all l's such that l, v = 0 for all v ∈ V j , j = i. Then it is easy to see that V * = L 1 ⊕ . . . ⊕ L k and the pairing of V i and L i is nondegenerate, so that L i is identified with V * i , and so V * identifies with V * 1 ⊕ . . . ⊕ V * k . We apply the identifications described to prove the following. . Proof. First we reduce the statement to the particular case where V i and U j are coordinate subspaces, that is V i = e α | α ∈ I i C , U i = e α | α ∈ J i C , where I 1 , . . . , I m are disjoint subsets that form a partition of {1, 2, 3}, as well as J i . Obviously, there exist a, b ∈ GL(V ) such that aV i and bU j are coordinate subpaces. Then U * j b −1 = (bU j ) * are coordinate subspaces also. Consider the transformation x → axb −1 of M. The image of K under this transformation is As T preserves T , T takes ζ to ζ 1 , and ξ to ξ 1 , where ζ 1 and ξ 1 are the components of T in K 1 ⊗M ⊗M and N 1 ⊗M ⊗M, respectively. Clearly, ζ ∼ ζ 1 . As we suppose the proposition is true in the case of coordinate subspaces, we see that Thus, we can assume that V i and U j are coordinate subspaces. It is clear that e αβ ⊗ e βγ ⊗ e γα . Since the sets I i × J i are disjoint, and the sets {1, 2, 3} × I i are disjoint also, as well as the sets Finally, it is clear that η i ∼ |I i |, |J i |, 3 = p i , q i , 3 . This proves the first claim. Prove the equality for ranks, that is We can assume that the decomposition contains no trivial summands, that is all p i , q i ≥ 1. 4. The proof of the main theorem. Now we begin to prove the main theorem. Assume on the contrary that the homomorphism g → (α(g), β(g)) is not injective. This means that there exists g ∈ Aut(P) which preserves all three of the factors M and fixes all tensors w i of the decomposition P = {w i | i = 1, . . . , l}. Since g preserves the factors, it follows that g = T (a, b, c) for some a, b, c ∈ GL(3, C), and at least one of a, b, and c is not a scalar matrix. So we can restate the theorem in the following equivalent form: Proposition 5. Let a, b, c ∈ GL(3, C), and at least one of a, b, and c is not a scalar matrix. Suppose that {w i = x i ⊗ y i ⊗ z i | i = 1, . . . , l} is a decomposition for T such that all w i are invariant under T (a, b, c). Then l ≥ 23. It is this proposition that we are going to prove. Notice that we can assume (and will assume below), without loss of generality, that a is not scalar. We need a lemma. Lemma 6. Let U and V be two spaces, Proof. It is obvious that if both A and B are diagonalizable, then A ⊗ B is diagonalizable also. It remains to prove that if one of A and B, say A, is not diagonalizable, then A ⊗ B is not. There exist u 1 , u 2 ∈ U and λ ∈ C * such that Au 1 = λu 1 and Au 2 = λu 2 + u 1 . Also, there exist v ∈ V and µ ∈ C * such that Bv = µv. Now we have When we identify M with V ⊗ V * this transformation corresponds to A ⊗ B, where Ax = ax, Bl = lb −1 . 
It follows from the lemma that both A and B are diagonalizable. So a is diagonalizable, and the transformation l → lb −1 of V * is diagonalizable, whence b is diagonalizable also. Let λ 1 , . . . , λ s be all the distinct eigenvalues of a, let µ 1 , . . . , µ t be the eigenvalues of b, and V = V 1 ⊕ . . . ⊕ V s and V = U 1 ⊕ . . . ⊕ U t be the corresponding decompositions into a sum of eigenspaces. Then V * = U * 1 ⊕ . . . ⊕ U * t is the eigenspaces decomposition for l → lb −1 , and the eigenvalue corresponding to U * j is µ −1 j . Next, we have Hence the set Σ of eigenvalues of Φ on M is the set of all distinct numbers of the form λ i µ −1 j , and the eigenspace correponding to σ ∈ Σ is For any tensor w i = x i ⊗ y i ⊗ z i which is a member of P, the x i is an eigenvector for Φ, that is x i ∈ M σ for some σ. Therefore T σ is the sum of all w i such that x i ∈ M σ . There are at least rk (T σ ) of such w i . Hence we obtain the inequality Observe that if (i, j), (i ′ , j ′ ) ∈ S σ and are distinct, then i = i ′ , j = j ′ . So we can apply Proposition 4 to compute the rank of T σ (with appropriate renumbering of the spaces V i and U j ), and obtain rk ( where d i = dim V i and f j = dim U j . As any pair (i, j) corresponds to some σ, we see that It remains to show that the latter sum is ≥ 23. The proof of Proposition 5, and so of Theorem 2, is complete. 5. Finiteness of the set of candidates. Let P be a hypothetical decomposition of length ≤ 22 for T . It follows from Theorem 2 that there are only finitely many possibilities for the isomorphism class of Aut(P). However, this does not give a guarantee that there are finitely many poosibilities for Aut(P), because for a given finite subgroup X ≤ Γ(T ) the group Γ(T ) can contain, in general, infinitely many subgroups isomorphic to X (say, all subgroups conjugate with X). It is easy to see, however, that if X ≤ Γ(T ) is a subgroup, Y = gXg −1 is a conjugate to it, and A is an X-invariant decomposition for T , then B = gA is a Y -invariant decomposition for T (and, conversely, every Y -invariant decomposition of T is B = gA, where A is an X-invariant decomposition). So when studying the decompositions of T which are invariant under finite subgroups we can restrict our attention and to consider a unique subgroup from each conjugacy class of subgroups. We have mentioned already that Γ(T ) ∼ = P SL(3, C) ×3 ⋋ Q, where Q ∼ = S 3 . The group P SL(3, C) ×3 = Γ 0 (T ) is an algebraic group, and it is easy to see that the conjugation by an element of Q acts on Γ 0 (T ) as a polynomial map. Therefore Γ(T ) is a (non-connected) algebraic group. But it is well known that if G is an algebraic group over an algebraically closed field of characteristic 0, and X is any finite group, then G contains only finitely many conjugacy classes of subgroups isomorphic to X. (See, e.g., [30], Theorem 1, or [28], Ch.2, Theorem 17. Actually, in these sources stronger statements are proved.) Thus, there are only finitely many possibilities for Aut(P), up to cojugacy in Γ(T ). 6. Further restrictions. It is clear that the set of conjugacy classes of subgroups of Γ(T ) ∼ = P SL(3, C) ×3 ⋋ S 3 that are isomorphic to a subgroup of S 22 × S 3 is very large. To give an observable description of this set is a technically difficult task by itself. The author thinks that this set well may contain billions of groups ! So we should obtain further restrictions on possible Aut(P), say to show that this group can not contain certain elements or subgroups. 
The aim of this section is to prove the following statement. Theorem 8. If P is a decomposition of length ≤ 22 for T = 3, 3, 3 , then Aut(P) does not contain elements of the form T (a, E, E), where a = E (and the elements T (a, b, c) such that exactly one of a, b, and c is different from E). (Note that the statement in parentheses is a trivial corollary of the first one. ) To prove Theorem 8 it is suficient to prove the following proposition. Proposition 9. Let g = T (a, E, E), where a = E, let w be an arbitrary decomposable tensor, and {w, gw, . . . g l−1 w} be its orbit under cyclic group g . Then there exist decomposable tensors w 1 , . . . , w k such that w 1 + . . . + w k = w + gw + . . . + g l−1 w, k ≤ l, and all w i are g-invariant. Indeed, suppose that P is a g-invariant decomposition of T . For each g -orbit O ⊆ P there exists a set of decomposable tensors O ′ such that |O ′ | ≤ |O|, all elements of O ′ are g-invariant, and the sum of elements of O ′ is equal to the sum of elements of O. Replacing all O's by O ′ , we obtain aa set of decomposable tensors P ′ such that the sum of elements of P ′ is T , all elements of P ′ are g-invariant, and |P ′ | ≤ |P|. But then |P ′ | ≥ 23 by Proposition 5, whence |P| ≥ 23, a contradiction. (a 1 , b 1 , c 1 ), and moreover a m 1 = b m 1 = c m 1 = E. Now begin to prove Proposition 9. When l = 1 the statement is trivial, so below we assume l > 1. Let m be the order of g. Then l divides m. Moreover, we can assume, by Lemma 10, that g = T (a, E, E) and the order of a is m. Let λ 1 , . . . , λ t be all the distinct eigenvalues of a. Then the eigenvalues of the map A : x → ax of M are the same λ i , the multiplicity of λ i on M equals thrice its multiplicity in the usual sense. Also, the eigenvalues of B : z → za −1 are λ −1 i , and the multiplicity of λ −1 i is again thrice the multiplicity of λ i . Thus, we have decompositions where K i = {x ∈ M | ax = λ i x} , L i = {z ∈ M | za −1 = λ −1 i z} . Let w = x ⊗ y ⊗ z be the decomposable tensor as in the hypothesis of the Proposition. We have g i w = a i x ⊗ y ⊗ za −i . Decompose x = x 1 + . . . + x t , x i ∈ K i , z = z 1 + . . . + z t , z i ∈ L i . Hence w = t i,j=1 x i ⊗ y ⊗ z j . Notice that g acts on the subspace K i ⊗ M ⊗ L j by the multiplication by λ i λ −1 j . The latter equals 1 if i = j, and is a nontrivial m-root of 1 if i = j. Hence whereas the summands x i ⊗ y ⊗ z j with i = j give zero when summed over g . Some of x i or z i may vanish. We assume, up to renumbering, that x i = 0 and z i = 0 when i ≤ s, and x i = 0 or z i = 0 when i > s. At last, note that since g l w = w, we have Thus, the orbit sum for w is equal to l s i=1 x i ⊗ y ⊗ z i . All the tensors x i ⊗ y ⊗ z i are g-invariant. So the orbit sum is the sum of s g-invariant decomposable tensors. This proves the proposition when s ≤ l. Obviously, s ≤ t ≤ 3. As l ≥ 2, the only possible case where s > l is the case s = 3, l = 2. Take this case to a contradiction. We have a 2 x = λ 2 1 x 1 + λ 2 2 x 2 + λ 2 3 x 3 . Since λ i are pairwise distinct, and x i are linearly independent, it follows from Wandermonde that x, ax, and a 2 x are independent. Whence g 2 w = w, a contradiction. The proof of Proposition 9, and so of Theorem 8, is complete.
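The eigenspace bookkeeping in the proof of Proposition 9 can be checked numerically on a small example. The sketch below takes g = T(a, E, E) with a = diag(1, ζ, ζ²), where ζ is a primitive cube root of unity, sums the ⟨g⟩-orbit of a random decomposable tensor over the cyclic group of order m = 3, and confirms that the cross eigencomponents cancel, leaving m times a sum of g-invariant decomposable tensors, exactly as in the proof.

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)

# g = T(a, E, E): first factor x -> a x, second factor y fixed, third factor z -> z a^-1.
zeta = np.exp(2j * np.pi / 3)
a = np.diag([1.0 + 0j, zeta, zeta ** 2])
a_inv = np.linalg.inv(a)

x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def outer3(x, y, z):
    """The decomposable tensor x (x) y (x) z as a 9 x 9 x 9 array."""
    return np.einsum("p,q,r->pqr", x.ravel(), y.ravel(), z.ravel())

# Sum of the <g>-orbit of w = x (x) y (x) z over the whole cyclic group of order 3.
orbit_sum = sum(
    outer3(np.linalg.matrix_power(a, i) @ x, y, z @ np.linalg.matrix_power(a_inv, i))
    for i in range(3)
)

# Proposition 9's mechanism: cross eigencomponents cancel, leaving
# m * sum_i x_i (x) y (x) z_i, where x_i keeps only row i of x (eigenvalue
# lambda_i of x -> a x, since a is diagonal) and z_i keeps only column i of z
# (eigenvalue lambda_i^{-1} of z -> z a^-1).  Each summand is decomposable
# and g-invariant.
collapsed = np.zeros_like(orbit_sum)
for i in range(n):
    xi = np.zeros_like(x)
    xi[i, :] = x[i, :]
    zi = np.zeros_like(z)
    zi[:, i] = z[:, i]
    collapsed += 3 * outer3(xi, y, zi)

assert np.allclose(orbit_sum, collapsed)
print("orbit sum collapses to 3 g-invariant decomposable tensors")
```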
2022-11-15T06:42:49.586Z
2022-11-11T00:00:00.000
{ "year": 2022, "sha1": "f106c4ed0c48e5339174f82463e29f279821c259", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f106c4ed0c48e5339174f82463e29f279821c259", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
208337968
pes2o/s2orc
v3-fos-license
Suprapatellar versus infrapatellar approaches in the treatment of tibia intramedullary nailing: a retrospective cohort study Background Tibial shaft fractures are routinely managed with intramedullary nailing (IMN). An increasingly accepted technique is the suprapatellar (SP) approach. The purpose of this study was to compare the clinical and functional outcomes of knee joint after tibia IMN through an suprapatellar (SP) or traditional infrapatellar (IP) approach. Methods Retrospective analysis was performed in patients with tibial shaft fractures that were treated with IMN through a SP or IP approach between 01/01/2014 and 31/12/2016. The clinical and functional outcomes of the knee were assessed with the Hospital for Special Surgery (HSS) Knee Score. Secondary outcomes included the operation time and intraoperative blood loss. Results A total of 50 patients/fractures (26 IP and 24 SP) with a minimum follow-up of 15 months were evaluated. All fractures were OTA 42. No significant differences were found between the two groups in age, gender, side of fractures, operation time, intra-operative blood loss, and follow-up time. No significant difference was seen in HSS score (P = 0.62) between them. Sub analysis of all the HSS components scores revealed no significant differences between pain (P = 0.57), the stand and walk (P = 0.54), the need for walking stick (P = 0.60) and extension lag (P = 0.60). The other HSS components showed full scores (IP 10 vs. SP 10) in both approaches, including muscle force, flexion deformity and stability components. The range of motion (ROM) component score was superior in the IP group (P = 0.04) suggesting a higher ROM. Conclusions Both SP and IP approach results in equivalent overall HSS knee scores. However, for the HSS component, the IP approach was superior to SP approach regarding the ROM. Background Tibial shaft fractures are primarily caused by highenergy trauma and are the most common long bone fracture seen, with 2% of all fractures occurring in the adults adult [1,2]. The insertion of an intramedullary nail (IMN) with interlocking screws is reported to be a successful surgical approach for treating tibial shaft fractures and allows for early functional rehabilitation [3,4]. Traditional infrapatellar approach for tibia IMN is a popular surgical procedure used in the treatment of tibial shaft fractures. However, IMN insertion through infrapatellar (IP) approach remained technically challenging due to quadriceps muscle force resulting in proximal fracture fragments displacement with the knee in flexion, and an increased risk of valgus and procurvatum deformities following tibial nailing [5,6]. Besides, chronic anterior postoperative knee pain is one of the most frequent complications after IMN insertion, the incidence was reported varying from 10 to 80% [7]. To overcome these issues, the semiextended approach for tibial IMN insertion was first described by Tornetta et al. [8], and later modified to a suprapatellar (SP) approach using a midline quadriceps tendon insertion site by Cole et al. [9]. This new approach suggests that valgus and procurvatum malalignment has been more easily avoided when the knee is maintained in extension and allows for easier anteroposterior and lateral imaging of the tibia [10,11]. However, the main concern of this approach is the potential for damage to the patellofemoral articulation with a concurrent effect on anterior knee pain after intramedullary nail fixation and patellofemoral arthritis [12]. 
There is no reliable evidence on the incidence of patellofemoral joint damage, limiting its clinical application. A recent randomized controlled trial (RCT) showed that suprapatellar approach was superior to infrapatellar approach for the treatment of tibial shaft fracture regarding the functional knee outcomes [13]. Nevertheless, several studies showed no significant differences in pain, knee range of motion or knee functional score between the SP and IP approaches [6,14,15]. To date, there is no consistent conclusion about whether suprapatellar approach is superior to infrapatrellar approach. The current study therefore compared the clinical and functional outcomes between SP and IP approaches for nailing a tibial shaft fracture. This retrospective analysis was to determine whether the SP approach was superior to the IP approach with respect to functional knee outcomes. Patients and methods This retrospective study was completed at Trauma Center of the First Affiliated Hospital of Anhui Medical University. Every skeletally mature patient with a tibia fracture who underwent treatment with an intramedullary nail between January 2014 and December 2016 was identified. The study was approved by the Ethics Committee of the First Affiliated Hospital of Anhui Medical University and conducted in accordance with the Helsinki Declaration of 1975 as revised in 2013. Tibia fractures were graded according to the Orthopaedic Trauma Association Classification (OTA/AO) scheme based on the initial injury films and computed tomography (CT) [16]. Inclusion criteria included extraarticular tibia fractures (OTA). Exclusion criteria included prior fracture of ipsilateral tibia, open fracture, intra-articular tibia fracture, pathological fracture, multiple trauma, and insufficient radiographic or chart data. Patients were divided into two groups: those treated using a SP IMN insertion technique versus those using IP IMN insertion technique, which included insertion through a medial parapatellar technique. Patient demographics and characteristics are shown in Table 1. Procedures were performed by the same senior orthopaedic surgeon who was well trained in both techniques. All fractures were treated with a reamed IMN in a nondynamized mode (T2 Tibial Nail, Stryker). All patients were contacted after a minimum of 15 months following surgery, and the knee outcomes of all patients were evaluated by a trained and experienced orthopaedic surgeon using the modified knee-rating system of The Hospital for Special Surgery (HSS) [17]. Perioperative blood loss and time to surgery were extracted from the surgical notes. For the conventional IP approach group, a 3 cm incision was performed at the medial side of the patellar tendon and the patellar tendon was retracted to the lateral side to get to the anterior tibia at the junction of the anterior cortex and articular surface. Then the knee was flexed to about 130°to obtain the desired nailing entry point, which was defined as medial to the lateral tibial spine on the anteroposterior view and anterior to the articular margin on the lateral view with the guidance of C-arm. The next steps are standard surgical techniques of IMN insertion. For the SP approach group, a 3 cm incision was made approximately 2 cm proximal to the superior pole of the patella (Fig. 
1), then the quadriceps tendon and articular capsule were split lengthwise, after which a specialized insertion cannula (T2 Tibial Nail, Stryker) within a protective sleeve was placed at the desired entry point through the trochlear groove under the patellar (Fig. 1). The entry point was defined as the IP approach with the guidance of C-arm (Fig. 2). After that, IMN was inserted through the specialized insertion cannula as per convention. Data were obtained from the hospital records and the standardized data sheets completed by the clinical teams involved in surgical care. The data collectors received training and supervision from the primary investigators in the identification and classification of complications and process measures. Statistical analyses were performed using SPSS version 19.0 (SPSS, Chicago, IL). Data are given as mean ± standard deviation (SD). After testing for normality with the Kolmogorov-Smirnov test, an unpaired Student's t-test was used to compare the SP and IP groups regarding the total HSS score and all The range of motion (ROM) component score was superior in the IP group (IP 17.5 ± 1.03 vs. SP 16.71 ± 1.65; P = 0.04) suggesting a higher range of motion. The other HSS components showed full scores (IP 10 vs. SP 10) in both approaches, including muscle force, flexion deformity and stability components. No statistical differences were found in the current study between the two approaches regarding complications, incidence of the need for walking stick (IP 2/26 vs. SP 1/24; P = 0.60) and extension lag (IP 2/26 vs. SP 1/24; P = 0.60) ( Table 2). NA: Not applicable. Discussion This study provides the clinical rationale that the SP and IP approaches can get similar knee functional outcomes in the treatment of tibial shaft fracture. The results demonstrated that the SP approach is comparable to the traditional IP technique with regard to the overall HSS knee score. However, for the HSS component, our study showed that the IP approach was superior to SP approach with respect to the range of motion. Evaluation of the overall HSS knee score of SP and IP approaches for the treatment of tibial shaft fracture in this study demonstrated comparable outcomes. Our results are in line with the previous data reporting comparable functional knee outcomes between SP and IP approaches [14,15]. Chan DS et al. compare the clinical outcomes of the knee joint after SP versus IP tibial nail insertion in a prospective randomized study with 42 patients and 12 months of follow-up, and reported no significant difference regarding the Lysholm knee scores [14]. Courtney PM et al. similarly evaluated 24 patients who underwent IP IMN and 21 patients who underwent SP IMN after more than 8 months, and found similar results regarding the Oxford Knee Score [15]. However, a meta-analysis of RCTs indicates that SP IMN has higher Lysholm knee scores than IP IMN [18]. Similarly, Sun Q et al. found a higher Lysholm knee score in SP IMN than IP IMN in a prospective randomized study with 162 patients after a mean of 2 years [13]. However, it must be noticed that type of knee functional score in the previous study was different from the present study, which may generate heterogeneity. In accordance with other studies, the present study revealed comparable operation time and blood loss between SP IMN and IP IMN. These findings are confirmed by Sun Q et al. 
who reported a mean operation time of 71.01 min and an intraoperative blood loss of 22.11 ml for SP IMN, showing no difference compared to IP IMN with operation time of 73.26 min and intraoperative blood loss of 21.67 ml. Chen X et al. conducted a meta-analysis of RCTs and reported that there were no significant differences in the operative time and blood loss between SP and IP groups [19]. However, review analysis of RCTs indicated that SP IMN could significantly reduce total blood loss compared to IP IMN [20]. The difference might be explained by differing surgical techniques, especially during preparation or insertion of the nails and screws. The HSS pain component demonstrated that there was no significant difference between SP and IP groups. This finding goes along with a prospective randomized study operated by Chan DS et al. [14] who compared the visual analog score (VAS) of SP to IP approaches with 42 patients, and reported no significant difference regarding VAS pain scores. Similarly, Both Jones et al. [6] and Courtney et al. [15] found that the VAS pain score of the SP group was equivalent to the IP group. However, multiple studies revealed less postoperative knee pain in SP than IP approaches [13,20,21]. Sun Q et al. conducted a 2-year follow-up of 162 patients in an RCT study and found lower VAS pain scores following SP IMN compared to IP IMN [13]. The etiology of anterior knee pain is undoubtedly multifactorial, which may be related to cartilage injury, patellar ligament injury, iatrogenic damage to the IP nerve, and the protruding nail end at the tibial plateau [22][23][24]. Zamora et al. [25] and Gaines et al. [23] conducted cadaveric studies and found that the SP approach for tibial nailing has a similar rate of soft tissue damage compared to the IP approach. These results might interpret the equal pain score in SP and IP groups in the present study. This study demonstrated that the ROM component was significantly superior in the IP approach. However, the previous studies showed different results compared to our studies [13,14]. Sun Q et al. reported on a 2-year follow-up after SP tibial IMN insertion in an RCT study with 162 patients and found equivalent knee ROM compared to IP IMN [13]. Chan DS et al. reported no significant differences in knee ROM between SP and IP IMN after 12 months of follow up in an RCT study with 42 patients [14]. Song et al. found that the knee ROM was significantly associated with the severity of knee pain [22]. Moreover, Aksahin et al. [26] found that the damage to quadriceps might worsen the patellar tilt due to the sagittal patellar tilt and quadriceps hypotrophy after tibial nailing, besides, the displacement of patella might have a negative impact on the knee function. Therefore, much attention must be paid to minimize the damage to quadriceps, especially in SP IMN insertion. Further limitations of this study should be acknowledged. First, this was a retrospective evaluation for comparing the SP and IP approaches in intramedullary nailing of tibia with a small sample size. A long-term RCT study with a larger scale is needed to further evaluate the efficiency of the SP approach. Second, No arthroscopy examination was conducted to identify the cartilage changes pre-operatively and at final follow-up. Third, the alignment of fracture reduction, the fluoroscopy time and the accuracy of the entry point were not evaluated. 
Further work on biomechanical stability, including finite element analysis, is needed to compare the different fixation methods and to relate the results to their practical applicability in clinical routine.

Conclusions

Based on the clinical outcomes of SP and IP tibial IMN insertion obtained in this study, the two approaches yield similar knee functional outcomes in the treatment of tibial shaft fractures with regard to the overall HSS knee score. However, within the HSS components, the IP approach was superior to the SP approach regarding the range of motion. A larger prospective trial with long-term follow-up is needed to improve statistical power and to establish whether any late sequelae exist.
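The group comparisons reported in the Results (for example, the ROM component: IP 17.5 ± 1.03 with n = 26 versus SP 16.71 ± 1.65 with n = 24) can be reproduced from the summary statistics alone. The sketch below applies a pooled-variance (Student's) t-test via scipy; the methods state an unpaired Student's t-test but do not spell out the variance assumption, so the pooled form is an assumption, and it recovers a p-value close to the reported 0.04.

```python
from scipy import stats

# ROM component of the HSS score, as reported above (mean +/- SD).
t, p = stats.ttest_ind_from_stats(
    mean1=17.5, std1=1.03, nobs1=26,    # IP group
    mean2=16.71, std2=1.65, nobs2=24,   # SP group
    equal_var=True,                     # pooled-variance Student's t-test (assumed)
)
print(f"t = {t:.2f}, two-sided p = {p:.3f}")   # close to the reported P = 0.04
```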
2019-11-29T15:12:14.191Z
2019-10-02T00:00:00.000
{ "year": 2019, "sha1": "f94eb32df07f958851b00642194b26478d03f9db", "oa_license": "CCBY", "oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/s12891-019-2961-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f94eb32df07f958851b00642194b26478d03f9db", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
15751746
pes2o/s2orc
v3-fos-license
A new method of subtotal thyroidectomy for Graves’ disease leaving a unilateral remnant based on the upper pole Abstract Background: The aim of this prospective randomized study was to evaluate the feasibility of subtotal thyroidectomy with leaving a unilateral remnant based on the upper pole. Methods: Patients who underwent the subtotal thyroidectomy and isthmusectomy leaving either a unilateral remnant based on the upper pole (Group I, 79 patients) or the bilateral dorsal thyroid tissue remained (Group II, 89 patients) were compared in operation time, blood loss, recurrence, and postoperative complications. Results: Among 168 patients analyzed, the operation time remained similar, but the blood loss, the reoperation time, and recurrence in Group I were much less than Group II. In addition, no postoperative hemorrhage occurred in Group I. Two patients (2.28%) in Group II underwent recurrent laryngeal nerve damages. Four patients (5.06%) in Group I and 3 patients (3.37%) in Group II experienced transient hypocalcemia. Recurrence only occurred in Group II. Conclusion: In terms of blood loss, reoperation time, postoperative complication, and the recurrence, subtotal thyroidectomy with recurrent laryngeal nerves identification and the unilateral superior pole remnant of the gland provides a better outcome than subtotal thyroidectomy with bilateral dorsal thyroid tissue remnant. Introduction Graves' disease (GD), an autoimmune thyroid disease, has been identified as the most common cause of hyperthyroidism. Currently, 3 different treatments are commonly adopted for GD hyperthyroidism: antithyroid drugs (ATD), radioactive iodine therapy (RAI), and surgical treatment. [1] The treatment protocols for Graves' disease diversify across countries and institutions. In most cases, ATD is often the first-line therapy modality for Graves' disease, followed by RAI or surgery when drug therapy fails. However, surgery still has several advantages, especially in patients with a large goiter, when antithyroid medication fails, and in patients who expect an immediate remission. In these cases, the surgical procedure is still a preferred treatment of Graves' disease. When surgery is indicated for the treatment of GD, 1 factor that remains controversial is the extent of surgery. Total thyroidectomy (TT) leaves almost no remnant thyroid tissue behind, but it can be associated with a higher complication rate. In addition, total thyroidectomy requires patients' lifelong selfadministration of levothyroxine sodium. The majority of GD patients, particularly in developing countries such as China, are unwilling to accept this responsibility because of the long-term inconvenient life style and financial concerns. Subtotal thyroidectomy (STT) has been recommended as a safe procedure due to its lower complication rate, and thus another popular surgical treatment option. [2][3][4] In China, a frequently adopted thyroidectomy is bilateral subtotal thyroidectomy and isthmusectomy with the posterior aspect of thyroid tissue left on either side of the trachea. However, there are several flaws in this operation. First, it is difficult to estimate the appropriate amount of remnant thyroid tissue, which may be associated with high recurrence rate of GD. Additionally, this operation mode may cause massive bleeding due to a large wound and transient or permanent recurrent laryngeal nerves damage caused by poor identification and preservation. 
Finally, when the illness relapses, removal of the remnant tissues may be very risky because the identification and preservation of the recurrent laryngeal nerves and parathyroid glands is even more challenging. (YL and BL contributed equally to this work. The conception of the work was contributed by Prof. Huang. All authors did their best to acquire the data, and every revision was approved by all authors. Our data were mainly analyzed by YL, BL, and R-LL. HJ and Z-NH were in charge of drafting the manuscript. Each author has participated sufficiently in the work to take public responsibility for appropriate portions of the content. This study was supported by Guangdong Natural Science Foundation 2014A030313193.) Considering the above risks, we modified the traditionally used subtotal thyroidectomy and isthmusectomy for treating GD by identifying the recurrent laryngeal nerves and retaining the unilateral superior pole of the gland. This study analyzed the safety and efficacy of this modified surgical procedure for Graves' disease, to determine whether this modified subtotal thyroidectomy could be considered a viable treatment option for patients with Graves' disease. Materials and methods We confirm that the use of human subjects was specifically approved by the Clinical Research Ethics Committee of the Third Affiliated Hospital, Sun Yat-sen University. Before surgery, volunteers were informed of the possible treatments and complications, and provided their written informed consent to participate in this study. The consent procedure was approved by the Clinical Ethics Committee. Study design and study population Patients who underwent thyroidectomy from 2009 to 2014 for Graves' disease at the Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China, were enrolled in our study. The data were accumulated in January 2015, and all authors had access to identifying information during data collection. Patients with the following features were excluded: patients with a large thyroid tumor (diameter of single goiter ≥10 cm), patients whose thyroid remnant was less than 2 × 1 × 1 cm, and patients followed up for less than 24 months. Patients with a large thyroid tumor (diameter of single goiter ≥10 cm) were excluded because a large nodule usually compresses the remnant gland, making it hard for surgeons to preserve an appropriate remnant in the compressed upper pole. Patients with a small remnant gland were excluded because it is hard to determine the proportion of gland removed. Finally, a total of 168 patients with hyperthyroidism were enrolled into this study. We conducted a prospective, randomized 2-armed study: when patients met the study criteria, they were randomized by sealed envelope into 1 of 2 surgical procedures. Patients who underwent subtotal thyroidectomy and isthmusectomy with the unilateral superior pole remaining were assigned to Group I; patients who underwent subtotal thyroidectomy and isthmusectomy with bilateral dorsal thyroid tissue remaining were assigned to Group II. The indications for surgery were persistent or recurrent hyperthyroidism after medical treatment in 128 patients (76.19%), mechanical symptoms due to a large goiter in 30 (17.86%), and increased endocrine ophthalmopathy in 10 (5.95%). Antithyroid drugs were maintained or initiated in the preoperative period to bring the thyroxine and triiodothyronine levels to nontoxic values, if possible.
Beta-adrenergic antagonists (β-blockers) such as propranolol or atenolol were used preoperatively in all patients to control the adrenergic effects of excessive levels of thyroid hormones. All patients were placed on Lugol's solution for 7 to 10 days in the immediate preoperative period. Patients were ready for operation once a euthyroid state was achieved. The operative procedures in both groups were performed by 3 senior endocrine surgeons. Surgery procedures In patients of Group I, the enlarged glands were exposed as in the routine approach. The middle thyroid vein was ligated, the inferior poles of the gland were lifted, and the branches of the inferior thyroid artery and veins were ligated on the capsule of the thyroid gland, superior to the origins of the blood supply to the parathyroid glands. The inferior parathyroid glands were identified and protected. At the level of the inferior thyroid artery, the bilateral recurrent laryngeal nerve was exposed, ascending slightly lateral to the tracheoesophageal groove and entering the larynx. The superior parathyroid gland was identified from the posterior aspect of the gland and pushed aside; then the majority (more than 80%) of the thyroid tissue was removed, and the upper pole on the side of the thyroid gland with relatively less disease was retained. Finally, the blood vessels were ligated (Fig. 1A). In patients of Group II, the upper pole was freed completely and the lobe was divided along the line of resection as outlined (see Fig. 1B). Both parathyroid glands and the recurrent nerve were presumed to be left in their normal locations and were not routinely exposed. The gland between hemostats was divided until the anterior surface of the trachea was reached. The lateral margin of the residual segment of thyroid was sutured to the trachea. Follow-up protocol Follow-up data were obtained at routine clinic visits. All patients were followed up by clinical examination, laryngoscopy, ultrasound, and blood tests every 3 months for the first 2 years. We defined recurrent laryngeal nerve damage as vocal cord paralysis confirmed by laryngoscopy, hypocalcemia as serum ionized calcium less than 1.1 mmol/L, and postoperative hypothyroidism as an elevated TSH value. Information about all these complications was collected. Statistical analysis Postoperative variables reviewed were blood loss, mean operative time, recurrent laryngeal nerve damage, hypocalcemia (transient or permanent), recurrence, and reoperative time. Quantitative variables were expressed as means and compared using Student's t-test. As appropriate, categorical variables were expressed as numbers with percentages and compared with the χ² or Fisher's exact test. Statistical analyses for Student's t-test and the χ² test were performed using SPSS 16.0. A P-value <0.05 was considered statistically significant. Results In the study period, 168 patients (117 females and 51 males) received surgery. Their age ranged from 25 to 65 years, with a median age of 43.7 years. Of these, 79 cases were randomly assigned to Group I, whereas 89 were in Group II. The 2 groups were not significantly different in gender, age, and surgical indications (see Table 1). The mean observation period was 34.9 ± 9.7 months in Group I and 35.1 ± 8.3 months in Group II (P = 0.918, 95% CI = −2.9 to 2.6). No significant difference (P-value > 0.05) in the average operative time was observed between the 2 groups, with 90 ± 8 minutes in Group I and 85 ± 13 minutes in Group II (Fig. 2).
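The group comparisons described above (Student's t-test for continuous variables; χ² or Fisher's exact test for categorical ones) can be illustrated with a short script. The following is a minimal sketch and not the authors' SPSS analysis: the group sizes match the study, but the outcome values and the 2 × 2 complication table are simulated placeholders.

```python
# Illustrative sketch (not the authors' code): comparing two surgical groups with the
# tests named above, using hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical continuous outcome: operative time (minutes), simulated around the
# reported means/SDs (90 +/- 8 vs 85 +/- 13) for 79 and 89 patients.
op_time_g1 = rng.normal(90, 8, size=79)
op_time_g2 = rng.normal(85, 13, size=89)
t_stat, p_ttest = stats.ttest_ind(op_time_g1, op_time_g2)

# Hypothetical categorical outcome: complication counts as a 2x2 table
# [[events, non-events] per group]; chi-square or Fisher's exact test as appropriate.
table = np.array([[2, 77],
                  [5, 84]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)   # preferred when expected counts are small

print(f"t-test: t={t_stat:.2f}, p={p_ttest:.3f}")
print(f"chi-square: chi2={chi2:.2f}, p={p_chi2:.3f}")
print(f"Fisher exact: p={p_fisher:.3f}")
```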
In contrast to the operative time, the mean estimated blood loss (EBL) in Group I (30 ± 10 mL) was much less than that in Group II (60 ± 20 mL) (P-value < 0.05), and the GD recurrence rate was much higher in Group II (3.80% vs 7.87%, P-value < 0.05) (see Fig. 2B). When Graves' disease relapses, the first-line treatment is usually antithyroid medication, but reoperation is a secondary treatment when patients have serious medication contraindications or suspected thyroid cancer. GD recurred in 3 patients in Group I and in 7 patients in Group II. Among them, 2 patients in Group I and 3 patients in Group II received reoperation. The reoperation time for Group I patients (40 ± 6 min) was less than in Group II (100 ± 9 min) (P-value < 0.01). The estimated blood loss at reoperation was 32.5 ± 3.5 mL in Group I and 60 ± 10 mL in Group II (P = 0.037, 95% CI = −51.9 to −3.0). At reoperation there was no recurrent laryngeal nerve damage or postoperative hypocalcemia in Group I; in Group II, 1 patient had recurrent laryngeal nerve damage and 3 patients had postoperative hypocalcemia (1 transient and 2 permanent) (see Table 2). Both groups were followed up from 24 months to 60 months. No significant difference was found in the occurrence of hypothyroidism between the groups, and all affected patients achieved euthyroidism after treatment with levothyroxine for 6 to 12 months. Table 2 also indicates the postoperative complications. No cases of postoperative hematoma were found in Group I, whereas 2 patients in Group II (2.25%) underwent reoperations for a cervical hematoma from a bleeding strap muscle (same day). Two patients in Group II (2.25%) suffered from recurrent laryngeal nerve damage, but recovered half a year later. Two patients (2.53%) in Group I and 3 patients (3.37%) in Group II experienced transient hypocalcemia. Recurrence only occurred in 2 patients in Group II (2.25%). No permanent hypocalcemia was identified in either group. In Group I, 3 patients (3.80%) developed recurrent hyperthyroidism. (Table 1 summarizes the patients' general information; there was no significant difference between the 2 groups, P > 0.05. Figure 2 shows that the reoperation time in Group I was much less than that in Group II, P < 0.05, and that the mean estimated blood loss in Group I, 30 ± 10 mL, was much less than that in Group II, 60 ± 20 mL, P < 0.001.) Discussion Previous studies have shown that subtotal thyroidectomy is a safe and effective treatment for Graves' disease. [5][6][7] However, there are still substantial debates regarding the size of resection and the option of operation modalities. Associated with a lower risk of developing recurrence of disease, total thyroidectomy has become the first-line treatment option for patients with Graves' disease in developed countries. [8][9] Although total thyroidectomy prevents relapse of the disease, it renders patients hypothyroid after surgery, often permanently, and therefore must be accompanied by postoperative thyroid hormone supplementation, which requires lifelong self-administration. The majority of Chinese GD patients are unwilling to accept this responsibility because of this long-term inconvenient lifestyle. The traditional subtotal thyroidectomy may have the following risks. [10] First, it may cause greater estimated blood loss, particularly in patients with an enlarged goiter, most likely due to bleeding from the cut surface of a highly vascular and enlarged thyroid gland.
Second, the bilateral recurrent laryngeal nerves are not routinely identified in this treatment, increasing the possibility of damage. Third, the blood supply to the parathyroid glands during the operation may become insufficient because the inferior thyroid artery is divided, resulting in postoperative hypocalcemia. Fourth, when recurrences occur and a completion thyroidectomy is needed, the bilateral recurrent laryngeal nerves are even more easily damaged because of the adhesions resulting from the first operation. [11][12] Fifth, long-term follow-up showed an 18% hyperthyroidism recurrence rate because of the difficult and inaccurate estimate of residual thyroid. [13] In this study, a modified subtotal thyroidectomy was employed to treat Graves' disease in Chinese patients. In this treatment mode, the bilateral recurrent laryngeal nerve was carefully identified and preserved (see Fig. 4). Operative visualization of the recurrent laryngeal nerves throughout the entire operation was important in eliminating permanent vocal cord paralysis. The thyroid tissue was resected with approximately 3 grams (2 cm × 1 cm × 1 cm) of unilateral upper-pole thyroid gland (about 5%) remaining (see Fig. 5). Sufficient blood supply to the parathyroid glands was ensured by ligating the branches of the inferior thyroid artery and veins on the capsule of the thyroid gland, superior to the origins of the blood supply to the parathyroid glands, which would be important in reducing the incidence of permanent hypoparathyroidism. We named this surgery mode the subtotal thyroidectomy and isthmusectomy with a unilateral superior pole remnant. In our modified subtotal thyroidectomy with the unilateral superior pole remnant, the bilateral recurrent laryngeal nerves (RLN) were routinely identified and preserved, and the unilateral superior pole of the thyroid gland was retained. This modified surgery mode has the following advantages. Preserving the superior pole can markedly reduce blood loss during the operation owing to a smaller cut surface, stronger ligation, and avoidance of the superior thyroid artery; less blood loss in turn provides a cleaner and clearer surgical field that helps the surgeon protect the RLN and parathyroid glands effectively. Active identification of the bilateral recurrent laryngeal nerves before resection was effective in protecting them (0 vs 2 injuries). This result was consistent with Riddell's finding that the rate of RLN damage could be reduced from 2% to 0.6% with regular identification. To protect the parathyroid glands, we kept the inferior artery trunk for sufficient blood supply and retained the superior parathyroid glands in case the inferior parathyroid glands were mistakenly removed. Only 2 patients in Group I had transient hypocalcemia, and both recovered soon, indicating that the new treatment was safe for the parathyroid glands. The recurrence of GD in Group II was obviously higher than that in Group I (7 vs 3, P = 0.037), perhaps because, in patients with large goiters and symptoms of mechanical compression in the middle and inferior lobes, it was easier to estimate the extent of thyroidectomy and to determine the weight of resected thyroid tissue by cutting the middle and inferior lobes and preserving the superior pole. No patients in Group I had cervical hematoma postoperatively, whereas 2 patients in Group II had hematoma and needed reoperation; the hematomas were confirmed to come from the strap muscles, probably because of a less clear surgical field due to more bleeding during the first operation.
Taking all the results together, we conclude that subtotal thyroidectomy leaving a unilateral remnant based on the upper pole is an effective and safe surgical approach for Graves' disease and is associated with less injury and fewer complications. Because of the limited number of cases, more data from multiple centers are needed for further studies.
2018-04-03T04:35:07.272Z
2017-02-01T00:00:00.000
{ "year": 2017, "sha1": "1a521bd4983736f6cd6cb53e41e525fc2c8f1792", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1097/md.0000000000005919", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1a521bd4983736f6cd6cb53e41e525fc2c8f1792", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
136028062
pes2o/s2orc
v3-fos-license
Recent developments in drying of food products Drying is a dehydration process to preserve agricultural products for long period usage. The most common and cheapest method is open sun drying in which the products are simply laid on ground, road, mats, roof, etc. But the open sun drying has some disadvantages like dependent on good weather, contamination by dust, birds and animals consume a considerable quantity, slow drying rate and damages due to strong winds and rain. To overcome these difficulties solar dryers are developed with closed environment for drying agricultural products effectively. To obtain good quality food with reduced energy consumption, selection of appropriate drying process and proper input parameters is essential. In recent years several researchers across the world have developed new drying systems for improving the product quality, increasing the drying rate, decreasing the energy consumption, etc. Some of the new systems are fluidized bed, vibrated fluidized bed, desiccant, microwave, vacuum, freeze, infrared, intermittent, electro hydrodynamic and hybrid dryers. In this review the most recent progress in the field of drying of agricultural food products such as new methods, new products and modeling and optimization techniques has been presented. Challenges and future directions are also highlighted. The review will be useful for new researchers entering into this ever needed and ever growing field of engineering. Introduction Drying is a dehydration process used to remove the moisture present in food products by the application of heat. The heat may be supplied either by hot air or from the solar energy. Drying process is used to preserve the food products for future usage. Drying prevents the growth of bacteria and yeast formation. Drying can be achieved by using open sun drying and greenhouse drying methods. The open sun drying is presented in Fig. 1. When compared to greenhouse drying process, open sun drying process is a slow process; dried products will be of low quality due to contamination of dust particles, damages due to rain and moisture present in the air. Also there is a loss of food products due to insects, birds and animals. The quality of products obtained using greenhouse drying process is better because it is carried out in a closed environment. This closed environment is made using special cover materials like polyethylene, polycarbonate, etc. Greenhouse dryers are differentiated based on the structure of the roof such as roof even span and dome shaped dryer and based on the mode of energy transfer such as active (forced circulation) [6], and passive (natural circulation) [24]. Dryers with greenhouse drying are shown in Fig. 2. The main objective of this study is to provide a comprehensive review of literature related to the recent progress in the field of drying of agricultural food products. Classification of drying systems Solar dryers are classified as direct absorption type, indirect or convection type and the combination of the both. The pressure inside the drying system plays a vital role in drying of food products. Based on the operating pressure the dryers are classified into atmospheric drying, vacuum drying and freeze drying [10]. It is observed that hot air drying and vacuum drying are commonly used for drying of agricultural products. Also it is observed that hot air drying process is simple and economical [25]. 
Vacuum drying like microwave drying is more beneficial than hot air drying, because of short drying time, high drying rate and superior quality of dried products [26]. As yielding of microwave drying process is non-uniform it is normally carried out by combining it with other drying process to achieve better quality. Infrared drying is preferable because of uniform heating and high quality of yield. Even though the quality of dried products is good in freeze drying it is not practiced generally because it is an uneconomical process [23]. The type of dryer used is based on the need and the desired properties of drying products. Some of the new systems like desiccant [41], electro hydrodynamic and hybrid dryers [29], vibrated fluidized bed [19]& [48], fluidized bed [37]& [43], freeze [2]& [34], hot air drying [9], infrared [13,14], [16], intermittent [11], microwave [21], [49] & [53],and vacuum [44] have been utilized and investigated by researchers for drying the food products. Analysis of effectiveness of drying systems Research works have been carried out to analyze the effectiveness of various drying systems by studying the drying characteristics like moisture content, drying air temperature, air velocity, drying rate, drying time, etc. Adam Figiel [2] carried out drying of beetroot cubes using a combined convective-vacuum microwave process by freeze drying. They reported that the results such as compressive strength, antioxidant activity and rehydration have been better than the convection method and also the quality is improved with less drying time in combined method.Amin Hazervazifeh et al [5]studied the drying characteristics of apple slices and found that processing time is less in combined microwave-hot air flow method than microwave radiation and hot air drying process. And minimum energy consumption is observed in microwave radiation process. Atul Patel and Gaurav Patel [6] made CFD analysis of a forced circulation type solar dryer used conventionally for dehydrating vegetables and fruits to obtain operational augmentation in implementing the modifications in pressure and velocity values.A typical fluidized bed dryer is shown in Fig. 3. 
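The drying characteristics mentioned above (moisture content, drying rate, drying time) are derived from simple mass measurements taken during a drying run. The sketch below shows the standard dry-basis calculation on hypothetical weighing data; it is an illustration only and does not reproduce any particular study cited in this review.

```python
# Minimal sketch: computing dry-basis moisture content and drying rate from periodic
# sample weighings during a drying run. The masses below are invented placeholders.
import numpy as np

bone_dry_mass = 20.0                                  # g, mass of completely dry solids
time_h = np.array([0, 1, 2, 3, 4, 6, 8])              # h
sample_mass = np.array([100.0, 82.0, 68.0, 57.0, 49.0, 38.0, 32.0])  # g (solids + water)

# Dry-basis moisture content M = (m_wet - m_dry) / m_dry  (kg water per kg dry solids)
moisture = (sample_mass - bone_dry_mass) / bone_dry_mass

# Drying rate as the negative slope of moisture content versus time
drying_rate = -np.gradient(moisture, time_h)          # kg water / (kg dry solids * h)

for t, m, r in zip(time_h, moisture, drying_rate):
    print(f"t = {t} h: M = {m:.2f} kg/kg, rate = {r:.3f} kg/kg/h")
```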
Dening Jia et al [8] investigated the effect of various parameters on the drying performance of pulsed fluidized vibrated bed of biomass particles and suggested high temperature and gas flow rate for better drying.Fagunwa et al [11] developed an intermittent solar dryer for Cocoa Beans and reported that drying performance is efficient than the traditional method.Jin et al [19] studied the effect of vibration parameters of a vibrated fluid bed dryer using an optical fiber probe approach and proposed an empirical correlation to predict bed voidage.Kejing An et al [23] studied the drying characteristics of ginger rhizome using five different drying process such as hot air drying, freeze drying, infrared drying, microwave drying (MD) and intermittent microwave and convective drying(IM&CD) and found that MD and CD method is good in preserving thermo sensitive materials with less power Liliana Seremet et al [26] investigated the drying characteristics of pumpkin using hot air drying and combined drying process and concluded that the rehydration capacity of hot air drying is higher.Magdalena Zielinska and Anna Michalska [27] evaluated the drying characteristics of blueberries and found that effect of combined hot air convective drying at 90°C and microwave vacuum drying process produces better results in drying time and quality than these processes have been carried out separately. Nadine Sangster et al [32] developed a prototype for an automated cocoa drying house which is s equipped with automated roof which is automatically opened during sunlight and closed if sunlight is not there and fermenter, automatic heaters and a remote control feature for the automated components. Oleksii Parniakov et al [34] studied the effects of pulsed electric fields on vacuum freeze-drying for drying PEF treated vacuum cooled apple tissue. It is observed that electroporated tissue samples shows large pores, fast rate of moisture impregnation and large rehydration capacity hence reported that PEF treatment gives better results.Ronak Daghigh et al [39] carried out a review of solar assisted heat pump drying systems for agricultural and marine products.Ruifang Wang et al [40] carried out drying experiments using a hybrid microwave rotary drying system in soybean drying and studied the effect of drum speed, cracking ratio, etc. Ting -Jie Wang et al [45] analyzed and reported the mechanism of vibration energy transfer using wave propagation. Sensors have been used to detect the wave signals produced and calculate the pressurewave propagation parameters. Yuting Tian et al [50] evaluated the effect of various drying methods on the qualities of the drying product. Mushroom is taken as the studying element under hot air, vacuum, microwave and microwave vacuum methods. They reported that there is a significant rise in the content of total free amino acids and the relative content of sulfur compounds of dried mushrooms in all the four methods. They observed that there is an improvement in nutrient retention and color attributes and larger amounts of taste-active amino acids are maintained and found high rehydration ratio when dried with microwave vacuum method. It is also reported that the collapse in structure also less than the other methods. Modelling and optimization techniques Modelling and optimization techniques like response surface methodology (RSM), fuzzy modelling, etc. 
are used to analyze problems in which one or more responses are influenced by many factors, and to find the quantitative relationship between the response variables and the input control variables. They are also used to determine the significant effect of each factor on the response by developing mathematical models and to find the optimal conditions. The RSM technique has been used by many researchers ([4], [18], [22], [33], [35], [37], [42], [44] & [51]) to optimize the drying effect of various dryers, such as fluidized bed dryers, on different agricultural products such as Artemisia absinthium leaves, Coriandrum sativum leaves, coroba slices, kefir powder, Ganoderma lucidum slices, green peas, olive leaves and soya bean. Response surface methodology (RSM) Response surface methodology is a collection of statistical and mathematical techniques for analyzing problems in which a response is influenced by many factors and for finding the quantitative relationship between the response variables and the input control variables. This method was initially used for model fitting of physical experiments, but was later used for the design of experiments in process optimization. It is the process of approximating a response function based on statistical analysis of data obtained at various design points. The relationship between the control parameters and the responses is given in Equation (1) as Y = f(X1, X2, …, Xk) + ε, where Y is the response variable and X1, X2, …, Xk are the independent variables. The function f is called the true response function. The residual ε measures the experimental error. The coefficient of determination (R-square) is used to evaluate the model [46]. Mathematical modeling techniques of thin layer drying Many researchers analyzed the drying characteristics of agricultural products using various mathematical models such as Newton, Henderson and Pabis, Logarithmic and Weibull, Midilli et al, Page, Modified Henderson and Pabis, Two-Term, etc. The mathematical models developed can be validated using correlation analysis (R), the reduced chi-square (χ²) test and root mean square error (RMSE) analysis. Some of the model descriptions used by the researchers are listed in Table 1, where MR = (Mt − Me)/(M0 − Me) is the moisture ratio, Mt is the moisture content at any given time (kg water kg⁻¹ solids), Me is the equilibrium moisture content (kg water kg⁻¹ solids) and M0 is the initial moisture content. Abano et al [1] investigated the drying characteristics such as air velocity and drying time of tomato slices using the RSM modeling technique, and the ideal drying condition was predicted using the desirability index technique. Akpinar [3] investigated the drying characteristics of mint leaves using an indirect forced convection solar dryer. The experimental values were analyzed using ten models, and exergy analysis was also carried out to study the impact of the parameters considered. Ali Abasi Surki et al [4] optimized the operating parameters in drying of soybean using RSM.
A three-level, four-factor fractional factorial design was applied for the optimization. Diamante et al [7] developed a new mathematical model for thin layer drying of fruits. Gokhan Gurlek et al [12] developed a new solar tunnel dryer for drying of tomato, compared twelve different mathematical models to predict the drying characteristics, and reported that the two-term model gives better results. Hosain Darvishi et al [15] developed seven mathematical models to describe the characteristics of thin layer drying of pepper samples and reported that the Midilli model is more suitable for thin layer samples. They concluded that, in microwave drying of pepper, energy efficiency increased with increasing microwave power and moisture content. Ibrahim Doymaz [16] studied the effect of various infrared power levels on the drying kinetics of carrot pomace and reported that, among the twelve mathematical models developed, the Aghbashlo et al model is better. Ibrahim Doymaz [17] studied the effect of various infrared power levels on the drying kinetics of pomegranate seeds and reported that the drying time is reduced when the power level is increased. They also developed ten mathematical models and concluded that, among the ten models, the Page, Midilli et al and Weibull models have better prediction qualities than the other models. Jose Vasquez et al [20] implemented fuzzy control systems for a solar drying system integrated with a thermal energy storage system and reported that the new arrangement works satisfactorily and saves a considerable amount of energy compared with the old methods in drying of mushroom, plum and peach. Lemuel M. Diamante et al [25] developed a new mathematical model for thin layer drying of fruits. Malaisamy et al [28] designed an efficient drier and implemented modeling with a conventional PI controller and a fuzzy controller for maintaining the temperature in the heating chambers, for the efficient usage of solar energy and solar-powered electrical energy in the heating process. They also compared the responses and concluded that the fuzzy controller shows better performance than PI, and that the cardamom dryer provides better performance than the copra dryer. Midilli et al [30] developed a new model to evaluate the drying characteristics of a single layer drying process and verified the effectiveness of the model with experimental data. Further, they compared the new model with other available models using data obtained from the literature and concluded that it is an effective model. Minaei et al [31] analyzed the drying characteristics of pomegranate arils using 11 mathematical models and reported that the Midilli model is best suited for vacuum drying and the Page model for the microwave drying technique. Pengfei Zhao et al [36] investigated the drying characteristics and kinetics of Shengli lignite and analyzed them using seven thin layer models. They concluded that, among the four drying methods, the vibrated medium fluidized bed gave better results and that the Midilli-Kucuk model was best suited for simulating the dewatering of the lignite.
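To make the model-fitting step in the studies above concrete, the sketch below fits one of the thin-layer models from Equation-type forms listed earlier (the Page model, MR = exp(−k·tⁿ)) to hypothetical moisture-ratio data and computes the validation statistics mentioned in the text (R², reduced χ², RMSE). It is an illustration only: the drying data are invented placeholders, and none of the cited studies used this exact script.

```python
# Minimal sketch of fitting a thin-layer drying model and computing validation metrics.
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# Hypothetical drying times (h) and moisture ratios MR = (Mt - Me) / (M0 - Me)
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
mr = np.array([1.00, 0.78, 0.62, 0.40, 0.27, 0.18, 0.09, 0.04])

popt, _ = curve_fit(page_model, t, mr, p0=[0.3, 1.0])
mr_pred = page_model(t, *popt)

residuals = mr - mr_pred
n_obs, n_params = len(mr), len(popt)
rmse = np.sqrt(np.mean(residuals**2))
reduced_chi2 = np.sum(residuals**2) / (n_obs - n_params)
r_squared = 1 - np.sum(residuals**2) / np.sum((mr - mr.mean())**2)

print(f"k = {popt[0]:.3f}, n = {popt[1]:.3f}")
print(f"R^2 = {r_squared:.4f}, RMSE = {rmse:.4f}, reduced chi^2 = {reduced_chi2:.6f}")
```

The same fitting-and-scoring loop can be repeated over the other model forms (Newton, Henderson and Pabis, Midilli et al, and so on) to select the best-performing one, which is essentially the comparison carried out in the studies reviewed above.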
Reuss et al [38] used TRNSYS simulation model for the wood drying process based on the physical properties of the drying product and concluded that the solar drying process is more advantageous.Siewkian chin et al [42] optimized the drying condition of convective hot air in drying of Ganodermalucidium slices using RSM. They concluded that drying temperature have significant effect than the other factors considered. Vega-Galvez et al [47] studied the drying characteristics of olive-waste cake using convective process with five different temperatures and tested with various mathematical models and concluded that the modified Henderson and Pabis is best suited to describe the drying curves among the other models.Zafer Erbay et al [51] optimized the operating conditions in drying of olive leaves using RSM. Zdravko Sumic et al [52] process parameters such as temperature, pressure and drying time using RSM, regression analysis and ANOVA. They concluded that the results shows improved physic-chemical properties of lyophilized samples when compared to conventional method. Conclusions In this review the most recent progress in the field of drying of agricultural food products such as new methods, new products and modeling and optimization techniques has been presented. The quality of products obtained using greenhouse drying process is better because it is carried out in a closed environment. Hot air drying and vacuum drying are used for drying of agricultural products in which hot air drying process is simple and economical. Processing time is less in combined microwave-hot air flow method than microwave radiation and hot air drying process. In infrared drying, uniform heating and high quality of yield can be achieved. Indirect mode forced convection dryer performance is better than other methods. Solar drying system accommodated with phase change material will accelerate the drying performance of the system even during nights. Simulation models can be used to forecast and analyze the new systems developed. An improvement in nutrient retention and color attributes and larger amounts of taste-active amino acids are maintained and found high rehydration ratio when dried with microwave vacuum method. Further research work in developing hybrid drying systems and drying systems to improve energy savings are needed.
2019-04-29T13:16:58.820Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "5a2b671d1d549705b6d4c46522219448c41d98d3", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/197/1/012037", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "8c59e03c1b9a184423c4cb48ebd3a6f7f2a1dd0b", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Environmental Science", "Physics" ] }
17324314
pes2o/s2orc
v3-fos-license
Urban habitat complexity affects species richness but not environmental filtering of morphologically-diverse ants Habitat complexity is a major determinant of structure and diversity of ant assemblages. Following the size-grain hypothesis, smaller ant species are likely to be advantaged in more complex habitats compared to larger species. Habitat complexity can act as an environmental filter based on species size and morphological traits, therefore affecting the overall structure and diversity of ant assemblages. In natural and semi-natural ecosystems, habitat complexity is principally regulated by ecological successions or disturbance such as fire and grazing. Urban ecosystems provide an opportunity to test relationships between habitat, ant assemblage structure and ant traits using novel combinations of habitat complexity generated and sustained by human management. We sampled ant assemblages in low-complexity and high-complexity parks, and high-complexity woodland remnants, hypothesizing that (i) ant abundance and species richness would be higher in high-complexity urban habitats, (ii) ant assemblages would differ between low- and high-complexity habitats and (iii) ants living in high-complexity habitats would be smaller than those living in low-complexity habitats. Contrary to our hypothesis, ant species richness was higher in low-complexity habitats compared to high-complexity habitats. Overall, ant assemblages were significantly different among the habitat complexity types investigated, although ant size and morphology remained the same. Habitat complexity appears to affect the structure of ant assemblages in urban ecosystems as previously observed in natural and semi-natural ecosystems. However, the habitat complexity filter does not seem to be linked to ant morphological traits related to body size. Habitat complexity is nonetheless a relative concept that depends upon the morphological characteristics of the species that it supports (Bell, McCoy & Mushinsky, 1991). According to the 'size-grain hypothesis' (Kaspari & Weiser, 1999), the perceived permeability of terrestrial habitats to mobile organisms is influenced by their size and morphological traits. An implication of the size-grain hypothesis suggests that organisms living in more complex habitats would be better off being smaller, whereas organisms living in less complex habitats could be larger without an increasing impediment, or cost, to movement (Kaspari & Weiser, 1999;Farji-Brener, Barrantes & Ruggiero, 2004). Many ant studies support the size-grain hypothesis and the relationship between habitat complexity and ant morphological traits in natural and semi-natural ecosystems (Kaspari & Weiser, 1999;Yanoviak & Kaspari, 2000;Espadaler & Gómez, 2001;Parr, Parr & Chown, 2003;Sarty, Abbott & Lester, 2006), although some contradictory evidence does exist (Parr, Parr & Chown, 2003;Teuscher et al., 2009). Habitat complexity can ultimately act as an environmental filter for species through their morphological traits, contributing to structure ant assemblages (Wiescher, Pearce-Duvet & Feener, 2012) and potentially the evolution of ant species over longer timeframes (Gibb & Parr, 2013). In natural and semi-natural ecosystems, habitat complexity is principally regulated by ecological successions (Gibb et al., 2015;Gosper et al., 2015), disturbances such as fire (Parr et al., 2007;Gosper et al., 2015), extreme climatic events or grazing (Boulton, Davies & Ward, 2005;Lindsay & Cunningham, 2009). 
In urban ecosystems, land use management is the principal factor shaping habitat complexity (Byrne, 2007). Management activities, such as mowing or litter removal, are generally controlled and recurrent disturbance events. They can determine and sustain patterns in habitat complexity that cannot be observed in natural and semi-natural ecosystems. Therefore, urban ecosystems, a recent phenomenon from an evolutionary perspective, provide novel combinations of habitat complexity useful to test traditional ecological models and theories using field-based experiments. We therefore investigated the effects of urban habitat complexity upon ant assemblages hypothesising that (i) ant abundance and species richness would be higher in habitats characterized by higher complexity, (ii) the composition of ant assemblages would be significantly different between low-and high-complexity habitats, (iii) based on the size-grain hypothesis, ants living in more complex habitats would be smaller than those living in less complex habitats. Experimental design Three habitat complexity types were identified based on their habitat structural characteristics and previous land-use in south-eastern Melbourne, Australia (Ossola, Hahs & Livesley, 2015). A total of thirty plots were established, ten within each of the three habitat complexity types, namely low-complexity parks (LCP), high-complexity parks (HCP), and high-complexity remnants (HCR) (Fig. S1). Two LCP plots were selected in out-of-play areas of each of five metropolitan golf courses (n = 10), the management practices of these habitats have been similar between sites and consistent over time. LCP plots were characterized by native and non-indigenous eucalyptus trees with a simplified understory. The ground cover consisted of turf grasses and very little litter accumulation due to monthly mowing (average height 5 cm) without the use of irrigation, fertilizers and insecticides. Within each of the same five golf courses, two HCP plots (n = 10) were also selected. While having the same previous agricultural land use as LCP, HCP were not actively managed, allowing a natural formation of a complex understory of shrubs, herbs and grasses, and the accumulation of litter. Two HCR plots were also selected in each of five nearby nature reserves (n = 10), as representatives of the natural habitat of the study area (heathy herb-rich eucalyptus woodlands). HCR plots were structurally similar to HCP plots and they are managed for conservation purposes by local city councils through weeding and native planting programs. Research sites were selected in a 10 km radius to minimize the variation of climatic variables and established on sandy soils belonging to a single soil type (podosols). Research plots (20 × 30 m) were selected in a flat location at a minimum distance of 100 m from each other and from creeks and ditches. There are no records of recent fire in the study area. Habitat complexity and microclimate measurements A number of vegetation, litter and soil variables were measured to assess the structural complexity of the three habitat types. In each plot the number of tree stems, tree basal area and tree height were measured for each tree stem with breast height diameter >8 cm. From these measures we were able to estimate above-ground tree biomass. The volume occupied by understory vegetation was quantified for four vertical strata (0-20, 20-50, 50-100 and 100-200 cm) in each research plot using a point intercept method. 
When understory vegetation intercepted a vertical pole placed at 28 regularly spaced points (5 m point grid), which vertical strata was intercepted was recorded and from this the volume occupied by the understory vegetation (%) was then estimated for each vertical strata. Total understory volume (0-200 cm) was calculated as the sum of the volume of the four strata. Ground cover (litter, bare soil, grass) was recorded at 28 locations within each plot. Three samples of litter were also randomly collected from 50 × 50 cm frames during each ant sampling campaign (see below) to calculate average litter mass. Soil was characterised in term of its bulk density (Wilke, 2005), aggregate size distribution (Six et al., 2002), texture (NSW Government, 2001), porosity (Wilke, 2005), total carbon and total nitrogen, using three soil samples (0-10 cm) randomly taken from each plot. In each plot, litter temperature (2 cm from the soil surface) was measured over 10 months (July 2013-April 2014) using three Thermochron sensors (model DS1922, Maxim Integrated, San Jose, CA, USA) taking readings every 3 h, and averaged to calculate daily, diurnal (6 am-6 pm) and nocturnal (6 pm-6 am) litter temperatures. In each season, soil moisture was measured in each research plot by taking six random point measurements using a ThetaProbe (Model ML2x, Delta-T Devices, Cambridge, UK). Ant sampling Since the aim of the study was to compare ant assemblages in high and low complexity habitat types a single standardised sampling method was preferred (Gotelli et al., 2011). The use of litter extractions for sampling was not possible as there was very little litter in LCP plots. Therefore, five pitfall traps, consisting of standard laboratory glass tubes (2.5 cm diameter) and containing a solution of ethanol and ethylene glycol (50:50), were deployed in each research plot (inter-trap spacing 9 m) and left open for seven days (Ward, New & Yen, 2001;Borgelt & New, 2006;Gibb et al., 2015). Three replicate samplings were conducted over one year using the same trap locations (April 2013, November 2013, April 2014). All ants collected were sorted to genera then morphospecies (Shattuck, 1999;CSIRO, 2014), since morphospecies can provide a good surrogate for ant species richness (Oliver & Beattie, 1996). From this point on, 'morphospecies' is referred to as 'species' for simplicity. Morphometric measurements Head length was measured as the linear distance between the posterior head margin and the posterior clypeus margin, while head width as the linear distance between the head sides above the eyes (Gibb & Parr, 2013). Head length is thought to be an indicator for ant diet, with herbivores species characterized by longer head (Yates et al., 2014). Head width is related to the size of interstices through which ants can move (Sarty, Abbott & Lester, 2006). Pronotum width, a robust predictor for ant body mass (Kaspari & Weiser, 1999;Espadaler & Gómez, 2001), and hind femur length were also measured. The body size index (BSI) was calculated as the product between the head width and the hind femur length (Sarty, Abbott & Lester, 2006). In dimorphic species, major workers were rare (<5% of individuals sampled), therefore morphological parameters were only measured on minor workers (n = 1-6) (Gibb & Parr, 2013), using a calibrated Leica IC80 HD camera mounted on a Leica M80 stereo microscope. 
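As a concrete illustration of how the body size index described above is derived, the sketch below averages hypothetical minor-worker measurements per species and computes BSI as the product of head width and hind femur length. The species names and measurement values are placeholders, not data from this study.

```python
# Minimal sketch: per-species trait averages and body size index (BSI = head width x
# hind femur length). Measurements (mm) below are invented placeholders.
import numpy as np

# species -> list of (head_width_mm, hind_femur_length_mm) for 1-6 minor workers
measurements = {
    "Iridomyrmex sp.1":   [(0.62, 0.71), (0.60, 0.69), (0.64, 0.73)],
    "Solenopsis sp.1":    [(0.30, 0.30), (0.31, 0.29)],
    "Rhytidoponera sp.1": [(1.45, 1.80), (1.50, 1.85), (1.42, 1.78), (1.48, 1.82)],
}

for species, rows in measurements.items():
    arr = np.asarray(rows)
    head_width = arr[:, 0].mean()
    femur_length = arr[:, 1].mean()
    bsi = head_width * femur_length      # body size index (Sarty, Abbott & Lester, 2006)
    print(f"{species}: head width = {head_width:.2f} mm, "
          f"femur = {femur_length:.2f} mm, BSI = {bsi:.2f}")
```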
Data analysis Statistical analyses were conducted using R 1.3.0 (R Core Team, 2012) and the packages vegan (Oksanen et al., 2014), lme4 (Bates et al., 2014), car (Fox & Weisberg, 2011), nlme (Pinheiro et al., 2015), ade4 (Dray & Dufour, 2007) and phia (De Rosario-Martinez, 2013) unless otherwise stated. The ant abundances for the three sampling campaigns were pooled at the plot level because our focus was on the general trend rather than seasonal patterns (Arnan et al., 2013; Gibb et al., 2015), and preliminary analyses showed no significant differences in the composition of ant assemblages among sampling dates. Abundances were fourth-root transformed prior to statistical analyses to balance the contribution of rare and common species (Parr, Parr & Chown, 2003; Lassau & Hochuli, 2004). One of the LCP research plots was invaded by the Argentine ant (Linepithema humile, Mayr, 1868) and was excluded from statistical analyses because of the displacement of most of the other ant species. Species accumulation curves were built on the pooled ant abundance data for the three habitat complexity types. The estimator of sample coverage Ĉ (Chao & Jost, 2012) and the Chao1 estimator of species richness Ŝ (Chao, 1984) were also calculated using the pooled abundance data for each habitat complexity type using the iNext online tool (Hsieh, Ma & Chao, 2013). Linear mixed-effects models with a restricted maximum likelihood (REML) fit were used to test (i) differences in the habitat complexity and microclimate variables measured across the three habitat types and (ii) the effects of habitat complexity type upon the number of ant species and their abundance, using "site" as a random effect (significance level 0.05). Pairwise comparisons were performed using a sequential Bonferroni procedure (Holm, 1979) within the command "testInteractions()" of the R package phia. Correlations between ant abundance and species richness, habitat complexity and microclimatic variables were calculated using Spearman's rank correlation tests (Lassau & Hochuli, 2004). Permutational multivariate analysis of variance (PERMANOVA) on a Bray-Curtis similarity matrix was used to assess differences in ant assemblages between the three habitat complexity types. Type III sums of squares were used for partitioning to account for the unbalanced design. PERMANOVA was conducted using PRIMER 7 and PERMANOVA+ (Anderson, Gorley & Clarke, 2008). Non-metric multidimensional scaling (NMDS) on the same dissimilarity matrix was also used to ordinate ant assemblages in relation to the three habitat complexity types. Correlations between morphological traits were assessed using Spearman's rank correlation tests. The relationships between ant morphological traits and habitat complexity variables were assessed using both the RLQ and fourth-corner methods. RLQ is used to assess the overall relationship between traits and habitat variables, while the fourth-corner method is used to test the significance of individual trait-habitat relationships (Dray et al., 2014). RLQ is a type of co-inertia analysis which assesses the relationships between environmental characteristics (matrix R) and organism traits (matrix Q) mediated by species abundance (matrix L) (Dolédec et al., 1996). A first correspondence analysis (CA) was applied to the matrix L, while principal component analyses (PCA) were applied to the matrices R and Q (fourth-root transformed); a minimal numerical illustration of the richness and dissimilarity calculations described in this section is sketched below.
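As a brief aside from the ordination pipeline, here is a minimal Python sketch (the study itself used R with vegan and the iNEXT online tool) of two of the calculations described above: the Chao1 richness estimator and a Bray-Curtis dissimilarity matrix built from fourth-root transformed abundances. The plot-by-species abundance table is an invented placeholder, not data from the study.

```python
# Illustrative sketch of Chao1 richness estimation and Bray-Curtis dissimilarity.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def chao1(abundances):
    """Chao1 estimator: S_obs + f1^2 / (2 * f2), with f1 singletons and f2 doubletons."""
    a = np.asarray(abundances)
    s_obs = np.sum(a > 0)
    f1 = np.sum(a == 1)
    f2 = np.sum(a == 2)
    if f2 == 0:                      # bias-corrected form when no doubletons are present
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1**2 / (2.0 * f2)

# Hypothetical plot-by-species abundance matrix (rows = plots, columns = species)
abund = np.array([
    [12, 0, 3, 1, 0, 55],
    [ 8, 2, 0, 1, 1, 40],
    [ 0, 9, 4, 0, 2,  5],
])

print("Chao1 per plot:", [round(chao1(row), 1) for row in abund])

# Fourth-root transformation to balance rare and common species, then Bray-Curtis
transformed = abund ** 0.25
bray_curtis = squareform(pdist(transformed, metric="braycurtis"))
print(np.round(bray_curtis, 2))
```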
Results of these ordinations were used as inputs of the RLQ analysis, which generated a final matrix containing the covariance structure between ant morphological traits and habitat complexity variables (Dray et al., 2014). Monte-Carlo permutations (n = 49,999) of the rows of the matrix L (model 2, Dray & Legendre, 2008) and the columns of the matrix L (model 4, Dray & Legendre, 2008) were performed to test the significance of the relationship between species morphological traits and habitat complexity variables. Significance is reported as the maximum of the individual p-values of the two permutation models (Ter Braak, Cormont & Dray, 2012). Using the same matrices used for the RLQ, a fourth-corner analysis was performed to assess the significance of individual relationships between ant morphological traits and habitat complexity variables. Significance was tested using Monte-Carlo permutations (n. 49,999) based on the permutation model 6 (Dray et al., 2014) and the false discovery method to adjust p-values for multiple testing (Benjamini & Hochberg, 1995). Habitat complexity and microclimate HCR and HCP habitats were characterised by similar overall habitat complexity, which was significantly different from that of LCP habitats. LCP habitats had significantly taller trees and greater above ground biomass, but smaller understory vegetation volume, compared to HCP and HCR habitats (Table S1). Litter mass was greater in HCP and smaller in LCP. Bare soil cover did not differ among the three habitat complexity types, while grass cover was greater in LCP. Soils in LCP habitats were significantly less sandy than the other habitat types. Nevertheless, the other soil properties did not significantly differ among the habitat types (Table S1). Average litter temperature was ∼1 • C lower in HCP compared to HCR and LCP habitats, but there were no differences in nocturnal temperatures (Table S1). Seasonal soil moisture did not differ among the habitat types (Table S1). Ant morphological traits All the ant morphological traits were significantly correlated (ρ > 0.85) with each other (Table S2 and Fig. S2). Body size ranged over four orders of magnitude from the large Myrmecia (BSI = 17.55) to the small Solenopsis sp.1 and sp.2 (BSI = 0.09). When Rhytidoponera was excluded from graphical visualisation of traits' distribution, individuals from smaller species (BSI = 0-0.6) were more abundant than those of medium-sized and larger species (Fig. 4). There were no significant differences in the distribution of species body sizes or morphological traits between the three habitat complexity types (Fig. 4). RLQ axis 1 accounted for most of the total co-structure in the analysis (99.07%) ( Table 1). The projected inertia from the matrix R (species) and the matrix Q (traits) on the RLQ axis 1 was 75.00% and 98.47%, respectively. Permutation tests following the RLQ analysis showed no significant general relationship between ant morphological traits and habitat complexity variables (p = 0.12). Pairwise correlation values between morphological traits and habitat complexity variables following RLQ were also very poor and consistently less than 0.13 (Table S3). The fourth-corner analysis indicated that the percentage of soil micro-aggregates was negatively related to all the morphological traits measured. Head length and pronotum width were also related to the canopy complexity (Table 2). 
Nevertheless, when adjusting the analysis for multiple comparisons, none of the pairwise relationships (n = 95) between ant morphological traits and habitat complexity variables were significant (Table 2). Ant assemblages Contrary to our first hypothesis, average ant species richness was significantly higher in habitats characterized by lower complexity. This supports previous studies in natural and agro-ecosystems in temperate Australia, where a negative correlation between habitat complexity and ant species richness was observed (Lassau & Hochuli, 2004; Lindsay & Cunningham, 2009, but see also Andersen, 1986). Interestingly, the total number of ant species sampled from HCP and LCP habitats was similar. Taller trees and less complex understorey vegetation supported greater ant species richness. (Figure 4, Traits distribution: frequency distribution of the ant morphological traits in the three habitat complexity types, low-complexity parks (LCP), high-complexity parks (HCP) and high-complexity remnants (HCR); Rhytidoponera has been excluded from this figure to increase the visibility of the underlying patterns for the less abundant species. Table 1, RLQ analysis: eigenvalues and percentages of total co-inertia for the two main axes of the preliminary ordinations of habitat complexity variables in matrix R (principal component analysis), species abundance in matrix L (correspondence analysis) and ant morphological traits in matrix Q (principal component analysis), together with the RLQ eigenvalues, covariance and correlation with the CA on matrix L, and projected inertia with the R and Q matrices, reported for Axis 1 (%) and Axis 2 (%).) Similarly, previous evidence suggests ant species richness to be negatively correlated with vegetation cover (Lassau & Hochuli, 2004). In a recent study, Philpott et al. (2014) found ant species richness to be positively correlated with vegetation height, but also with the number of shrubs. The complexity of the litter layer negatively affected ant species richness, as has been previously reported for similar woodlands in Australia (Lassau & Hochuli, 2004; Lindsay & Cunningham, 2009). In subtropical forests, litter complexity seems to enhance ant species richness, possibly due to the presence of a higher number of litter specialist species (Campos, Schoereder & Sperber, 2003). However, in our study we did not find any support for this, nor did we observe high abundance and richness of litter specialist ant genera (e.g., Amblyopone, Solenopsis, Plagiolepis, Strumigenys). Although ant species richness was higher in LCP habitats, habitat complexity did not affect ant abundance among the three habitat complexity types. However, ants were more abundant in warmer and drier habitats, even though this seemed to have no effect on the species richness of ant assemblages (Kaspari, Alonso & O'Donnell, 2000; Sanders et al., 2007). Ant abundance was also negatively correlated with litter complexity, as previously observed in natural ecosystems (Lassau & Hochuli, 2004). Interestingly, the abundance of Rhytidoponera, an opportunistic genus associated with disturbed habitats (Yates, Gibb & Andrew, 2011), was higher in the woodland remnant habitats as compared to the urban parkland habitats.
The sampling protocol employed ensured high sample completeness, despite slightly underestimating the number of ant species. Nevertheless, it is rare to reach a complete sampling of invertebrates, particularly ants, where previously undetected species can be found after decades of continuous sampling (Gotelli et al., 2011). In the present study, the number of ant species might have been slightly but consistently underestimated in the three habitat complexity types, as indicated by Ŝ. (Table 2, Fourth-corner analysis: results from the fourth-corner analysis between ant morphological traits (matrix Q) and habitat complexity variables (matrix R) mediated by species abundance (matrix L); significant relationships (P < 0.05) are highlighted in bold, and the error introduced by multiple testing was corrected (p-values adjusted) following permutation model 6 (Dray et al., 2014) and the false discovery method (Benjamini & Hochberg, 1995).) This would not significantly bias our findings when comparing ant species richness among the three habitat complexity types. Our second hypothesis, that the composition of ant assemblages would be significantly different between low- and high-complexity habitats, was confirmed. Previous studies have found habitat complexity affects the composition of ant assemblages in many natural and semi-natural ecosystems (e.g., Culver, 1974; Andersen, 1986; Lassau & Hochuli, 2004). Recent evidence from urban ecosystems also indicates that local factors, such as habitat complexity, are likely to explain most of the variation of arthropod assemblages (>80%), as compared to other landscape factors (Philpott et al., 2014). Nonetheless, the composition of ant assemblages between the two high-complexity habitat types (HCR, HCP) was also dissimilar. This suggests that factors other than habitat complexity, such as land use history or the adjacent landscape, might have played a role in shaping the structure of ant assemblages in the habitats investigated (Bolger et al., 2000; Gibb & Hochuli, 2002). HCP habitats were established between 40 and 100 years ago when the agricultural land surrounding Melbourne was urbanised (Ossola, Hahs & Livesley, 2015). Enough time has passed for the complexity of HCP habitats to increase to levels comparable to those of HCR habitats. It is therefore likely that disturbance or landscape factors, rather than land use history, are responsible for current differences in the composition of ant assemblages between HCP and HCR habitats. Ant morphological traits Correlations between the morphological traits measured were remarkably similar to those recalculated from Gibb & Parr (2013, Table S2) for 24 Australian ant species (average Δρ = 0.051). In the Gibb & Parr (2013) study, a significant negative relationship between hind femur length and habitat complexity was observed, as has been found in previous studies (Gibb & Parr, 2010; Wiescher, Pearce-Duvet & Feener, 2012). Nevertheless, our data suggest that this relationship does not hold when tested at the habitat microscale (i.e., meters) (Gibb et al., 2015). Our fourth-corner analysis indicated that hind femur length was not related to any of the habitat complexity variables measured. In natural unmanaged ecosystems ant body size seems to increase in simpler habitats (Sarty, Abbott & Lester, 2006; Gibb & Parr, 2010; Arnan et al., 2013), but this relationship was not supported in the urban ecosystems investigated.
Overall, we did not find support for our third hypothesis that ants in more complex habitats (HCR, HCP) would be smaller than those living in less complex habitats (LCP). Nor were significant relationships observed between morphological traits and the habitat complexity variables. This suggests that environmental filtering of ant species, as mediated by habitat complexity through ant morphological traits, might not represent the dominant mechanism structuring ant assemblages in urban ecosystems. Various relationships between ant morphological traits and habitat complexity have been found in natural and semi-natural ecosystems, though these were often inconsistent among studies (e.g., Yanoviak & Kaspari, 2000; Farji-Brener, Barrantes & Ruggiero, 2004; Gibb & Parr, 2013). Some previous evidence did not support the size-grain hypothesis (Parr, Parr & Chown, 2003; Teuscher et al., 2009). Yates et al. (2014) found negative relationships between habitat complexity (pasture vs. remnant) and morphological traits (head and femur length) at a landscape scale. Nevertheless, this relationship was not apparent at a smaller scale when looking at vegetation (grass height, herb cover), litter and soil (bare ground, C:N, P) variables. The discrepancies between studies are likely to be determined by factors such as (a) the spatial and temporal scales at which habitat factors filter ant morphological traits (Yates et al., 2014; Gibb et al., 2015), (b) landscape characteristics affecting species movements between habitats, (c) the phylogeny and evolutionary history of species (Parr, Parr & Chown, 2003; Gibb et al., 2015), (d) the mensurative and manipulative approaches used to test habitat-trait relationships (Gibb & Parr, 2010), (e) the variety of traits, habitat metrics and statistical approaches used, (f) the factors shaping habitat complexity (e.g., ecological successions, natural disturbance, human management), and (g) the classification of habitats into discrete complexity types. In our study, the effects of habitat complexity upon ant traits might have been masked by one or a combination of these factors.

CONCLUSIONS

Habitat complexity is likely to affect the composition of ant assemblages in urban ecosystems, as previously observed in natural and semi-natural ecosystems. Nevertheless, our study also suggests that environmental filtering of ant species mediated by habitat complexity might not be the dominant mechanism in structuring urban ant assemblages. Further studies are necessary to disentangle the interactions of habitat complexity with other factors that influence the structure of ant assemblages, such as habitat age, landscape characteristics and scale. Future investigations will also be needed to clarify how different factors shaping habitat complexity might affect habitat complexity-species traits relationships.
2017-06-21T22:33:53.048Z
2015-10-22T00:00:00.000
{ "year": 2015, "sha1": "6bec70541018e39bbb7e77b2b8319bc0bba0e22d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7717/peerj.1356", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a59b70c90835609dd926ca6223390da2f9b7d804", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
211609724
pes2o/s2orc
v3-fos-license
Review Essay: "What's Going On?"

Introduction
I. Just Sayin': Narratives on Race and Resegregation
II. Ferguson, the Black Lives Matter Movement, and the Prosecution of N.Y.P.D. Officer Peter Liang
III. Fisher v. University of Texas
IV. Post-Fisher v. Texas: Discrimination Against Asian Americans at Elite Universities Redux?
Conclusion

Introduction

In We Gon' Be Alright: Notes on Race and Resegregation ("We Gon' Be Alright"), 1 Jeff Chang, Executive Director of the Institute for Diversity in the Arts at Stanford University, relies on his vast knowledge of the cultural history of race in America, hip-hop music, and civil rights to comment on racial progress and race relations, bringing "renewed attention to questions of equity." 2 In his brisk volume, Chang explains why inequality persists and how resegregation is still happening today. Chang's book is timely, as ongoing violence against young African American men across the country and the debates over immigration and affirmative action destroy any professed color-blind vision of America. Chang ultimately eviscerates the notion held by some that this country entered a "post-racial" era after President Obama's election. 3 Justice Sotomayor echoed these sentiments in her dissent in Schuette v. Coalition to Defend Affirmative Action, 4 condemning the majority for upholding a Michigan state ban on racial affirmative action. In the dissent, she forcefully argued that race still matters because it has been used to prevent access to the political process, produces stark socioeconomic disparities, and serves as a basis for how society reacts to a person. 5 Given the ubiquity and permanence of race, she writes, "The way to stop discrimination on the basis of race is to speak openly and candidly on the subject of race, and to apply the Constitution with eyes open to the unfortunate effects of centuries of racial discrimination." 6 Chang does just this, by writing openly and candidly about race in seven opinion editorial-style essays drawn from his personal experiences as a native Hawaiian of Chinese descent and as University of California at Berkeley student body president, and from his ethnic identity as an Asian American. A two-fold overarching theme flows throughout: how the interconnection of inequality and segregation affects us all, and the necessity of subjugated racial minorities acknowledging and understanding the similarities and differences among minority groups to discover common goals to stop racial discrimination. Part I of this review summarizes parts of We Gon' Be Alright, setting up the landscape for the extended analyses in subsequent sections. Part II expands upon Chang's narrative about the Black Lives Matter Movement to discuss the recent prosecution of New York Police Department Officer Peter Liang and its impact on the national debate on police brutality and deadly violence against young African American men.
Part II also explains why there are tremendous possibilities for coalition building at the grassroots level with community organizing, despite cultural and class differences; it also discusses the potential conflicts and impediments that may arise during the building process. Part III analyzes Fisher v. University of Texas, 7 particularly focusing on Justice Alito's dissent, which dubiously relies on Asian Americans to argue against the constitutionality of the University of Texas's affirmative action program. Unfortunately, because Fisher was announced after We Gon' Be Alright's release, Chang missed an opportunity to strengthen his argument that Asian Americans play a major role in advancing race relations as honorary whites. As such, this section, combining law, social science, and race theory, tries to bridge together Chang's analysis and the Court's holding in Fisher. Finally, Part IV builds on Chang's analysis and the holding in Fisher, exploring the latest litigation brought by Asian American students against affirmative action in higher education.

The New Republic (May 23, 2017), https://newrepublic.com/article/142825/Clarencethomass-rulings-race-idisyncratic (explaining that adherents to colorblindness believe that racism is no longer a major problem in American society); see also Jeff

Undoubtedly, We Gon' Be Alright adds to the cultural studies literature by expanding the traditional black-white dichotomous understanding of race relations, which inadequately addresses the wide demographic spectrum of racial and ethnic issues. 8 A binary racial paradigm does not account for interracial conflict, anti-Asian violence, the debate over affirmative action, or disparities in criminal prosecution and sentencing. For too long, bipolarity has forced racial groups to favor one race over another, while conforming to a racial hierarchy that places whites at the top, African Americans at the bottom, and Asian Americans and other non-whites somewhere in-between. 9 Given these current discriminatory issues that racial minorities face, Chang reaches two major conclusions. At the political level, Chang's calls for more police accountability and for defending affirmative action present opportunities for all communities of color to work together in fighting racial inequality. At the theoretical level, the Liang controversy and the affirmative action debate show how Asian Americans can be treated more like whites than African Americans in an inadequate, yet traditional, black/white binary in conversations about race in America.

I. Just Sayin': Narratives on Race and Resegregation

Race and diversity go hand-in-hand. Early on, Chang explains that the meaning of diversity has been lost during its transition from inclusion and cultural change to serving as a euphemism for racial exclusion and "otherness." Oftentimes, half-hearted commitments to diversity have manifested themselves in mere number-counting and superficial appearances. Chang says it was mostly white legislators and white voters who transformed diversity into a buzzword after the Court's ruling in Regents of the University of California v. Bakke (Bakke), 10 which effectively fused "diversity" with "affirmative action" when diversity became the only rationale for defending affirmative action.
11 "[D]iversity has been exploited and rendered meaningless" because the fungible term can mean practically anything, including political view, social class, gender, extracurricular activities, or anything else beyond its original intent of addressing social or racial inequality. 12 After offering a litany of examples of failed diversity efforts by universities and companies, Chang reminds readers that there is still more that must be done. Despite assurances of "diversity increases," members of affinity groups appearing in university recruitment brochures, and symbolic calls by business leaders for more inclusion, meaningful minority representation is still lacking. 13 There is also great pushback against diversity from whites, who Chang says find demographic and cultural change unsettling to their privilege. 14 Fully aware of this, and seizing an opportunity, then-presidential candidate Donald J. Trump mobilized segments of American society by capitalizing on the unequal division along racial lines and espousing nativist sentiments on the campaign trail. 15 Chang argues that Trump fueled anxieties held by whites feeling vulnerable about their social and economic positions by blaming migrants, Muslims, African-Americans, women, and others deemed undeserving of American citizenship. 16 According to Chang, from the moment Trump demanded President Obama's birth certificate, through the Republican primaries, to the later stages of the campaign, Trump gained the support of frustrated and enraged white voters "undone by skyrocketing economic inequality, distrustful of big business and media, ignored by elites" to establish his political base. 17 Resegregation in our neighborhoods, in our schools, and in the culture is another factor that works against inclusion. 21 Surprisingly, many people do not realize that this is happening because this resegregation is obfuscated by the culture wars, preventing us from understanding it or coming up with solutions. 22 To Chang's chagrin, wealthy whites in cities like San Francisco and Oakland have displaced the poor and subjugated them from their former communities as the gaps between them and non-whites continue to widen "[i]n terms of poverty, annual income, wealth, health, housing, schooling, and incarceration." 23 Much of this can be attributed to the tech boom, which has forced many of the disenfranchised to relocate to underfinanced cities outside of the Bay Area. 24 At times, Chang dives deep to discuss specifics. For example, Chang devotes a chapter to discussing the boycott of the Academy Awards to highlight the lack of women and racial minorities in the overwhelmingly white motion picture academy. 25 Chang starts with April Reign's #OscarSoWhite hashtag that mobilized audience anger about the absence of black actors, directors, and others among the 2016 nominations, and then surveys the history of the inadequate representation of racial minorities in Hollywood, highlighting the limited roles available for African Americans and Latinos, and the almost non-existent opportunities for Asian Americans. 26 With this racial reality, Chang says it is unfortunate that even though African Americans, Latinos, and Asian Americans share common goals, these groups have often unknowingly worked against one another within a white narrative. 27 During the 2016 ceremony, while introducing the accountants who tally the votes, host Chris Rock invoked the model minority stereotype by referring to three Asian American kids carrying briefcases as dedicated and hardworking.
He followed up with a remark about the audience sending out tweets using phones made by child laborers. 29 Not to be outdone, Sacha Baron Cohen, in a rant embracing racial stereotypes, separately referred to the animated Minions as "hardworking tiny yellow people with no dongs." 30 Chang explains that these comments were insensitive and insulting, exemplifying how communities of color are not listening to one another. "Cohen's 'post-racial' humor turned on the shock value of saying racist things in a faux-clueless manner to an audience that knew they were racist jokes told by white liberals for white liberals." 31 Here, Chang poses important questions. Did the audience fail to understand the racism because they wrongly believe that positive stereotypes are not offensive? Or perhaps they thought that racial jokes about Asian Americans, like racial jokes about whites, are relatively harmless? 32

Unmistakably, We Gon' Be Alright's thrust is found in the last chapter, "The In-Betweens: On Asian Americanness," where Chang explains why affirmative action has become a difficult ethical issue for Asian Americans, who have been bestowed by whites with the model minority stereotype and perceived as having achieved a nominal "honorary white" status through acculturation, education, and professional achievement. 33 But elsewhere, Professor Lopez cautions that the racial shifting of Asian Americans toward whiteness is an example of how society is moving away from the black/white racial paradigm toward a hierarchy of "colorblind white dominance" whereby whites remain racially dominant, and who is considered "white" will be determined based on social-racial lines instead of biology. 34 Within this new racial paradigm, "whites" adhere to a color-blind ideology. 35 Chang is critical of Asian American groups opposed to affirmative action, including Asian American students who perceive affirmative action as a policy that unjustly benefits less-qualified African Americans and Latinos and limits their own chances of admission to prestigious universities. 36

("From the 1960s to the 1990s, profiles of whiz kid Asian Americans became so common as to be clichés"); Natsu Taylor

These groups believe admission decisions should be based on merit alone without any consideration of race. 37 For them, racial preferences in admissions are a continuation of the historical exclusion against Asian Americans. 38 However, Chang disagrees with the premise of their claims, and explains why these Asian Americans are wrong to believe that they are harmed by affirmative action. First, Asian Americans who oppose affirmative action are not considering the goals of diversity as separate from the history of exclusion against a racial group. 39 If they did, Chang says, they would support affirmative action, because past discrimination against Asian Americans does not justify discrimination against African Americans and Latinos going forward. Second, "Asian American academic success, regrettably opened the door to the conservative right to sway Asian Americans towards white privilege." 40 Chang insists that when these conservative groups refer to Asian Americans as innocent victims of affirmative action, they really are referring to whites, and Asian Americans are being used as a wedge by conservatives to divide racial groups. 41 Third, once made aware of these truths, Chang emphatically suggests ambivalent Asian Americans can play a major role in transforming and advancing race relations.
As honorary whites, they can choose to support affirmative action instead of perpetuating white dominance by seeking its elimination. "[T]he days are over when Asian Americans should think only in terms of their self-interest, that Asian Americans ought to think about what it means to fight for justice and equity for all." 42 In illustrating the stark outcomes brought by a race-blind admissions policy and how detrimental it is for Asian Americans to be short-sighted about affirmative action, Chang refers to San Francisco's Lowell High School controversy, which pitted Asian American civil rights groups against one another and which resulted in a failure for diversity and equity. Unfortunately, to the book's detriment, Chang leaves out many of the facts of the lawsuit, which began in 1994 when Chinese American groups, who wanted Chinese American students to have an equal opportunity to compete for admission to the magnet high school, filed a class action lawsuit in federal court. They challenged the school desegregation consent decree issued in 1983 to reenroll more African American students into the student body because it capped the percentage of Chinese Americans at 45 percent of the school's enrollment and required Chinese American freshman applicants to score higher than whites and other candidates. 43 After years of protracted litigation, the parties entered into a settlement which eventually resulted in the school having a student body lacking diversity: the current student body is 57 percent Asian, 14 percent white, 10 percent Latino, and 2 percent African American. 44 Lowell High School set a national trend followed by high schools with race-blind and merit-focused policies that look only at standardized test scores: Stuyvesant in New York, Monte Vista High School in Cupertino, Thomas Jefferson High School for Science and Technology in Alexandria, and Boston Latin School. 45 At these high schools, Asian American students significantly outnumber students from other racial backgrounds. 46 This lack of diversity compels the question: How can these schools admit more students who are not Asian or white to their student bodies? More broadly, this diversity conundrum extends to colleges and universities. If affirmative action were abolished, major university campuses would likely have predominantly white and Asian American student bodies. By Chang's account, whites are three times as likely to be admitted to selective universities as Asians with a similar academic record. 47 Some scholars supplement Chang's outlook while others detract from it. Professors Frank Wu and Jerry Kang argue that without affirmative action, more whites with lower test scores would be admitted over Asian Americans. 48 In contrast, other studies show that more Asian Americans than whites would be admitted. 49 Irrespective of these divergent opinions, there is a general consensus that there would be fewer African Americans, Latinos, and Southeast Asians, such as Vietnamese, Hmong, and Filipino people, in higher education without affirmative action. As shown through the elimination of affirmative action in California after the California Board of Regents ended the use of race as a criterion for student admissions and the enactment of Proposition 209 in 1998, which banned affirmative action from state universities, there has been a significant drop in the admission rates for African American and Latino freshman applicants at UC Berkeley and UCLA. 50
A Department of Justice investigation was launched after the protests over Brown's killing uncovered intentionally racist and unconstitutional practices by the Ferguson police and Ferguson municipal courts. 55 Chang suggests that the Black Lives Matter movement was more than organized protests against police violence. These activists advocated for all persons on the margins of society, brought attention to prisoners, domestic workers, and migrants, and forced Americans to rethink culture and understand racial justice issues. 56 Next, Chang expands the largely black/white narrative of police violence and misconduct by discussing the prosecution of rookie New York Police Department (N.Y.P.D.) Officer Peter Liang. Unfortunately, Chang describes the Liang case with such broad strokes that he misses an opportunity to discuss it in greater detail. Officer Liang was prosecuted for the shooting death of 28-year-old Akai Gurley in a dark stairway in the Louis H. Pink Houses in Brooklyn. 57 Liang and his partner were patrolling different floors of the housing project simultaneously and, consistent with police policy, had their guns drawn when Liang opened a door. 58 When Liang's gun went off, a bullet ricocheted off a wall and struck Gurley in the heart. 59 Instead of helping Gurley as he lay in a pool of his blood, Liang called his union representative with concern about losing his job. 60 Gurley later died at a hospital. The defense argued at trial that the gun accidentally went off. A jury convicted Liang of manslaughter and of official misconduct for failing to assist Gurley. 61 Liang's supporters believed he was selectively prosecuted because Liang was the first N.Y.P.D. officer in over a decade convicted in a line-of-duty shooting, while white officers in other misconduct cases were not prosecuted or received nominal punishment. 62 Justice Danny Chun reduced Liang's manslaughter conviction to criminally negligent homicide and sentenced him to five years of probation and 800 hours of community service. 63 Initially, the Brooklyn District Attorney's Office and Liang appealed, but Liang later waived his right to file a motion to vacate his conviction, and in turn, the prosecutors withdrew their appeal. 64 The City agreed to pay more than $4 million to settle a wrongful-death lawsuit brought by the Gurley family. 65 Liang's prosecution generated massive Asian American activism and drew a rally of 10,000 in April 2016. 66 This show of support for Liang was reminiscent of the outcry by Pan-Asian American coalition groups after the violent murder of Chinese American Vincent Chin thirty-five years before. 67 Liang received nationwide support from the Chinatown community, composed mainly of immigrants, who insisted that the 28-year-old officer was scapegoated in a prosecution that did not involve an altercation and in a climate of ongoing protests by African Americans against police violence. 68 Their concerns were shared by Professor Stephen Saltzburg, who believed Liang would not have been prosecuted, or would have been acquitted, but for the movements clamoring for police accountability. 69

67. Chin was adopted and raised in a working-class Chinese-American family and was having his bachelor party at a Detroit bar. See United States v. Ebens, 800 F.2d 1422, 1427-29 (6th Cir. 1986) (describing the brutal attack on Chin). Believing him to be Japanese and a source of economic competition, two white men who had been recently laid off from a Chrysler plant started a fight and fatally struck Chin several times in the head with a baseball bat. Chin's killers were fined $3,000 and ordered to pay $780 in court fees but never went to jail. At that time, the popular perception was that Asian Americans and immigrants were not minorities protected by civil rights laws. Chin's killing sparked outrage and galvanized the Asian American community into protest.

Liang, the son of immigrants, was raised in Chinatown. Many of the foreign-born Chinese protesters considered Liang as Chinese and not American. Much of the media attention was placed on these Asian immigrants and the conservative Asian groups that were loudly supporting Liang. 70 According to Chang, Liang seemingly acquired the status of conditional whiteness because he represented law enforcement, yet he was not afforded the protections that white officers are normally given. Believing Liang supporters were naïve to think that Liang would be afforded all the privileges of whiteness, Chang rhetorically asks: Did they really believe the killing of Akai Gurley should be less indictable because it came at the hands of an Asian American officer? Were they really arguing that if hundreds of thousands of people had not taken to the streets in a freedom movement against state violence, this Chinese American police officer would have been afforded all the privileges offered a white cop who had taken the life of a Black person? 71 On the flipside, assimilated Asian Americans, many of whom supported the Black Lives Matter movement and similar racial justice projects that sought solidarity and police accountability, were against any special treatment for Liang due to his race. 72 These Asian Americans argued that, as a matter of principle, Liang should not be afforded "white" privilege and immunity from prosecution, which has frequently been granted to white officers who shot Black civilians.

Though not considered by Chang, readers may ask themselves: what explains the different positions held by the two opposing Asian American camps? A possible theory is that Asian immigrants who insisted that Liang was scapegoated were not familiar with the ugly racial history of this country, and therefore did not fully understand the African American experience or the Black Lives Matter movement. Nor did they realize that Asian Americans are beneficiaries of the civil rights struggles of the 1960s, 74 or recall that African American and Asian American communities previously worked together in seeking institutional reform of the N.Y.P.D. "stop-and-frisk" practice. This naivety could have made it easier for Asian immigrants to inherit and be influenced by American racial stereotypes as depicted in the media. Beyond the Liang case, the fluidity in the racial positioning of Asian Americans as functionally white or constructively black is apparent in several recent cases involving them. In each case, the racial implications were downplayed or not mentioned at all. First, a few days before Christmas 2014, N.Y.P.D. Officers Wenjian Liu and Rafael Ramos were sitting in their patrol car in Brooklyn when they were killed by Ismaaiyl Brinsley in an ambush.
75 Brinsley, an African American with an extensive criminal history, traveled from Baltimore after shooting his girlfriend, with the intent to kill police officers in retaliation for the killings of Eric Garner and Michael Brown. 76 Apparently, it made no difference to Brinsley that Liu and Ramos were not white because their police uniforms represented whiteness. Second, Jiansheng Chen, a 60-year-old retired restaurant worker and grandfather playing Pokémon GO, was shot to death in his van by a security guard over a disagreement in Virginia. 77 Third, just before graduation, Seattle high school student Tommy Le was first tased and then shot and killed by sheriff's deputies responding to a 911 report of a disturbance by an armed man. 78 The three officers who arrived on the scene said they mistook the pen Le was carrying for a sharp object that resembled a knife. 79 Fourth, United Airlines singled out 69-year-old Dr. David Dao by requesting Department of Aviation security officers to remove him from a plane. To the shock of other passengers, a bloodied Dao was forcibly dragged off the plane despite his repeated protestations. 80 The emotionally disturbing incident was captured on a video that went viral, sparking public outcry and creating a public relations debacle for United Airlines. Collectively, these cases pose the question: Was the race of the victims initially minimized due to unconscious attitudes and cognitive bias? 81 On this issue, Implicit Association Tests (IAT) show that individuals unconsciously express preferences for and attribute positive characteristics to individuals who are like them. 82 Conversely, they react negatively toward and attribute negative characteristics to those outside of their social and racial groups. 83 According to a 2015 Pew Research Center Report, two IAT studies show that fifty percent of white subjects tested held subconscious preferences for other whites over Asian Americans, and forty-eight percent of whites held subconscious preferences for other whites over African Americans. 84 In light of the power of unconscious biases, is it possible that Asian Americans have been considered as somehow mattering less than whites or African Americans in the four cases discussed? Consider the following. In the case of Chen, did his inability to speak English and his immigrant background contribute to the security guard's perception of Chen as an "other," which made it easier to shoot Chen at least five times? As for Tommy Le, would he have been shot in the arm and back if he were white instead of Vietnamese American? If Le were perceived as an American and not as a foreigner, would the narrative of the shooting by the Sheriff's Department change as much as it has? 85 Would the optics be different if Dr. Dao were an African American being dragged off a commercial airplane?

Release, Asian Americans Advancing Justice, Asian Americans Advancing Justice Condemns Grand Jury's Failure to Bring Charges Against Darren Wilson in Michael Brown

Hours Before High-School Graduation, The Seattle Times (June 28, 2017, 3:56 PM), http://www.seattletimes.com/seattle-news/crime/bubbly-kid-was-fatally-shot-by-king-countydeputy-hours-before-high-school-graduation. 79

85. An analysis of the racialization of Asian Americans would not be complete without addressing the way in which Asian Americans are attributed with foreignness. For the uninitiated, Asians, like Latinos, are often perceived as foreigners even though they were born in this country or their families have been rooted in the United States for generations. A particularly egregious example was the mass internment of Japanese Americans during World War II, many of whom were born in the United States and were fully assimilated into American society.

III. Fisher v. University of Texas

As reflected in mainstream media, affirmative action supporters hailed Fisher as a major victory for fairness and racial diversity. Edward Blum, Executive Director of the Project on Fair Representation and a longtime advocate for color-blindness who is credited with being responsible for gutting the Voting Rights Act, orchestrated the lawsuit against the University of Texas (UT). 86 At the center was the UT admissions program that was designed to admit a bright and diverse entering class, to alleviate the lingering effects of racial discrimination, and to graduate a diverse student body for the professional workforce. In 2008, Abigail Fisher, a white female high school student, applied for undergraduate admission to UT's flagship campus in Austin and was rejected. 87 She filed suit, claiming that UT's consideration of race in admission decisions was in violation of the Equal Protection Clause of the Fourteenth Amendment. 88 In upholding the constitutionality of UT's admissions program, Justice Kennedy concluded that UT's diversity goals satisfied the strict scrutiny standard, which requires government racial classifications to advance a compelling interest, and that the holistic aspect of the admissions program was needed to reach the diversity goals of the university's freshman class. 89 With this ruling against Fisher, an unrelenting Blum filed a new lawsuit a year later in Texas state court arguing that the use of racial and ethnic preferences by UT in admissions violated Texas law and the State Constitution. 90 Asian Americans were drawn into the Supreme Court's affirmative action jurisprudence by Fisher and Justice Alito. In Fisher's brief, she referred to Asian Americans in arguing that Texas's use of race in admission decisions was detrimental to Asian Americans and subjected them to the same inequality as white applicants, thereby exacerbating classroom diversity problems. 91 UT's response brief did not mention Asian Americans because they were neither beneficiaries of its admission plan nor underrepresented minorities. 92 Asian American interest groups on both sides of the issue assumed an active advocacy role by filing amicus curiae briefs. Asian American interest groups that loudly opposed affirmative action received more media attention than the Asian American groups that supported it. In their amicus brief, the Asian American Legal Foundation (AALF) and the Asian American Coalition for Education argued that the university's admission program uses impermissible racial balancing by including Hispanics in the program but excluding Asian Americans. AALF argued that Asian Americans were harmed the most by the university's affirmative action program, and that the exclusion of Asian Americans as beneficiaries diminishes their value. 93 As a counter, in their brief, Asian Americans Advancing Justice (AAAJ), joined by 150 civil rights groups, advocacy organizations, bar associations, and business organizations, representing the majority of Asian Americans supporting affirmative action, argued that Fisher used Asian Americans as pawns to strengthen her arguments.
They took issue with Fisher's characterization of Asian Americans as innocent victims burdened by affirmative action programs. 94 Remarkably, Justice Alito transformed Asian Americans into honorary whites and made them the centerpiece of his dissent. 95 Early indicators that Alito was going to place the spotlight on Asian Americans materialized five years before in a series of robust questions during oral arguments in Fisher I. At the time, Alito was particularly interested in how UT's program impacts Asian Americans. When asking whether Asian Americans were treated fairly in the admission process, he insinuated that UT lumped together all Asian groups to support its decision to exclude Asian Americans as beneficiaries, and then Alito expressed skepticism about whether the Texas plan appropriately accounts for the admission rates of Asian American subgroups. Despite UT's assurances that this self-identification process was an accurate measure for Asian American students, Alito argued that Asian Americans appear "overrepresented" only because the major groups and subgroups of Asian Americans on campus (Chinese, Japanese, Korean, Vietnamese, Hmong, and Indian students) are lumped together as a monolithic group. Fisher v. Univ. of Texas at Austin, 136 S. Ct. 2198, 2229 (2016). From Alito's perspective, Asian Americans are not overrepresented based on state demographics. Id. Alito's concerns may have stemmed from his belief that UT has a poor history of actively recruiting Asian American applicants in the years leading up to Fisher. Though Alito never mentioned it in his dissent, the weak recruitment of Asian American students was particularly noticeable in UT's 1992 affirmative action program, which excluded Asian Americans and was deemed unconstitutional four years later, in 1996, by the Fifth Circuit in Hopwood v. Texas (Hopwood). 78 F.3d 932, 945-46 (5th Cir. 1996). Hopwood revealed that UT valued African Americans and Mexican Americans more than Asian Americans, since the university chose to give preferences to the former two groups. On this issue about the place of Asian Americans in the Texas program, Professor Gabriel Chin comments that such a decision "sends a signal of the valuation of the race in the eyes of the law school if the law school helps some races . . . ."

Alito's festering concerns about UT's discrimination against Asian Americans manifested in a boisterous fifty-one-page dissent, joined by Chief Justice Roberts and Justice Thomas. Alito's dissent began with a detailed critique of UT's policies, and then argued that UT failed to define the term "critical mass" or to explain how race and ethnicity were used to achieve that goal. 97 Alito insisted that the UT program could not satisfy strict scrutiny because the university offered only vague, amorphous definitions of critical mass and of how it measures diversity on campus, and that the majority decision gives too much deference to UT. 98 Throughout his dissent, Justice Alito effectively functions as a self-anointed advocate for Asian Americans opposing affirmative action. 99 As a rejoinder to the single mention of Asian Americans in the majority's opinion, Alito did the reverse and over-relied on Asian Americans to avoid talking about white interests and white victimhood. Alito purposefully minimized references to Fisher as being a white woman. This avoidance of the plaintiff's white identity was noticed by one academic, who argued that Asian Americans were used as a proxy for whites.
Justice Alito mentions white people only ten times . . . , and not once does he use the word in reference to Fisher herself. Yet the words "Asian Americans" appear sixty-two times in his dissent. If it were not for the ubiquity of Abigail Fisher's image in the media today, one might think that Justice Alito was examining the petition of a person like me, a Chinese American. 100

Alito's use of Asian Americans to argue that affirmative action discriminates against whites is the kind of argument that Professor Alfred Yen warns about: when Asian Americans are assigned the model minority stereotype, they are "given white attributes mak[ing] it possible [to] argue about the interests of whites without ever mentioning whites." 101 Arguably, Alito's use of the racial identity of Asian Americans to further anti-blackness reached a high point when Alito separated Asian Americans from African Americans and Hispanics. Alito asserted that the majority opinion helped affluent African American students while hurting Asian American students, and used the perennial trope of pitting African Americans and Hispanics against Asian Americans. 102 Observations such as these have been criticized by AAAJ: "Justice Alito takes pains during a period of significant racial conflict in our society, to look outside the record to irresponsibly pit Asian Americans against other communities of color." 103 Ultimately, Alito's dissent was not all for naught, for it serves as a blueprint for other challenges against affirmative action that awaited Fisher's outcome: "Alito's repeated references to Asian students were a clear nod to two other cases working their way through federal court, although he did not mention them specifically." 104 Indeed, as is explored in the next section, Asian Americans are plaintiffs in current lawsuits against Harvard College and the University of North Carolina at Chapel Hill (UNC). 105

IV. Post-Fisher v. Texas: Discrimination Against Asian Americans at Elite Universities Redux?

Students For Fair Admissions (SFFA), an arm of Blum's group, moved away from the strategy of using a sympathetic young white female in Fisher to using Asian American student-plaintiffs to challenge the Harvard and UNC affirmative action programs. 106 Just like Abigail Fisher, these Asian American students are portrayed as innocent victims, but unlike Fisher, they are not white and possess much stronger and more compelling qualifications for admission.

An example of what Frank Wu articulates as "[Asian Americans] placed in the awkward position of buffer or intermediary, elevated as the preferred racial minority at the expense of denigrating African Americans." See Wu, supra note 28. Alito's concerns about the plight of Asian American applicants continued when he announced his dissent from the bench and offered queries about a hypothetical applicant (a straw man) who has one Asian grandparent and self-selects his or her ethnic background on the application, asking whether such an applicant brings a different diversity perspective to UT. See Justice Alito's reading of his dissent from the bench, Fisher, 136 S. Ct. 2198. He rhetorically asked whether UT would have the presumption that the Asian applicant brings a distinctive "Asian viewpoint" to the classroom. To Alito, given the many diverse ethnic backgrounds of Asian students, "It would be ludicrous to believe that the student will have the same viewpoint to share in class." Id.
107 To begin, SFFA filed suit against Harvard College on behalf of a rejected Chinese American applicant, alleging that the university's admissions policy violates Title VI of the Civil Rights Act of 1964, which bars federally funded entities from discriminating based on race or ethnicity. 108 SFFA's Complaint argues that Harvard intentionally discriminates against Asian American applicants by requiring them to score 112-140 points higher on the SAT than white or other minority applicants, and that it has imposed longstanding ceilings on Asian American admissions that are akin to the quotas placed on Jewish students generations before. 109 It further alleges that more whites and Asian Americans would be admitted into Harvard absent its reliance on "racial classifications" in its admission decisions. 110 Similarly, SFFA alleges UNC-Chapel Hill (UNC) discriminates against white and Asian American applicants because it uses race as a determinative factor in admissions. 111 Compared to the Harvard brief, SFFA's grouping together of whites and Asian Americans in the UNC Complaint is even more explicit: UNC-Chapel Hill's racial preference for each underrepresented minority student (which equates to a penalty imposed upon white and Asian-American applicants) is so large . . . using race or ethnicity as a dominant factor in admissions decisions could, for example, account for the disparate treatment of high-achieving Asian-American and white applicants, and underrepresented minority applicants with inferior academic credentials. UNC-Chapel Hill admission decisions simply were not explainable on grounds other than race. High-achieving Asian-American and white applicants are as broadly diverse and eclectic in their abilities and interests as any other group seeking admission to . . . .

Coinciding with these SFFA sleeper cases are the allegations made by the Asian American Coalition for Education (AACE), joined by sixty other Asian American groups opposed to affirmative action, in a complaint filed with the Department of Justice. 113 AACE asked the Civil Rights Division to investigate unlawful discrimination against Asian American applicants in the admissions programs at Yale, Brown, and Dartmouth. 114 AACE claims these universities are enforcing race-based quotas against Asian American applicants, and that this anti-Asian bias is facilitated by the universities' embrace of negative racial and cultural stereotypes such as: (1) Asian Americans lack creativity and cannot think critically; (2) Asian Americans lack leadership skills; and (3) Asian American students are not well-rounded because they overemphasize studying over extracurricular activities. 115 Unlike the Harvard and UNC cases held in abeyance, AACE's complaint lay dormant after it was lodged in 2015. It only received increased attention last summer when the Justice Department signaled a renewed federal effort to challenge affirmative action policies in college and university admissions. 116 Anyone familiar with Asian American issues will probably realize that the allegations of university admission policies discriminating against Asian Americans made by SFFA and AACE, portraying "Asian Americans as victims" of affirmative action, are not new.
In fact, such arguments harken back to the Reagan Administration's argument that affirmative action unfairly limited the opportunities of whites 117 and echo the charges made by Asian Americans in the 1980s that Berkeley, UCLA, Brown, Stanford, Harvard, and Princeton intentionally discriminated against them by claiming that Asian American applicants were overrepresented or unqualified for admission. 118 Like before, these "reverse discrimination" claims are misguided. On this issue, sociologists Michael Omi and Howard Winant clarify that affirmative action programs are not reverse discrimination because they are designed to address social and historical inequalities, and do not essentialize a particular individual race. 119 To the contrary, when properly administered, the benefits and burdens of affirmative action are shared by everyone. 120 So why the rehash? Are conservatives simply out of ideas? In my view, the mirrored claims made in the Harvard and UNC lawsuits by affirmative action opponents are analogous to the times when Hollywood decides to reboot an older film property. These movie studios will obfuscate the real reason why they are making a "new version" by claiming that they are seeking to introduce a new generation to a classic movie, but the truth is they are just lacking original ideas. Similarly, with the same characters and plot, but with new actors cast, affirmative action opponents are recycling old arguments about the harm caused to Asian Americans. But like most rebooted films, the "new one" is not warranted and fans of the original were not clamoring for a remake. As such, a reboot made in such desperation is unlikely to perform well at the box office, and it runs the risk of being eternally ridiculed in film history as an attempted cash grab. Note the resemblance. In her seminal book about Asian Americans and affirmative action published in 1992, sociologist Dana Takagi wrote: Beginning in late 1988, conservatives and neoconservatives suggested that discrimination against Asian Americans was symptomatic of deeper problems with the university: affirmative action . . . discrimination against Asians was the logical and inevitable outcome of preferences for 'other' minorities (that is, Blacks and Chicanos/Latinos). 121 Blum applies a similar tactic when he commented about the Harvard lawsuit and the use of racial classifications in university admissions in a Washington Post op-ed last summer: Today, Harvard's discriminatory policies harm Asian Americans-call it the Asian problem . . . . From 1992 through 2013, the percentage of Asians admitted to Harvard each year has been remarkably stable. In 1992, 19 percent of admitted students were Asian, while in 2013, 18 percent were Asian. This is true even though the number of Asian applicants to elite schools have disproportionately risen in recent decades . . . . This rate of admission of Asians cannot be a coincidence . . . Harvard isn't alone. The same flat rate of Asian admissions is evidenced at all of the Ivy League schools. 122 Zooming in for a closer examination, Blum is using this tired affirmative action trope to separate Asian Americans from other communities of color on this issue. Legal scholars Nancy Leong and Erwin Chemerinsky characterize such disingenuous arguments as strategic ones made to further a conservative agenda rather than protect Asian Americans.
123 Unveiled, this feigned concern for Asian Americans is a way to "protect the existing racial hierarchy-with white people at the top-while disguising their efforts as race-neutral rather than racially motivated." 124 Conservatives are not the only ones at fault for not acting in the best interests of Asian Americans. The political left can be criticized for leaving out Asian Americans in its broad pro-affirmative action arguments on behalf of African Americans and Latinos. On this topic, professors Michael Omi and Dana Takagi argue it is problematic that Asian Americans are considered part of a wider, "shared interest" coalition politics because it assumes that every racial group faces the same kind of racism and discrimination in the labor market, politics, and residential patterns. 125 Omi and Takagi maintain that liberals should not assume that all racial minorities experience the same nature of racism in this country. 126 In applying their thesis to the examples of the opposing camps in the Liang case, and of the Chinese American plaintiffs and the civil rights groups that opposed them in the Lowell High School litigation, the shortcomings of shared interest theory become more apparent. With the Harvard bench trial's conclusion last fall, a decision by Judge Allison D. Burroughs expected in early 2019, and the UNC case still in the early stages of litigation, it remains entirely unclear if or how they will be resolved. It is possible that the suits will reveal some evidence of discrimination by some universities or clear some of the universities of any wrongdoing, just like the admissions controversies of the 1980s did. Professor Takagi reports that the Education Department's Office for Civil Rights absolved Harvard of charges of discrimination against Asian Americans, and internal investigations at Cornell and Princeton found no evidence of bias. 127 However, investigations at Berkeley, UCLA, and Brown found problematic issues, and the schools were forced to modify their admission policies. 128 Whatever the outcome of the cases, at the very least these challenges will hopefully encourage greater transparency and accountability in the admissions process, as well as bring attention to issues affecting Asian Americans. Finally, speculation that the Harvard and the UNC cases are on the fast track to the Supreme Court has been spurred on by Justice Kennedy's retirement and Justice Brett Kavanaugh's confirmation to the Court, which could tilt the Court to the right. 129 The anticipated new conservative wing of Thomas, Roberts, Alito, Gorsuch, and Kavanaugh may view the Harvard lawsuit as an opportunity to strike down affirmative action once and for all. With Alito's dissent available as a template, the Court could return to the traditionally rigid application of strict scrutiny, as opposed to the seemingly more relaxed test in Grutter v. Bollinger 130 and Fisher, to challenge the defenses raised by Harvard concerning its admission policies and reject the diversity rationale for affirmative action. But maybe none of these predictions will come to fruition.

Conclusion

In the end, We Gon' Be Alright successfully destroys the myth of a post-racial America by offering a snapshot of racial progress and contemporary race relations. By showing that race and privilege are embedded in our society, Chang reveals the fallacy of a color-blind society and provides solid reasons why Americans should act through activism and advocacy to address inequality.
2019-03-14T02:28:53.352Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "2e126d8cad3d412e48bed08da4ea1419a742ebaf", "oa_license": null, "oa_url": "https://doi.org/10.1308/rcsbull.2019.8", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "5dbe926ef8ee0a8c35e1b5571ffbeff7ed0f8434", "s2fieldsofstudy": [], "extfieldsofstudy": [ "History" ] }
260716619
pes2o/s2orc
v3-fos-license
Ensemble Improved Permutation Entropy: A New Approach for Time Series Analysis

Entropy quantification approaches have gained considerable attention in engineering applications. However, certain limitations persist, including the strong dependence on parameter selection, limited discriminating power, and low robustness to noise. To alleviate these issues, this paper introduces two novel algorithms for time series analysis: the ensemble improved permutation entropy (EIPE) and multiscale EIPE (MEIPE). Our approaches employ a new symbolization process that considers both permutation relations and amplitude information. Additionally, the ensemble technique is utilized to reduce the dependence on parameter selection. We performed a comprehensive evaluation of the proposed methods using various synthetic and experimental signals. The results illustrate that EIPE is capable of distinguishing white, pink, and brown noise with a smaller number of samples compared to traditional entropy algorithms. Furthermore, EIPE displays the potential to discriminate between regular and non-regular dynamics. Notably, when compared to permutation entropy, weighted permutation entropy, and dispersion entropy, EIPE exhibits superior robustness against noise. In practical applications, such as RR interval data classification, bearing fault diagnosis, marine vessel identification, and electroencephalographic (EEG) signal classification, the proposed methods demonstrate better discriminating power compared to conventional entropy measures. These promising findings validate the effectiveness and potential of the algorithms proposed in this paper.

Despite the great success that entropy algorithms have achieved in practical applications, they continue to face certain limitations that require further refinement. For instance, both ApEn and SampEn are sensitive to tolerance r, a parameter that decides the level of similarity between two vectors in the phase space [1,5,10]. If the value of tolerance is set too low, very few vectors are regarded as similar, leading to unreliable or undefined conditional entropy estimates. The situation can be worse if the data length is short. By contrast, a larger value of tolerance may result in a loss of information. FuzEn has been proposed as a solution to this issue by using the exponential function instead of the Heaviside function to obtain a fuzzy measurement of two vectors' similarity [10]. However, FuzEn still requires pairwise similarity checks between vectors in the phase space; its computation cost increases quadratically with data length. An alternative approach, PE [13], uses the Bandt-Pompe procedure to symbolize the vectors based on the order of amplitudes, resulting in ordinal patterns (or permutation patterns). Despite its simplicity and computational efficiency, absolute amplitude information is overlooked in this process [14,26]. Some researchers also claimed that PE is liable to be affected by noise because the permutation relations can be varied by a small change in amplitude values [5,16]. Additionally, there are studies that have proved that PE is susceptible to the equal values in time series [15,27,28]. Typically, ranking the equal values according to their temporal order or breaking them by adding random perturbations are common ways to circumvent this problem [1,13]. Unluckily, a recent study pointed out that these solutions can lead to misinterpretations of the underlying nature of the electroencephalogram records [28].
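To make the Bandt-Pompe symbolization described above concrete, the following minimal Python sketch extracts ordinal patterns and computes a normalized permutation entropy. It is an illustrative implementation of the generic PE definition, not the exact code of any of the cited works; the tie-breaking by temporal order (via a stable argsort) is one of the conventional choices mentioned above.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, m=4, tau=1, normalize=True):
    """Bandt-Pompe permutation entropy of a one-dimensional signal.

    Each delay vector of length m is mapped to the ordinal pattern given by
    the ranks of its amplitudes; ties are broken by temporal order because
    the argsort is stable.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("time series is too short for the chosen m and tau")
    # One row per delay vector: indices j, j+tau, ..., j+(m-1)*tau.
    idx = np.arange(n)[:, None] + tau * np.arange(m)[None, :]
    patterns = np.argsort(x[idx], axis=1, kind="stable")
    # Relative frequency of each ordinal pattern.
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log(p))
    return h / log(factorial(m)) if normalize else h
```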
Many efforts have been made by researchers to tackle the above-mentioned defects of PE. Fadlallah et al. proposed WPE [14], in which the amplitude information is considered by weighting the ordinal patterns. Bian et al. invented mPE [15], where the same symbols are assigned to the ties. Because of that, mPE can provide more potential motifs to represent the sub-series, and its ability to recognize the heart rate variability (HRV) signal is thus improved. Notably, both WPE and mPE have been shown to be insufficient in completely addressing the limitations inherent to PE, highlighting the need for further research and development in this area. Recently, DispEn [16] and its extension FDispEn [17] were devised by Hamed Azami and coauthors, whose main idea is to represent the univariate time series with a small set of symbols. Then, entropy estimation of the original data can be equivalent to studying the probability distribution of symbol sequences and calculating the corresponding entropy value. Since the data are transformed into a new time series based on symbolic dynamics, some detailed information might be lost. Moreover, how to determine the number of symbols remains a problem. Therefore, each entropy approach has its advantages and limitations.

To enhance the performance of traditional entropy algorithms, a novel entropy measure called ensemble improved permutation entropy (EIPE) is proposed in this paper. We start by presenting a new data symbolization method that uses a symbol set composed of L elements to represent vectors in the phase space, resulting in symbolic patterns. It is imperative to note that the obtained symbolic patterns take both permutation relation and amplitude information into account. Then, like what is performed in the PE algorithm, the probability distribution of the symbolic patterns is mapped to an entropy value based on the Shannon entropy. However, one needs to artificially pre-define the discretization factor in the symbolization process, and determining this parameter remains a challenge. We addressed this issue by drawing inspiration from the ensemble technique presented in reference [5], where we varied the discretization factor and averaged the corresponding entropy results, resulting in the EIPE. To facilitate the analysis of signals over multiple temporal scales, a multiscale EIPE (MEIPE) algorithm is further introduced, where the coarse-graining technique is applied prior to the EIPE calculation. The effectiveness of the proposed methods is evaluated using various synthetic and experimental data, including RR interval data, bearing fault signals, underwater acoustic signals, and EEG signals.

The remainder of this paper is organized as follows: the proposed EIPE and MEIPE algorithms are described in Section 2; simulation and experimental results are provided in Sections 3 and 4, respectively; and the paper is concluded in Section 5.

Ensemble Improved Permutation Entropy

The EIPE algorithm is calculated through the following steps:

Step 1. As shown in Equation (1), given a univariate time series x = {x_1, x_2, ..., x_N}, the cumulative distribution function is utilized for data normalization:

y_i = (1 / (σ√(2π))) ∫_{−∞}^{x_i} exp(−(t − µ)² / (2σ²)) dt,  (1)

where y_i represents the ith element of the normalized sequence y, and µ and σ² denote the mean and variance of x, respectively.

Step 2. With embedding dimension m and time delay τ given, the reconstructed phase space is denoted by

Y(j, :) = [y_j, y_(j+τ), ..., y_(j+(m−1)τ)], 1 ≤ j ≤ N − (m − 1)τ,  (2)

where Y(j, :) is the jth row of Y.
Step 3. Let y_max and y_min represent the maximum and minimum values of y, respectively, and let L be the discretization factor (an artificially pre-defined parameter); the uniform partition function (UPF) is defined as follows:

UPF(u) = ⌊(u − y_min)/∆⌋,   (3)

where ∆ = (y_max − y_min)/L. Obviously, for an arbitrary input u ∈ (y_min, y_max), UPF converts it into an integer symbol ranging from 0 to L − 1. Let the first column of Y be the input of UPF; Y(:, 1) is then transformed into a symbol sequence, represented as S(:, 1).

Step 4. For the kth column of Y, indicated as Y(:, k), 2 ≤ k ≤ m, its corresponding symbolization result S(:, k) is achieved by Equation (4), where 1 ≤ j ≤ N − (m − 1)τ and the rounding operator in Equation (4) rounds its argument to the nearest integer towards zero. Upon completion of the symbolization process for all components within the phase space Y, the resulting entity, expressed as the symbolic phase space S, is obtained. Further, each row of S is referred to as a symbolic pattern (SP), which incorporates both permutation relation and amplitude information.

Step 5. As shown in Equation (5), the probability distribution of SP is computed and then mapped to an entropy value based on the definition of Shannon entropy, IPE = −Σ_k p(SP_k) ln p(SP_k). This resulting entropy value is referred to as the improved permutation entropy (IPE). Since each symbolic pattern comprises m elements, and each element can take L possible states, the total number of symbolic patterns is given by L^m. It is apparent that the IPE attains its maximum value only when SP follows a uniform distribution. To normalize the IPE into [0, 1], it can be divided by its maximum value ln(L^m), as in Equation (6).

The above description indicates that the discretization factor L has a significant impact on the calculation of the IPE, because it plays a pivotal role in the symbolization process, as depicted in Equations (3) and (4). Evidently, a higher value of L leads to a comparatively smaller loss of the time series' information during the symbolization process, while a smaller L value offers better noise resistance, albeit at the cost of losing some information. The selection of an appropriate discretization factor L depends on the characteristics of the signal, including its signal-to-noise ratio (SNR). Unfortunately, this a priori information is usually unknown. The ensemble technique, which involves the integration of multiple methods to improve overall prediction performance, can address this issue. Motivated by this idea, we propose the EIPE. As shown in Equation (7), EIPE is calculated as the mean of the IPE results derived from varying values of L:

EIPE = (1/(b − a + 1)) Σ_{L=a}^{b} IPE(L),   (7)

where a and b are the minimum and maximum values of L, respectively.

Multiscale Ensemble Improved Permutation Entropy

Complex time series often have intricate structures across multiple temporal scales, which conventional entropy measures that rely on a single-scale analysis fail to account for. To remedy this, multiscale ensemble improved permutation entropy (MEIPE) is proposed in this section, where a coarse-graining process [25] is conducted prior to a comprehensive analysis with EIPE. The coarse-graining of a time series x = {x_1, x_2, ..., x_N} is given by Equation (8),

r_s(i) = (1/s) Σ_{j=(i−1)s+1}^{is} x_j,  1 ≤ i ≤ ⌊N/s⌋,   (8)

where r_s represents the output sequence under scale s. Applying EIPE to the subsequence r_s, the obtained result EIPE_s is the entropy of the original sequence under scale s. This process is repeated for all scale factors, resulting in an entropy vector, namely the MEIPE. In other words, MEIPE is essentially a plot of EIPE versus the scale factors.
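To make these steps concrete, the following is a minimal Python sketch of IPE, EIPE, and the coarse-graining step; it is not the authors' reference implementation. Because Equation (4) is not reproduced in the extracted text, the sketch makes a simplifying assumption and symbolizes every column of the phase space with the same uniform partition function; the default parameters (m = 4, τ = 1, L from a = 2 to b = 8) follow the settings used later in the paper.

```python
import numpy as np
from collections import Counter
from scipy.stats import norm

def ipe(x, m=4, tau=1, L=3):
    """Improved permutation entropy for one discretization factor L, normalized to [0, 1]."""
    x = np.asarray(x, dtype=float)
    # Step 1: normalize via the Gaussian cumulative distribution function.
    y = norm.cdf(x, loc=x.mean(), scale=x.std())
    # Step 2: phase-space reconstruction with embedding dimension m and delay tau.
    n_vec = len(y) - (m - 1) * tau
    Y = np.column_stack([y[i * tau: i * tau + n_vec] for i in range(m)])
    # Steps 3-4 (simplified): map every element to one of L integer symbols with the
    # uniform partition; the paper's Equation (4) uses a related rounding rule instead.
    delta = (y.max() - y.min()) / L
    S = np.clip(np.floor((Y - y.min()) / delta), 0, L - 1).astype(int)
    # Step 5: Shannon entropy of the symbolic-pattern distribution, normalized by ln(L^m).
    probs = np.array(list(Counter(map(tuple, S)).values()), dtype=float) / n_vec
    return float(-np.sum(probs * np.log(probs)) / (m * np.log(L)))

def eipe(x, m=4, tau=1, a=2, b=8):
    """Ensemble IPE: mean of IPE over discretization factors L = a, ..., b (Equation (7))."""
    return float(np.mean([ipe(x, m, tau, L) for L in range(a, b + 1)]))

def coarse_grain(x, s):
    """Non-overlapping averages of length s (the coarse-graining of Equation (8))."""
    x = np.asarray(x, dtype=float)
    n = len(x) // s
    return x[: n * s].reshape(n, s).mean(axis=1)

def meipe(x, m=4, tau=1, a=2, b=8, max_scale=20):
    """MEIPE: EIPE of the coarse-grained series for each scale factor 1..max_scale."""
    return np.array([eipe(coarse_grain(x, s), m, tau, a, b) for s in range(1, max_scale + 1)])
```

With these helpers, eipe(signal) returns a single entropy value and meipe(signal) returns the entropy-versus-scale vector; the later sketches in this section reuse them.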
Synthetic Data Analysis

In this section, the effectiveness of the proposed EIPE algorithm is verified through several synthetic signals. As can be seen in Equation (6), the embedding dimension, time lag, and discretization factor need to be properly set to implement the EIPE algorithm. According to the conclusions in [1,5,13], 3 ≤ m ≤ 7 and τ = 1 are recommended. In what follows, unless otherwise specified, we varied the discretization factor L from 2 to 8 and set m = 4 and τ = 1.

Noise Signals

Noise is ubiquitous in various systems and applications. White, pink, and brown noise are the most frequently used random signals for model analysis [5,29]. White noise contains equal energy or power across all frequencies; its power spectral density can be represented as S_w(f) = C_w, where C_w is a constant. Pink noise, also known as 1/f noise, is a type of noise whose power decreases by 3 decibels per octave as the frequency increases. Compared with pink noise, brown noise has a lower intensity at higher frequencies. The power spectral densities of pink and brown noise can be denoted by S_p(f) = C_p/f and S_b(f) = C_b/f², respectively, where C_p and C_b are constants.

The comparative results of diverse entropy algorithms in terms of their ability to discriminate between the three types of noise are presented in Figure 1. The average entropy values, along with error bars representing the standard deviation (SD), are plotted against the varying data length. The data length was changed from 40 to 700, with an increment of 20. For each data length, 40 independent realizations were generated for each type of noise. As can be seen, no matter which algorithm is utilized, white noise attains the highest entropy values, followed by pink and brown noise. This result is consistent with the fact that white noise is the most complex, followed by pink and brown noise [5,29]. It can also be observed that EIPE requires fewer samples than the other methods to discriminate between the three types of noise, implying that our method has a low dependency on data length and can extract effective features of the noises even with limited samples.
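The colored-noise comparison can be reproduced approximately with the short sketch below. The spectral-shaping generator is an assumption for illustration (the paper does not state how its noise realizations were produced), and the sketch reuses eipe() from the earlier code.

```python
# Illustrative generation of white/pink/brown noise by spectral shaping.
import numpy as np

def colored_noise(n, alpha, rng):
    """1/f^alpha noise: shape the spectrum of white noise and invert the FFT."""
    white = rng.standard_normal(n)
    spec = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                 # avoid division by zero at the DC bin
    spec /= freqs ** (alpha / 2.0)      # power spectral density proportional to 1/f^alpha
    return np.fft.irfft(spec, n)

rng = np.random.default_rng(0)
n = 300
for name, alpha in [("white", 0.0), ("pink", 1.0), ("brown", 2.0)]:
    vals = [eipe(colored_noise(n, alpha, rng)) for _ in range(10)]
    print(f"{name:>5} noise: mean EIPE over 10 runs = {np.mean(vals):.3f}")
```

With this setup, white noise should receive the largest EIPE values and brown noise the smallest, mirroring the ordering reported in Figure 1.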
Logistic Map

The logistic map can be described as x_{n+1} = µ x_n (1 − x_n), where µ is a parameter that controls the dynamic behavior of the model. According to previous studies [30][31][32], when µ increases from 3.5 to 3.99, the model exhibits a period-doubling bifurcation. In particular, for 3.57 ≤ µ ≤ 3.99, the system is chaotic, except for rare exceptions like µ ≈ 3.84.

To evaluate the ability of the EIPE algorithm to detect periodicity and nonlinearity, we varied µ from 3.5 to 3.99 with a step size of ∆µ = 0.001. For each µ, we generated a time series with 10,000 sampling points and computed its entropy. Figure 2 shows how the entropy values obtained by different algorithms change with µ. When µ ≈ 3.57, the EIPE exhibits a positive correlation with the augmentation of µ, affirming that the system progressively grows in complexity. This phenomenon agrees with the fact that the system undergoes a transition from periodic to chaotic behavior [32]. Remarkably, the values of the other three entropy algorithms remain unaltered in this context. When µ ≈ 3.84, both DispEn and EIPE exhibit a significant decline in this region, whereas PE and WPE initially decrease but quickly rebound afterward. It is noteworthy that the profile obtained by the EIPE algorithm is consistent with the result depicted in Figure 1 of reference [32], signifying the potential of the proposed method in discriminating between regular and non-regular dynamics.
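A sketch of this sweep is given below, reusing eipe() from the earlier code. The initial condition and the discarded transient are assumptions (the paper does not report them), and the step size is coarsened from 0.001 to 0.05 to keep the run short.

```python
import numpy as np

def logistic_series(mu, n=10000, x0=0.4, transient=1000):
    """Iterate x_{n+1} = mu * x_n * (1 - x_n), discarding an initial transient."""
    x = x0
    out = np.empty(n)
    for i in range(n + transient):
        x = mu * x * (1.0 - x)
        if i >= transient:
            out[i - transient] = x
    return out

for mu in np.arange(3.5, 3.99, 0.05):       # coarser step than the paper's 0.001
    print(f"mu = {mu:.2f}  EIPE = {eipe(logistic_series(mu)):.3f}")
```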
Noisy Lorenz Signal

To evaluate the performance of the proposed algorithm under noisy conditions, we added white Gaussian noise to the Lorenz time series to generate signals at different SNR levels. A fourth-order Runge-Kutta scheme with a time step of ∆t = 0.001 was applied to solve the Lorenz system depicted in Equation (9), and 50,000 data points were recorded. For each SNR condition, 40 trials were independently conducted, and their multiscale entropies were calculated through various approaches. The average multiscale entropy values with their SD error bars are demonstrated in Figure 3. For all entropy algorithms, the multiscale entropy curve increases as the SNR decreases. Notably, from the results depicted in Figure 3a, it is evident that the MEIPE curve at −10 dB remains close to that of the clean signal, suggesting the minimal influence of noise on the performance of the MEIPE algorithm. Conversely, the other three approaches display larger deviations in entropy values under low SNR conditions, especially for lower scale factors. The findings in Figure 3 illustrate the robustness of the MEIPE algorithm against the noise.
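The noisy-Lorenz experiment can be approximated with the sketch below, which reuses meipe() from the first code sketch. The Lorenz parameters (σ = 10, ρ = 28, β = 8/3), the initial state, and the choice of the x coordinate as the analyzed series are assumptions; the text above only specifies the fourth-order Runge-Kutta solver, the 0.001 step, and the 50,000 recorded points.

```python
import numpy as np

def lorenz_x(n=50000, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with classic RK4 and return the x coordinate."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    s = np.array([1.0, 1.0, 1.0])
    out = np.empty(n)
    for i in range(n):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        out[i] = s[0]
    return out

def add_noise(signal, snr_db, seed=1):
    """Add white Gaussian noise so that 10*log10(P_signal / P_noise) equals snr_db."""
    rng = np.random.default_rng(seed)
    p_noise = np.mean(signal ** 2) / (10 ** (snr_db / 10.0))
    return signal + rng.standard_normal(len(signal)) * np.sqrt(p_noise)

clean = lorenz_x()
curve_clean = meipe(clean, max_scale=10)
curve_noisy = meipe(add_noise(clean, snr_db=-10), max_scale=10)
```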
Experimental Data Analysis

In this section, the proposed EIPE and MEIPE algorithms are applied to process four kinds of experimental data: RR intervals, bearing fault signals, underwater acoustic signals, and EEG signals. All these data are regarded as complex time series.

RR Intervals

The RR interval data used in this paper originate from the Fantasia dataset [33]. This collection comprises RR interval data from 20 young and 20 elderly healthy participants, with their ages ranging from 21 to 34 and 68 to 85, respectively. Both the DispEn and EIPE analysis results, shown in Figure 4c and Figure 4d, respectively, illustrate that the RR intervals of healthy young subjects exhibit greater irregularity in comparison to those of healthy elderly individuals. However, the PE and WPE analysis results show insignificant differences between the two groups.

To quantitatively assess the differences between entropy values for young and elderly individuals, the non-parametric Mann-Whitney U-test is utilized. The significance of inter-group differences can be determined through the p-values, with lower p-values indicating more significant distinctions. In Figure 4, p-values smaller than 0.01 and 0.001 are represented by ** and ***, respectively. The calculated p-values corroborate the visual observations from the boxplots, where the p-values for PE and WPE are greater than 0.05 (0.2792 and 0.8498). On the other hand, DispEn and EIPE yield p-values of 0.0038 and 0.000437, respectively, providing strong evidence for their exceptional discriminability in distinguishing between the two types of signals.
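The group comparison reported above can be reproduced with a few lines; the sketch below uses synthetic placeholder values in place of the per-subject EIPE results from the Fantasia recordings.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
eipe_young = rng.normal(0.80, 0.05, size=20)    # placeholder for the 20 young subjects
eipe_elderly = rng.normal(0.72, 0.05, size=20)  # placeholder for the 20 elderly subjects

stat, p_value = mannwhitneyu(eipe_young, eipe_elderly, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4g}")
```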
Bearing Fault Signals

In this subsection, a collection of bearing fault signals originating from the Case Western Reserve University Bearing Data Center is analyzed. The collection contains four categories of signals: normal, ball fault (BF), inner race fault (IRF), and outer race fault (ORF) [34]. The motor speed is about 1730 r/min, and the fault diameter is 0.1778 mm. Each type of signal consists of approximately 120,000 data points. To facilitate analysis, each record was divided into 10 equally sized segments, with each segment containing 12,000 sample points. As can be seen in Figure 5d, the EIPE values of BF remain relatively constant across all scales. In contrast, the EIPE values of IRF increase slightly between scales 1 and 4 and then decrease persistently. For the normal category, the EIPE values increase sharply (from 0.65 to 0.8) at lower scales and then show a minor decline. The MEIPE feature of the ORF signals shows an initial decrease, followed by oscillations between scales 2 and 10. The distinct underlying structures of the different bearing fault signals make their MEIPE curves unique, both in terms of the entropy magnitude and the variation trend across the scale factors. For comparison, the analysis results of the other multiscale entropy approaches are also provided in Figure 5. The entropy curves are noticeably closer to each other in the multiscale PE and multiscale WPE results. For instance, at scale 7, these algorithms assign high entropy values (≈0.98) to the normal and BF signals, making them indistinguishable. Although multiscale DispEn outperforms multiscale PE and multiscale WPE, its separability declines at scales 2, 3, 4, 7, 9, and 10, where the entropy features of distinct types of signals overlap with each other. By contrast, the proposed MEIPE algorithm can distinguish the four types of signals at most scales. This finding suggests that the MEIPE algorithm has the potential for bearing fault diagnosis.

Underwater Acoustic Signals

Identifying targets based on their emitted sound poses a significant challenge in underwater acoustic signal processing [1,4,7], primarily due to the complex ocean environment and the presence of high ambient noise levels. In this subsection, we adopted the MEIPE algorithm to analyze three types of ship-generated noise, namely, from passenger ships, ocean liners, and motorboats [35]. For the sake of simplicity, the dataset was divided into segments, with each segment lasting for 3 s. Given a sampling frequency of 52,734 Hz, each segment consisted of 158,202 sample points. Additional details regarding the dataset can be found in Table 1. Notably, signals from several distinct marine vessels were collected for each category.
The MEIPE analysis result is presented in Figure 6a, where the maximum scale factor is set to 40. The plot displays the average EIPE values versus the scale factor, accompanied by their corresponding SD error bars. The EIPE value of the ocean liner increases consistently across all scale factors. On the other hand, the EIPE value of the passenger ship shows a sharp increase and then remains relatively constant after scale 15. Interestingly, the EIPE value of the motorboat exhibits an initial increase from scales 1 to 5, followed by a downward trend from scales 5 to 35. Visually examining the MEIPE curves, it can be observed that the curves for the three target categories are distinct from each other, indicating the excellent discriminating power of our proposed method. For comparison, the multiscale DispEn analysis result is presented in Figure 6b, which shows similar trends to the MEIPE analysis result. However, there are some subtle differences between scales 16 and 25, where the multiscale DispEn features of the three ships are closer to each other than the MEIPE features.
To further quantify the discriminative capability of the MEIPE features for the three categories of ships, we employed a probabilistic neural network (PNN) for feature training and recognition. For testing, 150 randomly selected segments were retained for each target category, while the remaining segments were used for network training. The recognition results of the network are presented in Table 2. For comparison, the classification results of the multiscale DispEn algorithm are given in Table 3.

The results clearly indicate that both the MEIPE and multiscale DispEn algorithms achieve an impressive recognition rate of 100% for the passenger ship category. However, for the motorboat and ocean liner categories, the multiscale DispEn algorithm demonstrates comparatively lower recognition rates of 74% and 82%, respectively, in contrast to the MEIPE algorithm. Overall, the MEIPE algorithm attains a classification accuracy of 92.44% for the three target categories, which is 7.11 percentage points higher than multiscale DispEn. These findings illustrate the superior performance of the MEIPE algorithm in accurately identifying and discriminating between the various ship categories.
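The paper does not report its PNN configuration; the sketch below is a minimal Parzen-window formulation of a probabilistic neural network that could be applied to the 40-point MEIPE feature vectors, with the kernel width sigma as an assumed, tunable parameter.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.1):
    """Assign each test vector to the class with the largest average Gaussian-kernel density."""
    classes = np.unique(y_train)
    preds = np.empty(len(X_test), dtype=classes.dtype)
    for i, x in enumerate(X_test):
        scores = []
        for c in classes:
            d2 = np.sum((X_train[y_train == c] - x) ** 2, axis=1)
            scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
        preds[i] = classes[int(np.argmax(scores))]
    return preds

# Usage (shapes only): X_* holds one 40-point MEIPE curve per segment, y_* the ship class label.
# accuracy = np.mean(pnn_predict(X_train, y_train, X_test) == y_test)
```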
EEG Signals

EEG records contain rich physiological and pathological information. The analysis of EEG signals is of high significance in numerous applications, such as evaluating the mental state of subjects, assessing drivers' fatigue, measuring anesthesia depth, and predicting the onset of epileptic seizures [6]. In this subsection, our proposed algorithm was employed to process the commonly used University of Bonn EEG database. Our analysis covered four subsets of the database, which correspond to healthy subjects with eyes open (Class 0), healthy participants with eyes closed (Class 1), subjects during interictal epileptic activity (Class 2), and participants experiencing seizure attacks (Class 3). Each subset comprises 100 data segments, with each segment lasting 23.6 seconds and consisting of 4097 data points (sampling frequency of 173.61 Hz). For detailed descriptions of the dataset, please see reference [6].

The MEIPE analysis result is presented in Figure 7a, where the scale factor is varied from 1 to 5, owing to the limited length of the signals. It is evident that Class 1 acquires the highest EIPE values across all scale factors, followed by Classes 0, 3, and 2. Notably, the MEIPE features of each category exhibit a distinct separation from one another. In contrast, Figure 7b reveals that the multiscale DispEn features of Classes 0 and 1 are challenging to discriminate, particularly for scales 1 to 3. Additionally, at scales 3 to 5, the differences between Classes 2 and 3 appear less pronounced. These outcomes indicate that the proposed MEIPE algorithm may be better suited for discriminating between different EEG classes in comparison to multiscale DispEn.
To quantitatively evaluate the differences in entropy values across the different EEG categories, the non-parametric Mann-Whitney U-test is utilized, and the corresponding p-values are listed in Tables 4 and 5. These statistical results are in line with the findings in Figure 7. With the application of multiscale DispEn, it is observed that there are no significant differences between Classes 2 and 3 at scales 1 and 2. Furthermore, the distinction between Classes 0 and 1 at scale 5 is not pronounced. In contrast, the inter-group differences are found to be significant across all scale factors when using the MEIPE approach. Based on these findings, we can confidently conclude that MEIPE outperforms multiscale DispEn in accurately discriminating between EEG categories.
Figure 1. Comparative results of diverse entropy algorithms regarding their discriminative capability among white, pink, and brown noise. (a) PE analysis result; (b) WPE analysis result; (c) DispEn analysis result; and (d) EIPE analysis result.

Figure 2. Plot of entropy versus µ for the logistic map with 3.5 ≤ µ ≤ 3.99. The arrow indicates the specific region where the behavior of the system transitions from chaotic to periodic.

Figure 3. Multiscale entropy analysis of Lorenz time series under different SNR conditions. (a) MEIPE analysis result; (b) multiscale PE analysis result; (c) multiscale WPE analysis result; and (d) multiscale DispEn analysis result.

Figure 4. Boxplots of distinct entropy approaches computed from the RR intervals of healthy young and healthy elderly participants. (a) PE analysis result; (b) WPE analysis result; (c) DispEn analysis result; and (d) EIPE analysis result. p-values smaller than 0.01 and 0.001 are represented by ** and ***, respectively.

Figure 5. Multiscale entropy analysis results of four types of bearing fault signals. (a) Multiscale PE analysis result; (b) multiscale WPE analysis result; (c) multiscale DispEn analysis result; and (d) MEIPE analysis result.

Figure 6. Multiscale entropy analysis results of three types of ship-radiated noise. (a) MEIPE analysis result; (b) multiscale DispEn analysis result.

Figure 7. Multiscale entropy analysis results of four types of EEG signals. (a) MEIPE analysis result; (b) multiscale DispEn analysis result.

Table 1. Description of three types of ship-radiated noise.

Table 2. PNN classification results for three types of ships using MEIPE features.

Table 3. PNN classification results for three types of ships using multiscale DispEn features.
Coexposure to Solvents and Noise as a Risk Factor for Hearing Loss in Agricultural Workers

Noise and solvent exposures are common in agricultural work. In 7,495 operators, noise and solvent exposure jointly increased the risk of hearing loss. Rural healthcare providers and agricultural safety and health experts should consider discussing the risks of coexposure to noise and solvents with those at risk of hearing loss.

Farming is a hazardous occupation, with countless ways workers are unintentionally injured both acutely and chronically; for farmers and ranchers, the deleterious loss of hearing can be both. Work-related hearing loss remains one of the most prevalent yet preventable health ailments adversely impacting the lives of workers in the United States. 1 Annually, at least 22 million US workers are exposed to occupational noise at hazardous levels, and nearly 30 million workers are exposed to chemicals, many of which are ototoxic. 1 Noise exposure is the most prominent cause of occupational hearing loss among farmers. 2,3 Among workers of similar age, farmers experience higher rates of noise-induced hearing loss (NIHL) than nonfarmers. 4 Farmers are exposed to hazardous levels of noise on a daily basis, as machinery, 5 equipment, 6 and livestock 7 are frequent sources of occupational exposure. However, noise is not the only etiology of hearing loss. Epidemiological and laboratory studies since the 1970s 8 have investigated ototraumatic agents that enter the body through absorption, inhalation, and ingestion exposure routes. 9 Chemical substances including fuels, solvents, pharmaceuticals, and pesticides can adversely affect hearing and how the ear functions. 9,10 Essentially toxic to the ear, ototoxicants affect the inner ear or auditory nerve, causing damage to the sensory cells used in hearing and balance, and they may also impact the vestibular system regardless of noise exposure. 11-13

Occupational exposure scenarios for noise and solvents in agriculture are complex, changing over time and by the season, task, and environmental conditions. Occupational exposures are not measured and documented on farms on a regular basis, but studies have reported dermal and respiratory exposures to solvents in mixing and spraying pesticides, 14 general maintenance and repair of equipment and machinery, 15 cleaning livestock confinements with disinfectants and detergents, fueling and operating engines, 16 and using paints, adhesives, and epoxies. 14,16 Because of their frequent use, exposure to solvents is a concern among those living and working on agricultural operations.

When combined, noise and ototoxic substances have a greater propensity to contribute adversely to hearing loss than each individual exposure alone. 17,18 Of considerable concern is the joint effect of noise and ototoxic solvent exposure, as it has been suggested that a single exposure to both, even when noise is within the permissible exposure limit, increases the risk of hearing loss through synergism of exposures. 19,20 A 2017 systematic review and meta-analysis revealed that coexposure to noise and mixed solvents increased the risk of hearing loss nearly threefold (odds ratio [OR], 2.95) in comparison with a nonexposed reference group. 21 Moreover, the risk of hearing loss from coexposure was considerably greater than predicted by either noise exposure or mixed solvent exposure alone, 21 validating concerns that this coexposure may be missed by employers and health professionals alike. 20,22
Controlling known hazardous exposures is essential to preserve the hearing of those affected, but there are also broader consequences; hearing loss is associated with more workplace injuries 23 and has even been found to double the risk of injury in agricultural workers. 24,25 Nevertheless, occupational health research often examines and characterizes work-related hazards and their potential contributions to injury and illness as single causative agents. 10 Although this approach is efficacious in identifying and controlling undue risks to workers, 26 exposures to hazards hardly occur as independent agents, 20 especially among those working in agriculture. 14,27 Ototoxic hearing loss with and without noise in occupational settings is not new 28,29; however, there remains a paucity of research concerning the combined effects of solvents and noise in agricultural workers who are frequently exposed to both.

This study was based on data from the Farm and Ranch Health and Safety Survey (FRHSS) administered by the Central States Center for Agricultural Safety and Health (CS-CASH) in 2018 and 2020. The primary aim of this study was to evaluate whether hearing loss among farmers and ranchers is associated with exposure to noise, solvents, and both combined. We hypothesized that noise would be a primary contributor to hearing loss, but that solvent exposure would also contribute independently. We further hypothesized that coexposure to noise and solvents would further elevate the risk of hearing loss compared with either exposure alone. A secondary aim was to evaluate factors that modify this association and increase or decrease the risk of hearing loss.

Study Design and Population

The CS-CASH is one of ten regional centers funded by NIOSH, established to address the safety and health issues of agricultural producers and workers. Its projects involve research, education, and prevention efforts aimed at protecting those working in agriculture in the seven central states of Iowa, Kansas, Minnesota, Missouri, Nebraska, North Dakota, and South Dakota. One of the center's research initiatives involves surveillance of agricultural injuries, illnesses, and exposures using surveys, 30 media monitoring, 31 and analyses of existing data sources. The current study analyzed data from the FRHSS, administered to randomly selected farms and ranches, stratified by state (2500 per state). Contact information with selected farm production variables was obtained from Farm Market iD, a commercial agricultural data service provider, currently part of DTN Industries (DTN LLC, Burnsville, MN). The FRHSS surveys were administered by the University of Nebraska Medical Center's College of Public Health in the spring of 2018 and 2020 via postal service. Respondents were asked to provide information on work-related injuries, chronic health conditions, exposures, and preventive practices for up to three operators on the farm or ranch operation. In 2018, returned responses were entered by members of the CS-CASH team into the University of Nebraska Medical Center's Research Electronic Data Capture secure web platform. In 2020, the returned forms were scanned into OpenText TeleForm OCR software (Waterloo, ON, Canada) and quality checked manually. Before mailing, paper surveys were coded with unique identification numbers to enable repeat mailings to nonrespondents and merging of agricultural production variables from the Farm Market iD data set to the survey data.
Self-assessed or Diagnosed Hearing Loss Measures The primary outcome in this study was hearing loss, queried by a question "Does the operator have hearing loss (diagnosed or self-assessed)?". Participants were asked to select one of the following response options: none, mild, moderate, or severe. We chose to collapse the responses into three categories (none, mild, moderate/ severe) because of a relatively small number of respondents reporting severe hearing loss (n = 264, <4%) and because of the difficulty of discriminating between moderate and severe hearing loss without audiometric testing, resulting in potential misclassification of the outcome. Occupational Noise and Solvent Exposures The main independent variables of interest included farmers' self-reported noise exposures and chemical exposures to solvents. Noise exposure was measured with the following question: "Was the operator exposed to high levels of noise from any of the following sources during the past 12 months? (Mark all that apply)". Response options included tractor, combine, implements, power tools, and other noise. We combined the response options into a single binary variable for any noise exposure = 1 and no noise exposure = 0. Chemical exposure via inhalation was measured through the following question: "Was the operator exposed to high levels of any of the following air contaminants during the past 12 months? (Mark all that apply)". The response options were categorized as none, grain/ feed/hay dust, animal confinement dust, field/road dust, manure/silage gases, anhydrous ammonia, fuels/solvents/paints, and other. Chemical exposures via dermal/skin contact were measured through the following question: "Was the operator exposed to any of the following chemicals or animal-based allergens while working during the past 12 months? (Mark all that apply)". The response options were categorized as none, pesticides/fertilizers, animal/livestock, detergents/disinfectants, fuels/ solvents/paints, and other. Because solvent exposure was indicated in two different exposure routes (inhalation and dermal/skin) with the same response option (fuels/solvents/paints), we created a new variable where the response categories were combined into any solvent exposure = 1 and no solvent exposure = 0. Finally, we created indicator variables for solvent exposure only (yes = 1), noise exposure only (yes = 1), and both exposures present (yes = 1) with no solvent or noise exposure as the reference group. It should be noted that, although solvent exposures can also occur from detergents/ disinfectants and pesticides/fertilizers, we limited the analysis to "fuels/ solvents/paints" responses through inhalation and dermal/skin routes. Individual and Work-Related Covariates Covariates from the data set were included in the analyses if they had an association with hearing loss in at least one peer-reviewed study or an association with hearing loss was considered biologically plausible. Covariates included individual characteristics and work-related factors. Individual level factors included respondent age (in years) and sex (male, female). Work-related covariates included primary occupation (farm/ranch work, other work), percent of time spent on farm/ranch (vs other) work (0%-24%, 25%-49%, 50%-74%, 75%-99%, and 100%), percentage of time using hearing protection when needed, operator status (primary, second, third), and type of agricultural operation (farm, ranch, both). 
Operator status was collapsed into primary versus second/third in the analysis because of a relatively low number of third operators. Agricultural operation type was also dichotomized as farm = 1 versus ranch/both = 0 because of similar exposures related to animal production.

Statistical Analysis

Observations with missing data in key variables were excluded, namely, hearing loss (n = 289) and the covariates sex (n = 86) and age (n = 68). Respondents younger than 18 years (n = 50) were also excluded; they were not included in the mailing, but some respondents chose to enter data for persons younger than 18 years. After deleting observations with missing outcome and demographic variables, we used listwise deletion in the analyses and reported missing values on covariates in Table 1. Analyses were performed using SAS version 9.4 (Cary, NC). Results were considered significant at α = 0.05. We began with a contingency table analysis to test for mutual, joint, and marginal independence between solvent exposure, noise exposure, and hearing loss. We calculated conditional ORs for hearing loss using solvent exposure as the explanatory variable and noise exposure as the stratification variable. Mutual independence was tested using a loglinear model containing all three effects. Joint independence testing was conducted using χ² tests for independence on the four exposure categories with hearing loss (4 × 3 table). Marginal probabilities were calculated by summing over the noise categories in a 2 × 2 table and testing for independence of solvent exposure on hearing loss. Conditional ORs were tested for equality using the Cochran-Mantel-Haenszel χ² test. We calculated means and standard deviations (SDs) and used an analysis of variance to assess the association between age as a continuous variable and the three-level hearing loss variable. The Jonckheere-Terpstra test was used to test whether the frequencies in the percentage of time spent working on the operation were increasing across levels of hearing loss. Percentage of time using hearing protection was highly skewed (median, 10), so comparisons across hearing loss categories used the nonparametric Kruskal-Wallis test. Chi-square tests and t tests were used to assess potential confounders and statistically significant associations among variables. A total of six potential confounding variables were associated with our exposure groups and hearing loss and were entered into a full multinomial regression model. We used a generalized logit link function because the proportional odds assumption did not hold (P < 0.0001). Treating hearing loss as a nominal variable allowed us to compare the mild and moderate/severe groups to the group without hearing loss separately. We used a hierarchical approach by first examining the indicator variables for the combined solvent and noise exposure, followed by adding individual (sex and age) and work-related characteristics. For each adjusted model tested, explanatory variables were added incrementally and remained in the model if their inclusion produced a decrease in the Akaike information criterion fit statistic or produced a significant likelihood ratio test. We estimated effect sizes using ORs and their 95% confidence intervals (CIs).
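The analyses above were run in SAS; as an illustration of the stratified odds-ratio logic, the sketch below computes conditional odds ratios with Wald 95% confidence intervals and chi-square tests of independence in Python. The 2 × 2 cell counts are placeholders, not the actual FRHSS tallies.

```python
# Rows are solvent exposure (no/yes), columns are hearing loss (no/yes); counts are placeholders.
import numpy as np
from scipy.stats import chi2_contingency

def odds_ratio_ci(table, z=1.96):
    """Wald odds ratio and 95% CI for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = np.asarray(table, dtype=float)
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

no_noise_stratum = [[900, 400], [500, 250]]     # placeholder counts, no noise exposure
with_noise_stratum = [[600, 700], [700, 1200]]  # placeholder counts, noise exposure present

for name, tab in [("no noise", no_noise_stratum), ("noise", with_noise_stratum)]:
    or_, lo, hi = odds_ratio_ci(tab)
    chi2, p, _, _ = chi2_contingency(tab)
    print(f"{name}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f}), chi-square p = {p:.3g}")
```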
Descriptive Statistics

The 2018 and 2020 combined FRHSS produced data for 5651 farming operations and 7915 individual operators. The response rate at the farm level was 19% in 2018 and 14% in 2020. Of the 7915 individual operators, a total of 7495 respondents met our inclusion criteria and were selected for statistical analysis (Fig. 1). From this sample, respondents who identified as male represented 85% (n = 6344) of our respondents, with females representing 15% (n = 1151). More than half (n = 3991 [53%]) of the operators specified some degree of hearing loss as diagnosed by a physician or self-assessed, of which mild hearing loss was most prevalent at 63% (n = 2504). Significantly elevated differences in hearing loss were found for males, primary operators, operators working on a farm, and those whose primary occupation was farming and/or ranching (Table 1). Severity of hearing loss had a positive linear trend in association with participant age. The mean ages were 52.4 years (SD, 15.5 years) for those without hearing loss, 60.8 years (SD, 11.0 years) for those with mild hearing loss, and 66.9 years (SD, 10.2 years) for those with moderate/severe hearing loss. The differences were highly significant (P < 0.0001). Percentage of time spent farming (vs other occupation) also showed a positive increase over hearing loss categories (P < 0.0001). Characteristics of those exposed to only noise, only solvents, both noise and solvents, or neither were similarly distributed among respondents who indicated mild or moderate/severe hearing loss. Of the 3955 respondents who indicated coexposure to noise and solvents, 59% (n = 2337) indicated some degree of hearing loss. In those without exposure to both noise and solvents, noise exposure was more prevalent than solvent exposure (Fig. 2).

Solvent exposure, noise exposure, and hearing loss were not mutually independent (χ²₇ = 1507, P < 0.0001). The hypothesis test for joint independence, asking whether solvent and noise exposure were jointly independent of hearing loss, was strongly rejected (χ²₃ = 341, P < 0.0001). As expected, marginal independence testing whether solvents were associated with hearing loss, summing over noise exposure, was rejected (χ²₂ = 74.5; P < 0.0001; marginal OR, 1.53; 95% CI, 1.38-1.70). The conditional OR (95% CI) for the association of solvent exposure without noise exposure was smaller, 1.10 (0.98-1.25), than for solvent exposure with noise exposure, 1.50 (1.11-2.02). The Cochran-Mantel-Haenszel test showed these ORs to be significantly different (P = 0.047). Taken together, these results suggested a joint effect of solvent and noise exposure on hearing loss.

Table 2 presents the unadjusted odds of having hearing loss (mild or moderate/severe) by exposure to noise, solvents, and both combined. Respondents exposed to noise only, as well as to noise and solvents together, had more than three times higher odds of having mild hearing loss and moderate/severe hearing loss. For solvents and hearing loss, the association was significant with mild hearing loss but not moderate/severe hearing loss. Controlling the multinomial model for age and sex increased the ORs of mild hearing loss with noise, solvents, and the combination of both exposures. In the partially adjusted model, noise exposure (OR, 4.46) and both noise and solvents (OR, 5.91) also increased the odds of having moderate/severe hearing loss. In the partially adjusted model, exposures to solvents alone did not demonstrate a statistically significant association with moderate/severe hearing loss (OR, 1.42; 95% CI, 0.92-2.19).

Multinomial Logistic Regression

In the final fully adjusted model, age, sex, and farm/ranch characteristics were added into the model.
When examining exposures to noise only in the fully adjusted model, ORs slightly decreased from our partially adjusted model for both mild (OR, 3.46) and moderate/severe (OR, 4.42) hearing loss outcomes. Compared with our unadjusted (OR, 1.50) and partially adjusted (OR, 1.62) models, mild hearing loss among those exposed only to solvents demonstrated its largest increase in the final, fully adjusted model (OR, 1.78). Furthermore, farmers and ranchers with moderate/severe hearing loss were over six times more likely to have been exposed to a combination of noise and solvent exposures than those without hearing loss (OR, 6.03), the highest effect size among all models. Effect Modifiers In our secondary aim, we hypothesized that the use of hearing protection may modify the effects of our exposures of interest; however, there was no significant association between hearing loss and percentage of time wearing hearing protection (Kruskal-Wallis χ 2 = 2.06, P = 0.36). Of those respondents who reported any use of hearing protection (n = 3680), the mean percentage using hearing protection was low, approximately 31% in each hearing loss group with a median of 10%. DISCUSSION The current study used an analytical approach to investigating the multifactor effects of noise and solvent exposures on hearing loss among farmers and ranchers in the seven US states using data collected from the CS-CASH FRHSS in 2018 and 2020. After applying inclusion criteria, this study provided hearing loss data for 7495 respondents. The prevalence of mild and moderate/severe hearing loss in operators from the current study was 33% and 20%, respectively. Although there is difficulty in estimating the true prevalence of hearing loss among farmers and ranchers with estimates ranging from 11% to 80%, our estimates may be representative of the population within the geographical area sampled. [32][33][34][35] Hearing loss characterized by pure tone audiometry is considered the criterion standard for assessing hearing loss; however, research has indicated perceived hearing loss among agricultural workers to be fairly representative of actual hearing loss 36,37 and perhaps even a stronger predictor of injuries than pure tone audiometry. 24,38 We found significant associations between hearing loss and work-related characteristics. The highest hearing loss prevalence was in primary operators, those with primary occupation as farm/ranch work, and those who spent greater than 75% of their time performing farm/ranch work. These findings are in concordance with other studies on the prevalence of hearing loss in farmers, and their role and level of participation in agricultural work. 2,6,24,25,32 Age and sex as biological factors have been previously linked with hearing loss among unexposed and noise exposed populations. [39][40][41] Age and sex were significantly associated with hearing loss also in the current study; as age increased, hearing loss also increased. However, deciphering whether hearing loss is a consequence of age, noise, or a combination of both cannot be discerned without more extensive occupational history and hearing test results. In a 2019 systematic review of occupational hearing loss, the authors iterated the difficulty to distinguish NIHL from age-related hearing loss (ARHL), as ARHL increases with age, but NIHL also often begins after years of excessive occupational noise exposure. 42 Sex is also important in the etiology of hearing loss and modeling factor associated with hearing loss. 
41 Like age, sex differences have been observed in ARHL and NIHL. [41][42][43] A lifetime prevalence of physical and chemical exposures, genetic and heritability factors, and physiological changes associated with aging limit the assessment of sex as a causative contributing factor to hearing loss. 41 However, evidence among agricultural populations has suggested that men have a higher propensity to experience hearing loss younger 27,44 and at higher frequencies. 45 We hypothesized that coexposure of noise and solvents would elevate the risk of hearing loss more than either exposure alone. We found this to be the case, in univariable and adjusted regression models. The analysis of mutual, joint, and marginal independence of noise exposure, solvent exposure, and hearing loss showed no mutual dependence. This analysis indicated a joint effect of solvent and noise exposures on hearing loss. In our unadjusted, partial, and fully adjusted models, the ORs suggested that respondents with both solvent and noise exposure were at higher odds of either degree of hearing loss compared with noise exposure alone or solvent exposure alone. Controlling for sex, age, primary operator, primary occupation of operator, and working on a farm strengthened these associations. However, most of the increased odds were due to adjusting for sex and age with farm characteristics incrementally increasing the effect size. We found that nearly 60% of participants who indicated mild or moderate/severe hearing loss were exposed to a combination of noise and solvents. The relationship of hearing loss with coexposures to noise and solvents is complicated because previous research has demonstrated that exposures to solvents occur from a range of activities in farmwork and often when using, maintaining, and repairing noise producing machinery and equipment. 14,15,46,47 For decades, hazardous noise from agricultural equipment and machinery has been implicated with NIHL among farmers and ranchers. 5,6,32,48,49 Although advancements in technology and design of agricultural machinery and equipment have aided in reducing excessive noise, evidence suggests that farmers are still often using and servicing decades-old vehicles, machinery, and equipment. 14,15,46,48 Consequently, servicing of farm equipment is connected with ototoxic chemicals that include solvents. A study in Kentucky found repeated dermal contact with solvents during farm equipment repair/maintenance/ service multiple times a month. 15 Various types of solvents were used including gasoline, diesel fuel, degreasers, oils, and hydraulic fluid. 15 Although hearing loss was not examined in the Kentucky study, chemical solvents previously demonstrated as ototoxic in either animal or human studies 50,51 were found as high as 36000 μg for toluene and 5700 μg for xylene on farmer's hands in the Kentucky study, with no statistical difference indicated between personal protective equipment use and exposure. 15 The Agricultural Health Study addressed activities involving solvent exposure (painting, solvents used for cleaning, and gasoline used for cleaning) and found that all metrics using solvents were associated with elevated odds of wheeze. 16 Monthly solvent use ranged from 23% to 40% among those with wheeze and 21% to 37% without wheeze. 16 Together, these studies affirm that solvents are used frequently in agricultural work and that solvent exposure is a risk factor for multiple health outcomes. 
To our knowledge, the current study is the first epidemiological study to quantify the coexposure of solvents and noise in relation to hearing loss among farmers and ranchers. Previous studies have found associations between pesticides and/or disinfectants and hearing loss among agricultural workers. [52][53][54] As noted in our methodology, the FRHSS also included questions on pesticides/fertilizers and detergents/disinfectants; however, their association with hearing loss was not a focus of this study. Noise exposure is a well-recognized contributor to hearing loss, but distinguishing the causative, additive, or cumulative contribution of solvent exposure to hearing loss remains challenging, especially among agricultural workers. This ambiguity is partially related to the variability in agricultural farmwork and in how exposures to solvents occur. Exposures may include machinery repair and maintenance, [14][15][16]46,47 spray painting farm equipment or structures, 14,47 or mixing and applying pesticides, 14,27,[52][53][54] all activities in a typical day's work for farmers and ranchers. There is variability also in the types of solvents used, the duration and frequency of use, and whether one chemical agent or a mixture of solvents is used, adding complexity to identifying potential causal agents of hearing loss among agricultural workers. [14][15][16]53 Studies of the association of hearing loss with solvent exposure, alone or in combination with noise exposure, have only recently emerged among agricultural workers. 27,52,54 Physiologically, hearing loss is a quantifiable condition; it is the difference between the expected (healthy) and actual (impaired) sound levels (in decibels) required to hear sounds at specified frequencies (in hertz) in the audible range. What cannot be measured is the insidious detriment of losing something that was once had and will never be fully replaced: the ability to hear. Although there are great difficulties in convincing farmers to wear hearing protection, 35,55 the effect of solvent exposure, either alone or combined with noise exposure, is an added risk that should be addressed when educating farmers about prevention of hearing loss.
Strengths and Limitations
The Farm and Ranch Health and Safety Surveys offer an opportunity to evaluate a wide range of injury and illness outcomes, as well as potential demographic and farm production risk factor variables, from a large sample of farmers and ranchers (N = 7495) in a region that represents about 20% of the agricultural workers and products in the United States. The survey questions enabled evaluating the prevalence of hearing loss at different severity levels, based on self-report of a condition that was either diagnosed or self-assessed. The questions also enabled quantifying the presence of exposure to solvents, whether by the respiratory or dermal route. With the available demographic and farm production variables, it was possible to design statistical analyses to evaluate the risk factors for hearing loss, including noise exposures and chemical/solvent exposures, alone or in combination. The limitations of the study included a low response rate, 16% overall in the two survey years. However, the potential biases from nonresponse may be limited based on analyses of respondent and nonrespondent characteristics, where only minor differences were identified between respondents and nonrespondents. 56 Another limitation involves the quality of data for self-reported outcomes and exposures.
Although many respondents may have had hearing tests, and perhaps hearing aids, we did not ask separately whether the reported hearing loss was diagnosed or was the respondent's own assessment. Similarly, we could not objectively quantify the exposures and instead relied on the respondents' own assessment of their exposures. Both the hearing loss outcome and the associated solvent exposures may have occurred gradually over a long time, with no possibility of establishing a temporal relationship between exposure and outcome. Furthermore, combining a broad range of chemical agents into one group, "fuels/solvents/paints," provides no specificity for identifying the agents that are most harmful. Without a detailed chemical exposure history, including specific solvent types, doses, and frequency of use, it is not possible to identify etiological agents for hearing impairment without potential measurement bias. Many other potential contributors to hearing loss, and confounders, could have been missed; for example, shooting guns, listening to loud music, and motorsport hobbies, as well as personal exposures such as smoking, alcohol consumption, medications, and other lifestyle measures, were not addressed in this study. There is also emerging evidence of an association between combined noise and hand-arm vibration exposure and hearing loss. 57 Similar to solvents, hand-arm vibration may be another occupational exposure overlooked when studying hearing loss among working populations, especially agricultural workers. 58
CONCLUSIONS
A high percentage of farmers and ranchers (33% mild, 20% moderate/severe) reported having diagnosed or self-assessed hearing loss. Noise exposure is a known contributor to hearing loss, and the odds of having hearing loss were higher for those exposed to loud noise in all models. In addition, our study provided new evidence on the association of hearing loss with solvent exposures, either alone or combined with noise exposure. After adjusting for personal and work characteristics, the odds of hearing loss were about threefold higher in those exposed to noise and as high as sixfold higher among those exposed to both noise and solvents. This finding emphasizes the need to reduce not only noise exposures but also exposures to chemicals and solvents. Prevention of chemical and solvent exposures is important for reducing the risk of many chronic conditions, but it is also important in preventing hearing loss.
Human Intestinal Morphogenesis Controlled by Transepithelial Morphogen Gradient and Flow-Dependent Physical Cues in a Microengineered Gut-on-a-Chip Summary We leveraged a human gut-on-a-chip (Gut Chip) microdevice that enables independent control of fluid flow and mechanical deformations to explore how physical cues and morphogen gradients influence intestinal morphogenesis. Both human intestinal Caco-2 and intestinal organoid-derived primary epithelial cells formed three-dimensional (3D) villi-like microarchitecture when exposed to apical and basal fluid flow; however, 3D morphogenesis did not occur and preformed villi-like structure involuted when basal flow was ceased. When cells were cultured in static Transwells, similar morphogenesis could be induced by removing or diluting the basal medium. Computational simulations and experimental studies revealed that the establishment of a transepithelial gradient of the Wnt antagonist Dickkopf-1 and flow-induced regulation of the Frizzled-9 receptor mediate the histogenesis. Computational simulations also predicted spatial growth patterns of 3D epithelial morphology observed experimentally in the Gut Chip. A microengineered Gut Chip may be useful for studies analyzing stem cell biology and tissue development. INTRODUCTION Understanding how local gradients of morphogens and their antagonists regulate tissue development remains a fundamental question in developmental biology (Logan and Nusse, 2004;Petersen and Reddien, 2009;Zallen, 2007), and elucidation of these mechanisms could lead to new approaches to stem cell engineering and regenerative medicine (Clevers et al., 2014). Intestinal development is a classic example whereby intestinal villus morphogenesis is known to be controlled by polarized gradients of the morphogens, but the precise physical mechanism remains unknown. For example, Wnt ligands produced by Paneth and mesenchymal cells (Farin et al., 2012) stimulate intestinal epithelial proliferation, whereas bone morphogenetic protein represses Wnt signaling and promotes cytodifferentiation as well as apoptosis as they move vertically along the crypt-villus axis (Biswas et al., 2015;Martini et al., 2017;Sato and Clevers, 2013). However, Wnt antagonists, such as Dickkopf-1 (DKK-1), are known to inhibit Wnt/b-catenin signaling (Bafico et al., 2001), and it remains unclear how all of these factors interact with each other to maintain intestinal homeostasis in vivo. It has not been possible to dissect the mechanism by which the morphological three-dimensional (3D) formation of epithelium occurs in the human intestine under controlled conditions because it is difficult to recreate the localized gradient of morphogens and their antagonists in conventional static cell culture models. Intestinal organoids derived from intestinal crypts or single intestinal stem cells have been used to study crypt regeneration and crypt-epithelial domain formation in vitro (Farin et al., 2016;Sato et al., 2009. However, the localized morphogen gradients that drive crypt formation are randomly organized in organoid cultures, and thus, it is impossible to dissect the molecular and biophysical mechanisms that orchestrate the regulated morphogenesis. Therefore, there is a critical need for physiological tissue models that can control spatiotemporal gradients of morphogens and their antagonists with a defined developmental axis in a human organ-relevant context. 
Human Organ-on-a-Chip (Organ Chip) technology, which involves the development of microfluidic cell culture devices that recreate the physical and biochemical microenvironment of key functional units of living human organs, offers an alternative approach to study intestinal structure and function.
[Figure 1 legend, panels B-D: (B) Differential interference contrast (DIC) top-down views at low and high magnification and a fluorescence cross-sectional view highlighting the nuclei (DAPI) of the intestinal villi-like microarchitecture that formed spontaneously when the Caco-2 epithelium was exposed to continuous flow (30 μL/h) in both channels and cyclic mechanical strain (10%; 0.15 Hz) for approximately 100 h. (C) Human primary organoid-derived 3D epithelial growth in a Gut Chip: DIC top-down views at low and high magnification and a vertical cross-sectional view with the plasma membrane fluorescently visualized. (D) A schematic diagram and a phase contrast view of a planar monolayer of Caco-2 cells cultured in a static Transwell insert for 8 weeks.]
We previously described a Gut Chip device lined by an intact monolayer of human Caco-2 intestinal epithelium, which spontaneously forms intestinal villi-like 3D structures when cultured under continuous flow and cyclic peristalsis-like mechanical deformations (Kim and Ingber, 2013). These microengineered villi-like epithelial structures recreate all four differentiated cell types of the small intestine (absorptive, goblet, enteroendocrine, and Paneth) and contain proliferative cells limited to their basal crypts. This 3D epithelium also exhibits physiological migration of proliferative cells from the crypt to the villus tip, formation of a specialized apical brush border, augmented barrier function, increased drug-metabolizing cytochrome P450 activity, and enhanced mucus production relative to static cultures (Kim and Ingber, 2013). In addition, the microfluidic Gut Chip model has been used to co-culture anaerobic commensal or pathogenic gut microbiome with living human intestinal epithelium for extended periods and to recapitulate the pathophysiology of intestinal inflammation and small intestinal bacterial overgrowth in vitro (Kim et al., 2016;Shin et al., 2019). Genome-wide transcriptome analysis confirmed that Caco-2 cells also exhibit a highly differentiated intestinal epithelial phenotype similar to that of the normal human ileum when cultured in the Gut Chip (Kim et al., 2016), even though Caco-2 cells were originally isolated from a human colorectal cancer and carry truncating mutations in the adenomatous polyposis coli (APC) tumor suppressor and β-catenin proteins (De Bosscher and Nicolas, 2004;Ilyas et al., 1997). By leveraging the Gut Chip, we also identified that epithelial barrier dysfunction is the culprit trigger that initiates the onset of intestinal inflammation under complex host-microbiome cross talk (Shin and Kim, 2018). Formation of villi-like structures by Caco-2 cells was also previously observed by another group (Pusch et al., 2011), although their structure and function were not fully characterized. Thus, the mechanism of this epithelial morphogenesis remains unknown.
The Gut Chip is a two-channel microfluidic device that contains human intestinal epithelial cells cultured on one surface of a porous membrane that separates the channels, which makes it possible to independently control the fluid flow in each channel and to establish molecular gradients across the epithelium. As Wnt signaling is known to mediate intestinal villus morphogenesis, and Caco-2 cells secrete both Wnt molecules (Munemitsu et al., 1995;Voloshanenko et al., 2013) and the Wnt-antagonist DKK-1 glycoprotein (Saaf et al., 2007), we explored whether the human Gut Chip can be used to analyze how gradients of Wnt agonists and antagonists interplay to promote intestinal morphogenesis under controlled conditions in vitro. We also extended this work to Gut Chips lined by primary intestinal epithelial cells originating from biopsy-derived intestinal organoids to confirm the physiological relevance of our findings.
Basal Fluid Flow Is Crucial for Intestinal Epithelial Morphogenesis
The Gut Chip is a microfluidic cell culture device composed of transparent silicone polymer (polydimethylsiloxane) that contains two apposed hollow microchannels separated by a flexible, extracellular matrix (ECM)-coated, porous membrane (Huh et al., 2013). The channels are lined on each side by hollow chambers that are exposed to cyclic vacuum to repeatedly strain and relax the porous membrane, thereby mimicking peristalsis-like deformations ( Figure 1A). As previously demonstrated, when human Caco-2 intestinal epithelial cells are cultured on the upper surface of the porous flexible membrane and exposed to physiological fluid flow (30 μL/h; 0.02 dyne/cm²) and cyclic mechanical deformations (10% cell strain; 0.15 Hz frequency), these epithelial cells spontaneously undergo 3D intestinal morphogenesis ( Figure 1B), with finger-like projections extending vertically up to ~300 μm in height after 5 to 7 days of culture ( Figure S1). Importantly, human primary intestinal organoid-derived epithelial cells also form similar structures when cultured in the Gut Chip under physiological flow and motions ( Figure 1C), as demonstrated previously (Kasendra et al., 2018). On the other hand, a Caco-2 cell monolayer maintained its planar form even when analyzed for up to 8 weeks of culture ( Figure 1D). Previous studies suggested that fluid flow is more important than mechanical deformations for induction of 3D morphogenesis in this system, and when we repeated these studies with or without cyclic mechanical strain, we confirmed that the 3D intestinal histogenesis occurs under both conditions ( Figure 1E).
[Figure 1 legend, panels F and G: (F) A schematic diagram (an arrow indicating the fluid flow) and a phase contrast view of Caco-2 cells grown on a single-channel microfluidic device without mechanical strain in the presence of flow and apical shear stress (0.02 dyne/cm²) for 150 h; a white arrow indicates a dome formed in the cell monolayer. (G) A schematic diagram and a phase contrast view of the epithelium in a Gut Chip in which human microvascular endothelial cells ("Endo") were pre-cultured on the opposite side of the membrane from the Caco-2 intestinal epithelial cells in the lower channel to block access of fluid shear stress (30 μL/h) to the basolateral surface of the epithelium, without mechanical deformations; the schematic depicts the point of the co-culture at which both endothelium and epithelium had independently formed monolayers. Scale bars, 50 μm.]
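As a quick plausibility check of the quoted operating point (not taken from the authors' calculations), the wall shear stress produced by 30 μL/h in a channel of the dimensions given in the Transparent Methods (1 mm wide, 150 μm high) can be estimated with the standard parallel-plate relation τ = 6μQ/(wh²); the culture-medium viscosity below is an assumed typical value.

# Plausibility check (not from the paper): parallel-plate estimate tau = 6*mu*Q/(w*h^2).
Q = 30e-3 / 3600.0   # volumetric flow rate: 30 uL/h expressed in cm^3/s
w = 0.1              # channel width in cm (1 mm)
h = 0.015            # channel height in cm (150 um)
mu = 0.008           # dynamic viscosity of culture medium in dyn*s/cm^2 (~0.8 cP, assumed)
tau = 6.0 * mu * Q / (w * h ** 2)
print(f"estimated wall shear stress: {tau:.3f} dyne/cm^2")   # ~0.02, consistent with the text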
Thus, we then focused on how perfusing fluid flow similar to that observed within the lumen of the living intestine, while flowing medium below to mimic the vascular or interstitial flows that exist in vivo (Granger, 1981), might influence this developmental process. We first explored whether apical shear stress due to luminal fluid flow above the epithelium is responsible for induction of the epithelial morphogenesis. To test the effect of apical shear stress independent of the basal fluid shear generated in the lower channel, we cultured the intestinal epithelium without mechanical deformation in a single-channel microfluidic device. Even though these cells were cultured under fluid flow and experienced apical shear stress, they did not show the 3D morphology seen in the two-channel microfluidic Gut Chip ( Figure 1F). Instead, they grew as an epithelial monolayer with occasional epithelial domes, as previously described in other static Caco-2 cell cultures (Ramond et al., 1985). Another possibility is that application of fluid shear stress to the basal surface of the Caco-2 cells could drive formation of the villi-like 3D structure, given that the central membrane contains multiple large pores (10 μm in diameter). However, when we co-cultured the epithelial cells with capillary endothelial cells pre-grown on the surface of the porous membrane in the lower microchannel to eliminate the direct basal mechanical signal ( Figure S2), Caco-2 cells continued to form 3D morphology in the upper channel in the absence of mechanical deformations ( Figure 1G). Thus, application of fluid shear stress to neither the apical nor the basal surface of the intestinal epithelium is responsible for inducing 3D villi-like morphogenesis. Next, we explored the possibility that the presence of continuous fluid flow might remove secreted molecules, such as Wnt antagonists, which have been reported to suppress villi formation in past in vitro and in vivo studies (Kuhnert et al., 2004;Pinto et al., 2003). Interestingly, when we flowed medium simultaneously through both the upper and lower channels (Figure 2A, left), or through the lower channel alone while maintaining a static epithelial cell culture in the upper channel ( Figure 2A, middle), 3D morphogenesis progressed normally; however, it took approximately 1.5 times longer to form villi when fluid was flowed only through the basal channel. In contrast, when the experimental protocol was reversed and fluid was flowed only above the epithelium in the upper channel, this epithelial morphogenesis was completely inhibited ( Figure 2A, right). One explanation for these observations is that the intestinal epithelial cells might secrete a certain type of inhibitory factor in a polarized manner, causing it to concentrate in the lower basal channel and thereby feed back via basal membrane receptors to inhibit villi-like epithelial growth. To verify this hypothesis, we collected the conditioned medium from the basolateral side of Caco-2 cells grown for 3 days in Transwells and then flowed this conditioned medium into the lower microchannel of the Gut Chip while fresh culture medium was flowed through the apical channel. Surprisingly, the introduction of the basally collected conditioned medium completely inhibited the 3D morphogenesis of Caco-2 cells in the Gut Chip ( Figure S3).
Moreover, when we cultured the Caco-2 cells on both sides of the same porous membrane, the formation of the villi-like structure was also suppressed in both microchannels ( Figure S4). Taken together, these results suggest that Caco-2 intestinal epithelial cells may secrete inhibitory factors basally that potentially antagonize the 3D morphogenesis of Caco-2 cells. To further investigate this mechanism, we designed a hybrid device ( Figure S5) that holds a Transwell insert and is in contact basally with a lower microfluidic channel that continuously removes epithelial secretomes released into the basal chamber of the Transwell. After Caco-2 cells were grown as a planar monolayer under static conditions for 3 weeks in a Transwell ( Figure 2B, left), the Transwell setup was transferred to the hybrid microfluidic device. While the Caco-2 monolayer was maintained without flow apically, medium was continuously flowed in and out (30 μL/h) through the basal microfluidic chamber, where we observed rapid formation of 3D morphogenesis within 48 h ( Figure 2B, middle). Moreover, this structural formation was similarly induced in static Transwell inserts by simply diluting the basal medium by >100-fold in volume ( Figure 2B, right), which was accomplished by placing the Transwell insert (0.33 cm² in surface area) in a larger culture dish containing 70 mL of static basal culture medium for 120 h.
[Figure 2 legend, panels B and C: (B) A planar Caco-2 monolayer cultured in a Transwell for 3 weeks can be induced to form villi-like protrusions (white arrow) by transferring the Transwell insert to a hybrid microfluidic device and applying constant flow (30 μL/h) in the basal chamber for the next 48 h (middle), or to a larger static culture well containing excess medium (70 mL) for 120 h to dilute basolaterally released secretomes (right); the intestinal epithelium remains a planar monolayer in the static Transwell even after culture for up to 6 weeks (left). (C) A monolayer of human intestinal organoid-derived epithelium pre-cultured in a Transwell insert underwent 3D morphogenesis when the setup was transferred into the hybrid microfluidic device (left), whereas the same organoid-derived epithelium maintained a planar monolayer under the static condition (right). Inset schematics show the experimental setups. Cyan, F-actin; white, nuclei. Scale bars, 50 μm.]
Importantly, we also replicated this 3D histogenesis response using primary intestinal epithelial cells derived from human intestinal organoids ( Figure 2C, left), whereas a 2D monolayer of primary epithelium was maintained under static conditions ( Figure 2C, right). Thus, the removal of inhibitory factors released basally by the polarized intestinal epithelium can rapidly (<2 days) trigger intestinal morphogenesis. We also observed that the cessation of basolateral flow induced the loss of pre-formed 3D epithelial microstructure within 4 days ( Figure 3A). Furthermore, the number of proliferative cells labeled with Ki67 was significantly decreased under basolateral cessation of flow (<7%) compared with the control (>50%) ( Figure 3B). However, a live/dead staining assay revealed that the number of dead epithelial cells was negligible when the basolateral flow was stopped ( Figure 3C), suggesting that the loss of villi-like morphology was caused by reduced proliferation rather than cell death, and occurred without a loss of epithelial barrier function ( Figure S9B, "Control" vs. "BL ceased").
Wnt Antagonists Suppress the On-Chip Morphogenesis of an Intestinal Epithelium
Past work has shown that the canonical Wnt signaling pathway promotes villus morphogenesis in the embryonic intestine (Peifer and Polakis, 2000;Pinto et al., 2003) and in human organoids (Ootani et al., 2009) via autocrine regulation. To explore whether Wnt signaling mediates the 3D histogenesis in the human Gut Chip, we separately added human recombinant Wnt antagonists, including DKK-1 (rDKK-1), Wnt inhibitory factor 1 (rWIF-1), secreted frizzled-related protein 1 (rsFRP-1), and Soggy-1/DKK-like 1 (rSoggy-1/DKKL-1) (Kawano and Kypta, 2003), to the culture medium that was perfused through the lower microchannel of Gut Chips with pre-formed villi-like epithelium ( Figure 4A, "Control"). As expected, all of these Wnt antagonists induced the loss of 3D morphology within 48 h of exposure ( Figures 4A and S6), and the percentage of epithelial surface that exhibited morphologically blunted lesions was significantly higher in the cultures treated with each of the Wnt antagonists than in the control ( Figure 4A).
[Figure 3 legend, panel C: Viability of Caco-2 epithelium cultured in the absence ("BL ceased") or presence ("Control") of basolateral flow in the Gut Chips, assessed by staining with Calcein AM (live, green) and ethidium homodimer-1 (dead, red), with quantification of cell viability (N = 10). The basal flow was ceased for 48 h after the 3D villi-like structure had formed in the Gut Chips by culturing for ~100 h. N.S., not significant; **p < 0.001; scale bars, 50 μm.]
[Figure 4 legend, panels C and D: (C) Phase contrast views and quantification (N = 10) showing that addition of increasing concentrations of rDKK-1 (0, 100, and 500 ng/mL) resulted in a dose-dependent suppression of 3D epithelial morphogenesis; the rDKK-1-containing medium was perfused into the basolateral microchannel at 72 h after seeding, and the overall time course of villus morphology is provided in Figure S7. (D) Phase contrast views and quantification (N = 10) showing that the inhibitory effect of rDKK-1 (500 ng/mL) was suppressed by addition of a blocking anti-DKK-1 antibody (20 μg/mL); rDKK-1 and the anti-DKK-1 antibody were applied to the villi-like epithelium for 48 h. Scale bars, 50 μm; *p < 0.001, **p < 0.05.]
We then selected the most potent and well-characterized antagonist, DKK-1 (Aguilera and Munoz, 2007;Gonzalez-Sancho et al., 2005), to further investigate the mechanism of inhibition. First, we confirmed that Caco-2 cells cultured in a Transwell secrete approximately 5.3-fold more DKK-1 (p < 0.001) into the basal chamber than into the apical side ( Figure 4B), in good agreement with our observations in Figures 2 and 3. Addition of rDKK-1 to the basal channel resulted in a statistically significant (p < 0.05), dose-dependent reduction of the height of the 3D epithelium ( Figures 4C and S7). Moreover, when we analyzed the same location in the chip over time, we found that, although the presence of the rDKK-1 antagonist for 48 h resulted in the loss of the villi-like structure ( Figure S8A), removal of rDKK-1 resulted in rapid restoration of epithelial 3D growth within 24 h ( Figure S8B). However, the villi-like microarchitecture involuted once again when we resumed rDKK-1 treatment for an additional day ( Figure S8C).
We also confirmed that the inhibition of 3D epithelial morphogenesis by rDKK-1 could be suppressed by co-treatment with an anti-DKK-1 monoclonal antibody that neutralizes the antagonistic function of DKK-1 ( Figure 4D). We also confirmed a decreased population of Ki67-positive proliferative cells when rDKK-1 was applied to the Caco-2 villous epithelium for 48 h ( Figure S9A), whereas the barrier integrity was well maintained ( Figure S9B). Furthermore, when we added the same anti-DKK-1 antibodies to Caco-2 monolayers grown in static Transwells ( Figure S10), the average height of the cell monolayer significantly increased compared with the control ( Figure S10C; p < 0.0001). Interestingly, nucleated cells were observed beginning to extend above the surface of the planar epithelial monolayer ( Figure S10B, zoomed-in inset), suggesting that the addition of anti-DKK-1 antibody contributed to the initiation of morphogenesis.
Computer Simulation Predicts Transepithelial Morphogen Gradient
We then built a multi-physics, finite element model of the Gut Chip to better understand how polarized secretion of Wnt antagonists and the resulting gradient of a Wnt inhibitor may contribute to the spatiotemporal control of epithelial 3D growth patterns. This simplified computational model assumes that DKK-1 is the most relevant and potent morphogen antagonist and computes the concentrations of DKK-1 and Wnt within the geometry of the Gut Chip, taking into account the relative production rate of both molecules by the intestinal epithelium, diffusion through the medium, and convection due to the fluid flow. We estimated the diffusion coefficient of DKK-1 to be two orders of magnitude greater than that of Wnt based on past work, and set these values at 9.3 × 10⁻¹¹ and 6.9 × 10⁻¹³ cm²/sec, respectively. The production rate of DKK-1 by the Caco-2 cells (421 pg/10⁶ cells/h) was taken from a previous study, and we postulated the same production rate for Wnt because binding of Wnt to its receptors stimulates production of both Wnt itself and DKK-1. Quantitation of Caco-2 cell numbers in the Gut Chip revealed that the epithelial cell layer contains ~5.0 × 10⁵ cells/chip (~4.5 × 10⁶ cells/cm²). We first performed the simulation to explore whether the accumulated DKK-1 can form a stable gradient in the lower microchannel of a simplified 2D representation of the Gut Chip ( Figure S11). Under static conditions (i.e., 0 μL/h), DKK-1 simply accumulated in the basal channel; however, as the flow rate was increased up to 30 μL/h, the simulation predicted that a gradient of DKK-1 would be generated, with lower concentrations at the bottom of the basal channel and toward the inlet. Furthermore, when the flow rate was increased above 50 μL/h, the model predicted that DKK-1 levels would fall to almost zero in the lower channel. We then used a more complex 3D model to analyze the concentration of DKK-1 along the length and width of a perfused channel in the Gut Chip, assuming that the polarized cells secrete DKK-1 from their basolateral surface. This model predicted that the concentration of DKK-1 near the inlet of the microchannel would be relatively low, whereas the level of DKK-1 would increase by 10-fold or more near the channel outlet ( Figures 5A and S12).
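The finite element model itself is not reproduced here. As a much cruder, back-of-the-envelope illustration of the same washout logic, a plug-flow mass balance using only the production rate and cell number quoted above (and assuming that all basolaterally secreted DKK-1 enters the lower channel, with a well-mixed cross-section and no membrane resistance) already reproduces the qualitative inlet-to-outlet gradient and its flow-rate dependence.

# Not the paper's COMSOL model: a plug-flow estimate of DKK-1 accumulation along the
# 1 cm lower channel. Production rate and cell number are taken from the text above;
# the well-mixed cross-section assumption is mine and ignores the membrane and
# diffusion-limited transport across the channel height.
secretion_rate = 421e-12 / 1e6 / 3600.0 * 5.0e5   # g/s of DKK-1 secreted basolaterally per chip
channel_length = 1.0                              # cm

def axial_concentration(x_cm, flow_ul_per_h):
    """Approximate DKK-1 concentration (ng/mL) a distance x_cm downstream of the inlet."""
    q = flow_ul_per_h * 1e-3 / 3600.0             # flow rate in cm^3/s
    return secretion_rate * (x_cm / channel_length) / q * 1e9

for x in (0.1, 0.5, 1.0):                         # upstream, middle, downstream positions
    print(f"x = {x:3.1f} cm at 30 uL/h : ~{axial_concentration(x, 30):.1f} ng/mL")
for q in (15, 30, 100, 200):
    print(f"outlet at {q:>3} uL/h       : ~{axial_concentration(1.0, q):.1f} ng/mL")

In this crude picture the concentration rises roughly linearly from inlet to outlet and scales inversely with the flow rate, consistent with the simulated pattern, although the full model additionally resolves diffusion across the channel height and the parabolic velocity profile described below.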
Interestingly, when we experimentally analyzed villus growth at 80 h in regions corresponding to the upstream, middle, and downstream positions in the microchannel ( Figure 5A), we observed a gradient of pseudo-villous growth that closely mirrored the pattern predicted by the computational model. For example, vigorous 3D morphogenesis was observed in the upstream region, whereas the least formation of the villi-like structure was found in the downstream portion of the microchannel ( Figure 5B). The 3D computational model also predicted a parabolic distribution of DKK-1 accumulation at the surface of the porous ECM-coated membrane, with lower concentrations in the center of the channel and higher concentrations near the sidewalls of the channel ( Figures 5A and S12). This result is attributed to the parabolic profile of laminar flow in the microchannel: the linear flow velocity is highest in the center, washing away DKK-1 at a higher rate, whereas DKK-1 accumulates near the channel walls, where the flow is lowest. In fact, we experimentally confirmed that the growth of the 3D epithelium exhibited a similar pattern on-chip, in which both the height and abundance of the villi-like structure were higher in the middle of the channel, where the inhibitor levels are lower, than at its sides near the wall ( Figure 5C).
[Figure 5 legend, panel A (partial): A simulation result from Figure S11 displaying the spatial pattern of DKK-1 concentrations at the bottom surface of the membrane under flow (30 μL/h), mapped onto a schematic of a Gut Chip, indicating that DKK-1 forms a parabolic gradient across the channel width and a concentration gradient that is lowest near the inlet and highest downstream near the outlet. Color bar, scaled range of DKK-1 concentrations (unit, ×10⁻⁹ mol/m³). The inset at the right shows the structure of a Gut Chip, where the orange, green, and blue boxes overlaid on the device design indicate the locations of the upstream, midstream, and downstream snapshots of the microchannel shown in (B); red arrows indicate the flow direction.]
Interestingly, we also observed experimentally that an intermediate flow rate regime (70-120 μL/h) resulted in faster growth of taller villi-like epithelial structures per unit time, whereas flow rates below 30 μL/h or above 120 μL/h produced significantly slower morphogenesis ( Figures 6A and S13). We then compared the effects of three representative flow rates (30, 100, and 200 μL/h) in the Gut Chips versus static Transwell cultures at 48 h and performed quantitative real-time polymerase chain reactions (qPCR) targeting 92 human Wnt-related genes. The qPCR results revealed that only three genes, the G protein-coupled receptor (GPCR) frizzled 9 (FZD9), Myc (MYC), and lymphoid enhancer-binding factor 1 (LEF1), were significantly (p < 0.05) upregulated in the Gut Chips under flow compared with Transwells. Consistent with the predictions from the computational model, the Wnt receptor FZD9 exhibited the highest and most significant (p < 0.01) upregulation (~66-fold increase) at a flow rate of 100 μL/h, whereas cells in chips exposed to lower or higher flow rates (30 or 200 μL/h, respectively) exhibited smaller increases in FZD9 (~49- and ~36-fold, respectively) ( Figure 6B). However, there was no significant difference in MYC and LEF1 regardless of the flow rate.
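The text does not specify how the qPCR fold changes were computed; for orientation only, the conventional 2^(-ΔΔCt) relative-quantification calculation is sketched below with purely hypothetical Ct values. A drop of roughly six cycles in the normalized FZD9 Ct on-chip relative to the static Transwell would correspond to about 2^6 = 64-fold, i.e., the order of the ~66-fold increase reported above.

def fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method."""
    d_ct_test = ct_target_test - ct_ref_test   # target Ct normalized to a reference gene (test sample)
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # same normalization for the control sample
    return 2.0 ** (-(d_ct_test - d_ct_ctrl))

# Hypothetical Ct values only; none are reported in the text.
print(fold_change(ct_target_test=22.0, ct_ref_test=18.0,
                  ct_target_ctrl=28.0, ct_ref_ctrl=18.0))   # -> 64.0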
Immunofluorescence confocal microscopy showed that the expression level of FZD9 was significantly (p < 0.001) higher under the fluidic than under the static culture condition in both Caco-2 ( Figure 6C) and organoid-derived epithelium ( Figure S14). A computational simulation accounting for the production rates of DKK-1 and Wnt molecules at the basolateral membrane revealed that the relative ratio of DKK-1 to Wnt at steady state is almost constant at flow rates greater than ~30 μL/h, whereas the ratio increases exponentially as flow rates approach static conditions (i.e., 0 μL/h) ( Figure 6D). This result suggested that the flow-dependent 3D morphogenesis of Caco-2 cells is predominantly orchestrated by the FZD9 receptor, because the DKK-1/Wnt ratio was almost constant regardless of the flow rate above 30 μL/h. To confirm whether FZD9 is a critical receptor for intestinal morphogenesis, anti-FZD9 blocking antibodies were added to the villi-like epithelium pre-established in the Gut Chip. We found that the infusion of anti-FZD9 antibodies (20 μg/mL) into both the apical and basal microchannels of the Gut Chip substantially altered the 3D morphology ( Figure 6E). Epithelial height was 122.5 ± 4.2 μm in the control group, whereas it was significantly (p < 0.05) reduced to 75.8 ± 2.7 μm when FZD9 receptors were blocked ( Figure 6E). Taken together, our findings suggest that FZD9 is a key receptor that mediates control of intestinal morphogenesis through its interactions with Wnt and DKK-1 in this microengineered model.
DISCUSSION
Our mechanistic study, leveraging a microphysiological Organ Chip, uncovers the molecular basis of a developmental morphogenic process in vitro that is governed by complex cellular signaling. The Gut Chip enabled separate access to the apical luminal and basal abluminal compartments of this engineered intestinal epithelium, as well as precise, independent control over fluid flow rates, molecular components, and, hence, transepithelial gradients, while allowing high-resolution microscopic imaging. By manipulating biophysical and biochemical cues in the Gut Chip, we discovered that the Wnt antagonist DKK-1 is secreted in a polarized basolateral direction and that its removal by fluid flow in the basolateral microchannel is a crucial factor that directly triggers intestinal 3D morphogenesis in this in vitro model using Caco-2 as well as primary organoid-derived epithelial cells. We also discovered that the Wnt receptor FZD9 mediates this morphogenic response, and that its expression level is dependent on the flow rate and correlates with epithelial differentiation. The experimental results we obtained were verified with computational modeling designed to understand the molecular distribution and dynamics of secreted DKK-1 and Wnt in the Gut Chip. Using a simple computational simulation, we successfully explained the morphogenic patterns of epithelial growth inside the microfluidic channel, reminiscent of intestinal development. Our past finding that human Caco-2 intestinal epithelial cells spontaneously undergo villus morphogenesis and small-intestine-specific cytodifferentiation and histogenesis in the microfluidic Gut Chip (Kim and Ingber, 2013) was surprising because Caco-2 cells cultured under static conditions, or even under microfluidic flow, did not exhibit this response in prior studies. The results of the current study now explain this disparity, because those past studies did not include basolateral flow (Gao et al., 2013;Imura et al., 2009).
However, although fluid flow was found to be more critical than peristalsis-like deformations in triggering the formation of villi-like epithelial microarchitecture, we found that direct application of fluid shear stress to the epithelial cells is not sufficient to induce the morphogenesis in the Gut Chip. This observation suggested that fluid flow might influence histogenesis by altering the delivery or removal of soluble signaling factors. Thus, we explored whether fluid-flow-dependent changes in delivery of Wnt and removal of Wnt-antagonistic molecules such as WIF, sFRP-1, DKKL-1, and DKK-1 modulate structural changes. WIF-1 binds directly to Wnt proteins and prevents the initiation of the Wnt-signaling pathway (Malinauskas et al., 2011). sFRP-1 also binds to Wnt proteins to inhibit Wnt signaling as an antagonist (Bovolenta et al., 2008). DKKL-1, also known as Soggy-1, is a homologue to the DKK family proteins. However, DKKL-1 does not affect canonical signaling of the Wnt/b-catenin pathway, which acts differently from DKK-1 protein (Yan et al., 2012). Regardless of their specific mechanism, all of these Wnt antagonists involuted the preformed Caco-2 villi-like structure. We further studied how the removal of Wnt antagonists mediates the intestinal morphogenesis by using DKK-1, which have been previously implicated in control of intestinal morphogenesis (Crosnier et al., 2006;Ootani et al., 2009) and are known to be produced by Caco-2 epithelial cells Voloshanenko et al., 2013). Our study revealed that the reduction of basolaterally secreted DKK-1, potentially other Wnt antagonizing molecules as well, is the crucial trigger that orchestrates the epithelial histogenesis in the Gut Chip as well as in the hybrid microfluidic device. When we removed secreted DKK-1 under either fluidic or diffusionbased conditions, 3D morphogenesis of intestinal epithelial cells was promoted. This approach was verified even in the presence of an endothelial layer at the opposite side of the porous membrane in the Gut Chip. It is noted that the endothelial layer we cultured in this experiment has a high permeability to large molecules via transcytosis (Fung et al., 2018;Mehta and Malik, 2006). Therefore, DKK-1 that are basolaterally secreted by the Caco-2 cells can be readily transported through the endothelial layer, which potentially leads to the epithelial morphogenesis. In the normal intestinal epithelium, Wnt molecules are mainly localized on the external cell surface where they activate downstream Wnt signaling, and diffusion of Wnt molecules to adjacent epithelial cells is negligible (Farin et al., 2016). Caco-2 cells have shown a continuous activation of Wnt/b-catenin signaling because of truncating mutations in APC and b-catenin (Voloshanenko et al., 2013). Interestingly, autologous secretion of DKK-1 by Caco-2 cells concomitantly occurs in conventional Caco-2 cultures at about 42.1 pg/10 5 cells/h Takahashi et al., 2010). However, it is notable that the presence of DKK-1 does not completely block the proliferation of Caco-2 cells , which has been confirmed in our previous and current studies. As the inhibitory DKK-1 molecules are freely secreted and bind to membrane receptors such as low-density lipoprotein receptor-related protein (LRP) 5/6 (Bafico et al., 2001), these observations are consistent with the mechanism we uncovered here, in which removal of the secreted Wnt inhibitor DKK-1 can directly initiate intestinal morphogenesis in Caco-2 epithelium. 
Furthermore, the presence of DKK-1 can induce disruption of intestinal villi in the in vivo mouse models (Kuhnert et al., 2004), which was also replicated in our current in vitro study (Figure 4). Although the averaged molecular weight of Wnt family members (38-42 kDa) (Gavin et al., 1990) is larger than DKK-1 (28.7 kDa) (Aguilera and Munoz, 2007), our computational model suggests that, as a consequence of production and secretion by Caco-2 cells, the ratio of DKK-1 and Wnt molecules at the steady state is almost constant at flow rates higher than 30 mL/h, suggesting that the ratio of DKK-1/Wnt is independent of flow rate. Thus, the flow-dependent profile of secreted Wnt antagonist molecules, such as DKK-1, is likely a key feature that controls the intestinal 3D morphogenesis in the Gut Chip. We found that the cessation of basolateral fluid flow in the Gut Chip can also induce the loss of preformed villi-like microarchitecture. We hypothesized that the disappearance of 3D epithelium under the cessation of basolateral flow might result from either the increased death of cells or the decreased proliferation of cells. We discovered that the cessation of basal flow significantly decreased the number of Ki67-positive proliferative cells, whereas the control group that experienced continuous basolateral flow maintained proliferative population more than 10-fold. However, neither the cell viability nor the barrier function was compromised in response to both the cessation of basal flow and treatment of rDKK-1, suggesting that the maintenance of proliferative cell population may be a crucial element to sustain the intestinal epithelial morphogenesis and its microarchitecture over time. The directional secretion of DKK-1 in a polarized monolayer of the human intestinal epithelium has not been reported, although secretion and antagonism of DKK-1 in Wnt/b-catenin signaling is a common developmental feature in vertebrates (Farin et al., 2012;Glinka et al., 1998). Furthermore, we also verified the 3D morphogenesis mechanistically using primary epithelial cells obtained from human intestinal organoids in the same Gut Chip as well as in the hybrid microfluidic device. Since the organoid-derived epithelial culture is constitutively supported by the high level of Wnt, R-Spondin, and Noggin , it was evident that the removal of morphogen antagonists by fluid flow in the basolateral side is critical for the intestinal morphogenesis in this primary cell culture model. Although the regeneration of 3D microarchitecture of organoid-derived epithelium was previously reported using human small intestinal organoids (Kasendra et al., 2018), it has not been clear which factor triggers this epithelial morphogenesis. The wave of rostral-to-caudal (oral-to-anal) formation of the villi during the intestine development has been long recognized (Johnson, 1910;Kammeraad, 1942;Walton et al., 2012). This proximal-to-distal wave of development results in a progressive decrease in the height of villi along the length of the intestine from duodenum to jejunum, ileum, and colon (Walton et al., 2012). Thus, the temporal growth pattern observed in the present study in which villi-like structure first emerged in the proximal (upstream) region of the Gut Chip near the inlet where their heights are also the longest, and then they progressively shorten toward the outlet, is remarkably reminiscent of what is observed during vertebrate intestine development (Spence et al., 2011). 
However, our results are different from those of the past in vivo study, which suggested that Hedgehog (HH)-dependent intestinal mesenchymal cell clusters orchestrate the patterning and generation of villi within the adjacent intestinal epithelium (Walton et al., 2012). Thus, it will be interesting to explore whether the Wnt signaling we discovered occurs in vivo and, if so, how it interplays with HH signaling-mediated epithelial-mesenchymal interactions. Mesenchymal cells (e.g., intestinal fibroblasts) that produce tissue-specific morphogens (Powell et al., 2011) could be potentially integrated into the Gut Chip to explore this mechanism in vitro in future studies. We initially expected that the growth rate of villi-like epithelium would proportionally increase as a function of flow rate because higher flow rates should remove Wnt antagonists more efficiently. However, among six different flow rates in a range of 15-200 mL/h, the epithelial growth we observed experimentally was slower at flow rates >100 mL/h compared with the intermediate flow rate regime. This discrepancy suggested that there may be an additional factor that orchestrates the epithelial morphogenesis independently of constitutive competition between Wnt agonists and antagonists. Our qPCR analysis revealed that the expression of the Wnt receptor FZD9 significantly increased among 92 human Wnt-related genes at the intermediateflow-rate regime compared with lower or higher rates, which correlated directly with the degree of morphogenesis. Immunofluorescence imaging revealed that the fluid flow significantly increased the expression of FZD9 receptor in both Caco-2 and human organoid-derived epithelium, suggesting that FZD9 expression is dependent on fluid flow. Moreover, blocking the upstream signaling by neutralizing FZD9 remarkably reduced the formation of villi-like structures in Caco-2 cells. We revealed that the expression of FZD9 is regulated by the fluid flow, and it represents a morphogenetic control mechanism. The function of FZD9 is poorly understood compared with other well-characterized FZD receptors (Bafico et al., 1999). Thus the use of the microfluidic human Gut Chip may help to mechanistically unravel the role of FZD9 in development and morphogenesis in the human intestine. Intestinal morphogenesis has been promoted in vitro previously by culturing intestinal organoid-derived epithelial cells in the presence of Wnt (Farin et al., 2012;, or Wnt-producing Paneth-like cells or mesenchymal cells (Valenta et al., 2016). Interestingly, the cell-elaborated Wnt factors, which normally contribute to generation of localized Wnt gradients in the intestinal crypt microenvironment, induce formation of crypt-like protrusion with intervening villus domains within the crenulated organoids, whereas addition of soluble Wnt often promotes the formation of round spheroids that fail to undergo villus morphogenesis (Farin et al., 2012;Sato et al., 2009). Thus, organoids require the presence of live Wnt-producing cells to create spatiotemporal gradients of Wnt (and possibly its inhibitors as well) that are required for histogenesis, which is consistent with the importance of mesenchymal clusters observed during intestinal development in vivo (Walton et al., 2012). 
In contrast, Caco-2 epithelial cells, which were originally isolated from a tumor, secrete Wnt molecules constitutively and exhibit autocrine activation of FZD9 receptors, which may contribute to their stem cell-like behaviors as well as their ability to undergo 3D morphogenesis in the absence of mesenchymal cells. However, similar observation when primary organoid-derived human intestinal epithelial cells were cultured on-chip suggests that the removal of Wnt antagonists and the flowdependent expression of Wnt receptors may play a pivotal role in the control of intestinal morphogenesis. Furthermore, when the organoid-derived epithelium is cultured in static, intestinal morphogenesis does not occur regardless of the presence of Wnt, R-spondin, noggin, and various growth factors (Ettayebi et al., 2016;Noel et al., 2017). Thus, our finding in this study and the prior reports strongly suggest that the demonstration of epithelial 3D morphogenesis may be predominantly driven by the removal of morphogen antagonists rather than by the addition of morphogen or the origin of cell source (e.g., cancerous vs. normal). The current study provides an exceptional example showing how spatial control of morphogen antagonists can orchestrate epithelial morphogenesis by establishing asymmetry across the epithelial cells and sustaining differences between their apical versus basolateral microenvironments during tissue growth. This finding is consistent with the past observation that spatiotemporal morphogen gradients and control of extracellular morphogen antagonists are as important as the type of morphogen for control of development (Kawano and Kypta, 2003). Although the Wnt gradient in vivo has been well characterized (Scoville et al., 2008), the gradient profile of DKK-1 in vivo has been insufficiently discussed (Du et al., 2013), suggesting that the power of the Organ Chip technology is that we can identify the complex mechanism by precisely controlling gradients and independently varying potential morphogenic parameters one at a time, in both time and space. This ability to control directional flow rates, fluid shear stresses, mechanical deformations, and asymmetric stimulation of the apical versus the basolateral side of a developing epithelium cannot be easily achieved in any other culture system or animal model. Thus, Organ Chips may offer a compelling in vitro tool to decipher cellular, molecular, and biophysical mechanisms of developmental control that underlie histogenesis of the intestine, as well as other epithelial tissues. In summary, we discovered that human intestinal morphogenesis is controlled by a transepithelial gradient of the Wnt antagonist DKK-1 and flow-dependent induction of the Wnt FZD9 receptor using a microfluidic Gut Chip as a model system. DKK-1 is secreted asymmetrically across the epithelium resulting in higher concentrations in the basal compartment, where the presence of basal fluid flow removes this inhibitor and promotes intestinal epithelial morphogenesis. The expression of FZD9 varies with flow rate, and the location and height of Caco-2 intestinal villi-like epithelium scale directly with its expression levels. This microfluidic experimental platform can be further expanded to incorporate other cell types (e.g., mesenchymal cells) to explore how they contribute to this morphogenic response, in addition to exploring other unknown questions in developmental biology that involve the establishment of chemical gradients or variations in the local physical microenvironment. 
Limitations of the Study Caco-2 intestinal epithelium formed in the Gut Chip might not sufficiently recapitulate the normal physiology observed in vivo. This study was performed in the absence of other cell types such as mesenchymal cells or vascular components that may support epithelial morphogenesis in the intestinal microenvironment. For instance, myofibroblasts in the lamina propria area are known to produce morphogens and interact with the intestinal epithelium to control Wnt signaling (Roulis and Flavell, 2016). Therefore, incorporation of other tissue-specific cell types can further improve the accuracy of the Gut Chip model to study morphogenesis of intestinal epithelium in vitro. In addition, identification of cross talk between DKK-1 and FZD9 remains as a future study. METHODS All methods can be found in the accompanying Transparent Methods supplemental file. SUPPLEMENTAL INFORMATION Supplemental Information can be found online at https://doi.org/10.1016/j.isci.2019.04.037. (B, Control) or the presence of recombinant Wnt antagonists (C, rDKK-1; D, rWIF-1; E, rsFRP-1; and F, rSoggy-1/DKKL-1). Wnt antagonists were flowed into the lower microchannel for 48 hr, then the setup was imaged by phase contrast microscopy. Bar, 100 µm. Figure S7. rDKK-1 inhibits the 3D epithelial growth of Caco-2 cells in a dosedependent manner. Related to Figure 4. A time course of the epithelial growth at different concentrations of rDKK-1 (100 and 500 ng/mL; Control at 0 ng/mL). The culture medium containing rDKK-1 was perfused into the basolateral microchannel at 72 hr since the seeding. Phase contrast images were taken at 24, 85, and 135 hr, respectively. The image taken at 135 hr was also provided in Figure 4C. Bar, 50 µm. Figure 5. A schematic (top) shows the 2D side view of a lower microchannel (Blue inlet, sky blue outlet), and the zoom-in inset configures the secretion of DKK-1 from the basal surface of a Caco-2 monolayer into the capillary microchannel. In the flow rate regime from 0 to 200 μL/hr, 2D simulations were performed at conditions as the DKK-1 production rate by Caco-2 cells is 42.1 pg/10 5 cells/hr ), number of cells was 5×10 5 cells, and the diffusion coefficient of DKK-1 was 1×10 -10 cm 2 /sec . Pores and their structure in the PDMS membrane were simplified by adding a diffusion layer ("Membrane layer" in the zoom-in inset) during simulation where no fluid advection took place. A color bar (bottom right) indicates the scaled range of DKK-1 concentrations (unit, mol/m 3 ). Figure S12. The concentration profile of DKK-1 in the basal microchannel of a Gut Chip. Related to Figure 5. The 3D volume represents the geometry of the lower microchannel. A heat map shows the concentration profile of DKK-1 in the capillary microchannel. The upper plane of this 3D heat map represents the basolateral space underneath the porous membrane that touches the basal membrane of a Caco-2 epithelium in the upper microchannel. The concentration of DKK-1 near the inlet of the microchannel was relatively low whereas its level increased by 10-fold or more in regions near the channel outlet. A 2D top-view of this heat map was used in Figure 5A. A color bar (right) indicates the scaled range of DKK-1 concentrations (unit, mol/m 3 ). Figure 6. Phase contrast microscopy was applied to take snapshots of villi at 30, 53, 77, 100, and 140 hr, respectively. Volumetric flow rates were applied at 15, 30, 70, 100, 150, and 200 μL/hr, respectively. Results of epithelial height were plotted in Figure 6A. Bar, 50 µm. 
Figure S14. FZD9 expression of the human organoid-derived epithelium cultured in a hybrid microfluidic chip. Related to Figure 6. Human colonoid-derived epithelial cells were cultured in the hybrid chip with ("Fluidic") or without ("Static") basolateral flow for 168 hr, and the expression of FZD9 was analyzed by immunofluorescence imaging. Images represent the projection view of the 3D reconstructed images. The intensity of FZD9 was quantified using Image J (N=3). Bar, 50 µm. **p<0.001. TRANSPARENT METHODS Microfabrication of a Device. A Gut Chip microdevice was fabricated by the soft lithography method as previously described . Briefly, a Gut Chip was fabricated with the upper and lower microchannel compartments of cured polydimethylsiloxane (PDMS, 15:1 (w/w) prepolymer:curing agent; Sylgard, Dow Corning). A Gut Chip has two parallel cell culture microchannels (1 mm wide × 10 mm long × 0.15 mm high) and two vacuum chambers (1.68 mm × 9.09 mm × 0.15 mm) besides the central cell channels. Each microchannel is separated by a PDMS wall (100 µm thick). A porous PDMS membrane (20 µm thick) containing an array of circular pores (10 µm diameter with 25 µm spacing) was produced as described . The layer-by-layer assembly of each PDMS compartment and a porous membrane was performed by incubating the setup at 80°C for an overnight after corona treatment (BD-20AC, Electro-Technic Products, Inc.). Gas-permeable silicone tubing (Tygon 3350, ID 1/32", OD 3/32", Saint-Gobain Performance Plastics) linked with a connector (hub-free stainless steel blunt needle, 18G; Kimble Chase) was inserted into the microchannels to supply cell culture medium ( Figure 1A, orange and blue arrows) or vacuum suction ( Figure 1A, white arrows). To test the effect of apical shear stress on the epithelial morphogenesis ( Figure 1F), a single channel microfluidic device was prepared by bonding the upper PDMS layer of a Gut Chip onto a flat PDMS layer via corona treatment bonding as previously described. To monitor the maximum epithelial growth ( Figure S1), we used an upper microfluidic layer with the same design of a Gut Chip but the modified height from 0.2 to 1 mm. A Transwell-insertable hybrid microfluidic device (Figures 2B middle,2C left,and S5) that can hold a Transwell insert (pore size, 0.45 µm; culture area, 0.33 cm 2 ) was fabricated by bonding the Transwell-insertable upper and the microfluidic lower (150 µm in height) layers through corona treatment. Cell Culture. A Caco-2BBE human intestinal epithelial cell line was purchased from the Harvard Digestive Disease Center. Conventional static cultures of Caco-2 cells were performed in a Transwell (pore size of a polyester membrane, 0.45 µm; culture area, 0.33 cm 2 ). The complete culture medium includes 20% (v/v) fetal bovine serum (FBS; Gibco), 100 units/mL penicillin, and 100 μg/mL streptomycin (Gibco) in the Dulbecco's Modified Eagle Medium (DMEM; Gibco) containing 4.5 g/L glucose and 25 mM HEPES. Complete medium was replenished every other day in both apical (AP) and basolateral (BL) sides until use. To prepare a conditioned medium ( Figure S3), we cultured Caco-2 cells in the Transwell (N=20), then collected the conditioned medium from the BL side at day 3 since the seeding. The collected conditioned medium was spun down (500× g, 5 min) to remove possible cell debris; then the supernatant was used. 
To carry out the Transwell perfusion test (Figure 2B, middle and 2C, left) or the diffusion test (Figure 2B, right), we used 3-week-cultured Caco-2 cells or 1-week-cultured human intestinal organoid cells on Transwell inserts. For the diffusion test (Figure 2B, right), a Transwell insert containing a pre-cultured Caco-2 monolayer was placed in the center of a 6" tissue culture dish (Corning) using two rectangular PDMS spacers (0.5 cm wide × 1 cm long × 1 cm high), and 70 mL of the complete culture medium (a 100-fold increase over the volume used in the 24-well-plate format) was added to the BL side. Incubation in the 6" dish culture was then continued for a further 120 hr. For the Transwell culture of human intestinal organoid-derived primary epithelial cells, 3D colonoids were dissociated into single cells by applying 500 μL of EDTA solution (0.5 mM; Alfa Aesar), breaking the Matrigel, collecting organoid pellets after centrifugation (100× g, 4°C, 5 min), and incubating with 1 mL of TrypLE (Gibco) at 37°C for 15 min. After filtering the cell suspension through a cell strainer (40 μm pores, Corning), dissociated organoid epithelial cells resuspended in organoid complete medium (final cell density, ~6×10⁶ cells/mL) were seeded into a Transwell insert. A monolayer of colonoid-derived epithelium was formed by replacing the culture medium every other day for up to a week. Microfluidic Cell Cultures. Prior to seeding Caco-2 or normal colonoid-derived epithelial cells, a Gut Chip was sterilized by flowing 70% (v/v) ethanol into the microchannels, completely dried in an 80°C oven, then exposed to ultraviolet light and ozone simultaneously (UVO Cleaner 342, Jelight Company Inc.) for either 40 min (for Caco-2 cells) or 1 hr (for organoid-derived epithelial cells). After the microchannels were coated with a mixture of type I collagen (30 μg/mL; Gibco) and Matrigel (300 µg/mL) in serum-free DMEM for either 1 hr (for Caco-2 cells) or 2 hr (for organoid-derived epithelial cells), cell suspension was introduced into the upper microchannel (final cell density, ~1.5×10⁵ cells/cm²). After cells adhered to the surface of the ECM-coated porous membrane, culture medium was perfused through the upper microchannel at 30 µL/hr (0.02 dyne/cm²) for up to 24 hr to form an intact cell monolayer; thereafter, the culture medium was perfused through both the upper and lower microchannels at the same flow rate. To provide peristalsis-like physical deformations on the cell monolayer, cyclic vacuum suction exerting 10% maximal cell strain at a frequency of 0.15 Hz was applied via a vacuum controller (FX5K Tension instrument, Flexcell International Corporation). To seed and grow organoid-derived primary epithelium, we followed the same protocol for preparing dissociated organoid-derived epithelial cells (See "Organoid culture"), seeded them into the pre-coated chip (final cell density, ~6×10⁶ cells/mL), then incubated the whole setup in a humidified CO2 incubator at 37°C for 3 hr. After cell attachment, we followed the same protocol described previously for the microfluidic culture. Assessment of Epithelial Morphogenesis. To test the effect of mechanical deformations on the 3D epithelial growth (Figure 1E), stretching motions were applied from 24 hr after seeding for up to 96 hr (+Str), whereas no stretching was applied in the control (−Str).
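The pairing of 30 µL/hr with 0.02 dyne/cm² quoted above can be reproduced with the parallel-plate approximation for the wall shear stress in a shallow rectangular channel, τ = 6µQ/(wh²). The snippet below evaluates this for the Gut Chip channel dimensions; the culture-medium viscosity is an assumed, not measured, value.

# Wall shear stress in a shallow rectangular microchannel (parallel-plate
# approximation): tau = 6 * mu * Q / (w * h^2).

mu = 7.8e-4          # Pa*s, assumed viscosity of culture medium at 37 C
w = 1.0e-3           # m, channel width
h = 0.15e-3          # m, channel height

for q_ul_per_hr in (15, 30, 70, 100, 150, 200):
    q = q_ul_per_hr * 1e-9 / 3600          # uL/hr -> m^3/s
    tau_pa = 6 * mu * q / (w * h ** 2)     # Pa
    tau_dyn = tau_pa * 10                  # 1 Pa = 10 dyne/cm^2
    print(f"{q_ul_per_hr:>4} uL/hr -> tau ≈ {tau_dyn:.3f} dyne/cm^2")

At 30 µL/hr this evaluates to about 0.017 dyne/cm², in line with the 0.02 dyne/cm² quoted in the protocol; the exact figure shifts with the assumed medium viscosity.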
To assess the cessation effect of fluid flow on the formation of villi-like structure (Figure 2A), we set out two independent setups with the cessation on the apical (Figure 2A, middle) or the basolateral flow (Figure 2A, right) during the entire microfluidic cultures up to 150 hr, then phase contrast imaging was performed. To assess the cessation of flow on the pre-grown villi-like epithelium ( Figure 3A), basolateral flow was ceased for up to 90 hr in the Caco-2 Gut Chip. Morphology of the epithelial structure was assessed by a phase contrast imaging. The population of proliferative cells was visualized and quantified by labeling the Ki67-positive cells. Nuclei staining was subsequently performed as a counterstaining for estimating the percentage of proliferative cells ( Figure 3B). The viability of the intestinal epithelium was assessed by performing a live/dead assay. A mixture of Calcein acetoxymethyl (AM) (4 µM) and ethidium homodimer-1 (8 µM) ( Figure 3C) diluted in PBS was perfused to the epithelium-grown Gut Chip at 30 µL/h in a 37 o C CO2 incubator for 1 hr. To evaluate the inhibitory effect of basolaterally secreted compounds in Figure S4, a double-layered Caco-2 culture (i.e., two Caco-2 monolayers are adherent on both sides of the porous membrane) was conducted by seeding Caco-2 cells onto the upper microchannel first, incubating the setup for 45 min in a CO2 incubator for the attachment, subsequently seeding dissociated Caco-2 cells onto the lower microchannel, incubating the flippedover setup on top of a PDMS spacer in a CO2 incubator for 45 min again, then flipping it over to run microfluidic cultures. A hybrid microfluidic device was used to introduce basolateral flow (30 µL/hr) in the Transwell containing a flat monolayer of Caco-2 or organoid-derived epithelial cells (Figures 2B middle, 2C left, and S14) by flowing fresh culture medium for 48 hr but the apical chamber was maintained static. For monitoring growth profile at various volumetric flow rates ( Figures 6A and S13), we perfused culture medium into both upper and lower microchannels in the Gut Chip at flow rates of 15, 30, 70, 100, 150, and 200 µL/hr without mechanical deformations. The height of intestinal epithelium in the Gut Chip was monitored and measured by phase contrast or DIC microscopy at each given time point. To measure the height of villi, z-position was tracked using a laser scanning confocal microscopy (Leica SP5 X MP DMI-6000), at the anchorage of the basement membrane to the villous tip or directly measured using cross-sectional images. Computational Simulation. A computational model was developed using COMSOL Multiphysics 4.0 based on a simplified geometry of a Gut Chip that includes the fluidic channels and the membrane surface on which the cells reside. The model included calculations for laminar flow through the channels and the transport of diluted chemical species from the cell surface due to diffusion and fluid convection. For simulating the fluid dynamics, Navier-Stokes equation assuming incompressible fluid was used, and convection of fluid was also included. As a boundary condition, the bottom of the cell channel and the membrane area were set as no-slip condition. For simulating the transport of DKK-1 and Wnt, Fick's second law was applied. The flux of the production of each molecule by the epithelium was applied based on the specific production rate (unit, mol/m 3 ; ), where the cell number per unit area was experimentally calculated. 
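A drastically simplified, one-dimensional stand-in for the transport part of this model can be written in a few lines: treat the basolateral channel as a plug-flow channel of length L that receives a uniform DKK-1 source along its length, and solve the steady advection-diffusion balance on a grid. This is only an illustration of why the concentration builds up toward the outlet; it is not the COMSOL model, and the plug-flow and uniform-source assumptions are simplifications introduced here.

import numpy as np

# 1D steady advection-diffusion with a distributed source along the channel:
#   u * dc/dx - D * d2c/dx2 = s,   c(0) = 0,   dc/dx(L) = 0 (outflow)
L, n = 10e-3, 200                        # channel length (m) and grid points
h_ch, w_ch = 0.15e-3, 1.0e-3             # channel height and width (m)
Q = 30e-9 / 3600                         # 30 uL/hr in m^3/s
u = Q / (h_ch * w_ch)                    # plug-flow velocity (m/s), assumed uniform
D = 1e-14                                # m^2/s (= 1e-10 cm^2/s, from the legend)

secretion = 2.1e-10 / 3600               # g/s (42.1 pg/1e5 cells/hr x 5e5 cells)
s = secretion / (L * w_ch * h_ch)        # volumetric source, g/(m^3 s), spread uniformly

x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
A = np.zeros((n, n))
b = np.full(n, s)
A[0, 0], b[0] = 1.0, 0.0                 # inlet: c = 0
for i in range(1, n - 1):                # upwind advection, central diffusion
    A[i, i - 1] = -u / dx - D / dx**2
    A[i, i] = u / dx + 2 * D / dx**2
    A[i, i + 1] = -D / dx**2
A[-1, -2], A[-1, -1], b[-1] = -1.0, 1.0, 0.0   # outlet: zero gradient
c = np.linalg.solve(A, b)                # concentration in g/m^3

print(f"outlet concentration ≈ {c[-1] * 1e3:.1f} ng/mL")
print(f"outlet vs 10% of channel length: x{c[-1] / c[n // 10]:.0f}")

Because advection dominates here (the Péclet number is enormous for D ≈ 10⁻¹⁰ cm²/s), the concentration grows essentially linearly from inlet to outlet, which is the qualitative behaviour reported for the full 2D simulation.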
The porosity of a PDMS porous membrane (~10%) was reflected in the simulation. The concentration of DKK-1 and Wnt in the inlet of the lower microchannel was set as 0 mol/m³. The flux of each molecule from the bottom PDMS layer of the basolateral microchannel was set as zero. Mesh geometry and solver configurations were refined for solution convergence. Morphological Analysis. Microscopic images of Caco-2 or organoid-derived primary epithelial cells, in the form of monolayers or 3D structures grown in either Transwells or Gut Chips, were taken with a phase contrast microscope (Axiovert 40CFL, Zeiss) equipped with a 20× objective (0.30 Ph1; Zeiss) and a Moticam 2500 camera (Motic China Group Co., Ltd.) or a differential interference contrast (DIC) microscope (Axio Observer.Z1, Zeiss) coupled to a 20× objective (LD PlnN, 0.4 DICII; Zeiss) and a digital camera (1000 × 1000 pixels, 8 µm pixel size; EM-CCD C9100, Hamamatsu). Images were taken and processed using imaging software (Motic Images Plus 2.0), MetaMorph (Molecular Devices), ZEN Pro (Zeiss), or ImageJ. The villi-like microarchitecture was detected by laser scanning confocal microscopy using a 25× water immersion objective (NA 0.95) linked to the laser sources (a diode laser with 405 nm, a white light laser with 489-670 nm, or an argon laser with 488 and 496 nm) and detectors (a photomultiplier tube or HyD). Single-plane or multi-plane Z-stacked confocal fluorescence images were analyzed by Leica imaging software (LAS AF; Leica Microsystems). To obtain horizontal, vertical cross-sectional, or angled views, deconvolution (AutoQuant X, Version X3.0.1; Media Cybernetics Inc.) followed by a 2D projection process (IMARIS 7.6 F1 workstation, Bitplane Scientific Software) was performed on Z-stacked confocal fluorescence images. For immunofluorescence microscopic analysis, epithelial cells grown in either a Transwell or a Gut Chip were fixed with 4% (w/v) paraformaldehyde (Electron Microscopy Science), permeabilized with 0.3% (v/v) Triton X-100 (Sigma), blocked with 2% (w/v) bovine serum albumin (BSA; Sigma), and washed with PBS (Ca²⁺- and Mg²⁺-free; Gibco). To fluorescently label Ki67 (Figures 3B and S9) and FZD9 (Figures 6C and S14), primary antibodies (Abcam) were diluted in 2% BSA in PBS and applied to the cells. When staining was performed in the Gut Chip, the solution was added to both microchannels; the upper microchannel was then perfused at 30 µL/hr at room temperature for 3 hr, and the chip was incubated at 4°C overnight. For the immunostaining in the hybrid chip, the primary antibody solution was added to the insert and incubated at room temperature for 1 hr. Secondary antibodies (DyLight 488-conjugated goat polyclonal anti-rabbit IgG, Abcam), also diluted in 2% BSA in PBS, were added to the microchannel, protected from light, at room temperature for 3 hr. For the counterstaining, samples were incubated with 4',6-diamidino-2-phenylindole dihydrochloride (DAPI; final concentration of 1.5 µM; Molecular Probe) to visualize nuclei (Figures 1B, 2C, and S10). For visualizing the F-actin, fluorescein isothiocyanate (FITC)-phalloidin (10 µM, final concentration; Sigma) was mixed with the DAPI solution and simultaneously used to stain the intestinal epithelium (Figures 2C and S10). The lesion area was defined as the percentage of the measured cell culture area in which the morphology of the 3D epithelial structure was destroyed by the Wnt antagonistic reactions, relative to the entire available cell culture area in the Gut Chip microdevice (~0.11 cm²).
Lesion area was estimated using the ImageJ software. The height of the intestinal epithelium was determined using either DIC microscopy or confocal immunofluorescence microscopy in conjunction with Calcein AM (5 µM, final concentration) staining, by measuring the Z distance of the 3D epithelium from the porous membrane to the tip of the villi-like structure (Figure S1). The height of a Caco-2 monolayer was measured by analysis of XZ vertical cross-sectioned images using ImageJ (Figure S10). Epithelial height represents the average maximum height of the entire epithelial layer, measured as the full height from the ECM-coated porous membrane to the villus tip. Live cell imaging of Caco-2 epithelium was carried out by incubating the cells with culture medium containing cell-permeant Calcein AM (5 µM) perfused (30 µL/hr) into the upper microchannel for 30 min at 37°C. After washing with fresh culture medium (30 µL/hr) for 10 min, confocal microscopy was applied to take Z-stacks (Figure S1). A contour of the 3D epithelial structure was visualized using a fluorescent dye (5 µL/mL; CellMask, Molecular Probe) targeting the plasma membrane of the epithelium (Figures 1C and 5C). The effect of FZD9 neutralization on the 3D morphogenesis was assessed by introducing anti-FZD9 antibodies (20 μg/mL; sodium azide-free, Abcam) to the Caco-2 epithelium grown in a Gut Chip device. Culture medium containing antibodies was perfused through both the upper and lower microchannels at 30 μL/hr for up to 24 hr. Morphology of the 3D structure was monitored using a DIC microscope (DMi8, Leica). Quantification of secreted DKK-1. DKK-1 secreted by the Caco-2 cells was quantified using an enzyme-linked immunosorbent assay (ELISA) kit that targets human DKK-1 (Quantikine ELISA, R&D Systems) according to the manufacturer's protocol. Culture medium collected from both the apical and basolateral sides of a Transwell containing a Caco-2 cell monolayer was used for the DKK-1 quantification. Assessment of epithelial barrier function. The barrier function of the intestinal epithelial layer was measured by monitoring transepithelial electrical resistance (TEER). TEER was measured using Ag/AgCl electrodes connected to an ohm meter (87V Industrial Multimeter, Fluke Corporation). Normalization of TEER was performed following the equation TEER_norm = (Ω_t − Ω_blank)/(Ω_0 − Ω_blank), where Ω_t is the resistance at the measured time point since the onset of the experiment, Ω_blank is the resistance without the epithelium, and Ω_0 is the resistance at the onset time point. Genetic Analysis. The expression profile of genes related to the human Wnt pathway was analyzed by quantitative real-time polymerase chain reaction (qPCR), following the protocol provided by the manufacturer for preparing cDNA (Cells-to-cDNA II Kit, Life Technologies) and running multiplexed qPCR (TaqMan Array Human WNT Pathway 96-well Plate, ThermoFisher Scientific) that targets 92 pre-arrayed Wnt-related genes (including SERP5, SLC9A3R1, TCF7, TCF7L1, TCF7L2, TLE1, TLE2, TLE3, TLE4, TLE6, WIF1, WISP1, WNT1, WNT10A, WNT10B, WNT11, WNT16, WNT2, WNT2B, WNT3, WNT3A, WNT4, WNT5A, WNT5B, WNT6, WNT7A, WNT7B, WNT8A, WNT8B, and WNT9A). As a control, Caco-2 cells grown in a static Transwell for two weeks were used. As test groups, Caco-2 cells cultured at three different flow rates (30, 100, and 200 µL/hr) without mechanical deformations were independently harvested.
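The TEER normalization described above is a one-line calculation, but the blank subtraction is easy to get wrong; a small helper like the following makes the convention explicit. It is a generic sketch, and the example resistance values are made up for illustration only.

def normalized_teer(ohm_t, ohm_0, ohm_blank):
    """Blank-corrected TEER at time t, relative to the value at the onset.

    ohm_t     -- resistance measured at time t
    ohm_0     -- resistance measured at the onset of the experiment
    ohm_blank -- resistance of the setup without an epithelium
    """
    return (ohm_t - ohm_blank) / (ohm_0 - ohm_blank)

# Hypothetical example values (ohms), for illustration only
print(normalized_teer(ohm_t=950.0, ohm_0=600.0, ohm_blank=150.0))  # ~1.78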
Fold increase of each target gene was estimated by comparing the gene expression level in the test groups (i.e., Gut Chips) to the one in control (i.e., Transwells). Two independent batches were performed with two technical replicates for qPCR analysis. Due to the limited sample amount, we merged two technical replicates in a single tube. Results were analyzed using the standard R statistical software. Quantification and statistical analysis. All results and error bars in this article are represented as a mean ± standard error (S.E.M.). For statistical analyses, a one-way analysis of variance (ANOVA) with Tukey-Kramer multiple comparisons test was performed using GraphPad InStat software, version 3.10 (GraphPad Software Inc.). Differences between groups were considered statistically significant when p<0.05. Microscopic images were recorded at more than ten different random locations in either Transwells or Gut Chips from at least two independent replicates at each time point, and representative images were displayed in the Figures.
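For the analyses described in this section, a minimal script might look like the following. It is a generic sketch, not the authors' actual R pipeline: the measurements are placeholders, the fold increase is computed as a simple ratio of expression levels relative to the Transwell control, and the one-way ANOVA with a Tukey post-hoc test uses scipy and statsmodels.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder epithelial-height measurements (um) for three flow rates
groups = {
    "30 uL/hr":  np.array([110., 125., 118., 131.]),
    "100 uL/hr": np.array([160., 172., 155., 168.]),
    "200 uL/hr": np.array([240., 228., 251., 237.]),
}

f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.2e}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# Fold increase of a target gene relative to the Transwell control
# (simple ratio of normalized expression levels; placeholder numbers)
control_level, chip_level = 1.0, 3.4
print(f"fold increase ≈ {chip_level / control_level:.1f}")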
2019-05-17T13:55:28.091Z
2019-05-03T00:00:00.000
{ "year": 2019, "sha1": "e2973b36bc582fcc684c4e88951b6a3e565a7b36", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2589004219301348/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e2973b36bc582fcc684c4e88951b6a3e565a7b36", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
119198147
pes2o/s2orc
v3-fos-license
Geometric methods for the most general Ginzburg-Landau model with two order parameters The Landau potential in the general Ginzburg-Landau theory with two order parameters and all possible quadratic and quartic terms cannot be minimized with straightforward algebra. Here, a geometric approach is presented that circumvents this computational difficulty and allows one to get insight into many properties of the model in the mean-field approximation. Introduction. The Ginzburg-Landau (GL) theory offers a remarkably economic description of phase transitions associated with breaking of some symmetry [1]. This breaking is described with an order parameter ψ: the high-symmetry phase corresponds to ψ = 0, while the low-symmetry phase is described by ψ ≠ 0. In order to find when a given system is in the high- or low-symmetry phase, one constructs a Landau potential that depends on the order parameter, and then finds its minimum. Its classic form is V(ψ) = −a|ψ|² + (b/2)|ψ|⁴ (1). Higher-order terms o(|ψ|⁴) are usually assumed to be negligible. The values of the coefficients a and b and their dependence on temperature, pressure, etc. can be either calculated from a microscopic theory, if it is available, or considered as free parameters in a phenomenological approach. The phase transition associated with the symmetry breaking takes place when an initially negative a becomes positive, and the minimum of the potential (1) shifts from zero to |ψ| = √(a/b). Many systems are known in which two competing order parameters (OPs) coexist. Among them are general O(m) ⊕ O(n)-symmetric models, [2], models with two interacting N-vector OPs with O(N) symmetry, [3]; spin-density waves in cuprates, [4]; SO(5) theory of antiferromagnetism and superconductivity, [5]; multicomponent, [6], spin-triplet p-wave, [7], and two-gap, [8,9], superconductivity, with its application to magnetism in neutron stars, [10]; two-band superfluidity, [11]; various mechanisms of spontaneous breaking of the electroweak symmetry beyond the Standard Model such as the two-Higgs-doublet model (2HDM), [12]. To describe such a situation within the GL model, one constructs a Landau potential similar to (1), which depends on two order parameters, ψ_1 and ψ_2. Coefficients of this potential, a_i and b_i, can be considered independent, although in each particular application they might obey specific relations. One thus arrives at the most general two-order-parameter (2OP) GL model with quadratic and quartic terms. A natural question arises: what is the ground state of the most general 2OP GL model? A rather surprising fact is that this question cannot be answered by a straightforward calculation. Differentiating the Landau potential with respect to ψ_i leads to a system of coupled algebraic equations that cannot be solved explicitly. In this Letter we show that despite this computational difficulty, one can still learn much about the most general 2OP GL model. Namely, one can study the number and the properties of the minima of the Landau potential, classify possible symmetries and study when and how they are broken. In short, one can describe the phase diagram of the model, at least in the mean-field approximation, without explicitly minimizing the potential. There exists, in fact, an extensive literature dating back to the 1970s on minimization of group-invariant potentials with several OPs with the aid of stratification of the orbit space, see e.g. [1,13].
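As a quick sanity check on the single-order-parameter case, one can minimize the classic potential quoted above numerically and confirm that the minimum sits at |ψ| = √(a/b) once a becomes positive. The Python sketch below does this for a few values of a; the specific numerical values are arbitrary and purely illustrative.

import numpy as np
from scipy.optimize import minimize_scalar

b = 2.0  # quartic coefficient (arbitrary positive value)

def V(psi_abs, a):
    # classic single-order-parameter Landau potential, V = -a|psi|^2 + (b/2)|psi|^4
    return -a * psi_abs**2 + 0.5 * b * psi_abs**4

for a in (-1.0, 0.5, 2.0):
    res = minimize_scalar(V, args=(a,), bounds=(0.0, 10.0), method="bounded")
    expected = np.sqrt(a / b) if a > 0 else 0.0
    print(f"a = {a:+.1f}:  |psi|_min ≈ {res.x:.4f}   (expected {expected:.4f})")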
Here we show that in the case of two order parameters realizing the same group representation the analysis can be extended much farther than in the general case, with important physical consequences. For a particular application of this formalism to the 2HDM, see [14,15]. The formalism. Let us focus on the simplest case when two OPs ψ 1 and ψ 2 are just complex numbers. The most general quadratic plus quartic Landau potential is It contains 13 free parameters: real a 1 , a 2 , b 1 , b 2 , b 3 and complex a 3 , b 4 , b 5 , b 6 . For the illustration of the main idea, we place no restriction on |ψ i | from above. Note that potential (2) contains quartic terms that mix ψ 1 and ψ 2 . Such terms are usually absent in particular applications of the 2OP GL model (for a rare exception, see [9]), but in the approach presented here it is essential that all possible terms are included from the very beginning. Once potential (2) is written, the physical nature of OPs becomes irrelevant. One can consider them as components of a single complex 2-vector Φ = (ψ 1 , ψ 2 ) T . The key observation is that the most general potential (2) keeps its generic form under any regular linear transformation between ψ 1 and ψ 2 : Φ → Φ ′ = T · Φ, T ∈ GL(2, C). It can be also accompanied with a suitable transformation of the coefficients a i , b i , so that one arrives at exactly the same potential as before. Thus, the problem has some reparametrization freedom with the reparametrization group GL(2, C). Among 13 free parameters, only 6 play crucial role in shaping the phase diagram of the model, while the other 7 just reflect the way we look at it. Let us now introduce a four-vector r µ = (r 0 , Here, index µ = 0, 1, 2, 3 refers to components in the internal space and has no relation with the space-time. Multiplying ψ i by a common phase factor does not change r µ , so each r µ parametrizes a U (1)-orbit in the ψ ispace. Since the Landau potential is also U (1)-invariant, it can be defined in this 1 + 3-dimensional orbit space. The SL(2, C) ⊂ GL(2, C) group of transformations of Φ induces the proper Lorentz group SO(1, 3) of transformations of r µ . This group includes 3D rotations of the vector r i as well as "boosts" that mix r 0 and r i , so the orbit space gets naturally equipped with the Minkowski space structure. Since r 0 > 0 and r µ r µ ≡ r 2 0 − r 2 i = 0, the orbit space is given by the "forward lightcone" LC + in the Minkowski space. All this allows us to rewrite (2) in a very compact form: with One usually requires that the quartic term of the potential increases in all directions in the OP-space. In the orbit space, this was proved in [14] to be equivalent to the statement that B µν is diagonalizable by an SO(1, 3) transformation and after diagonalization it takes form Since r µ r µ = 0, the matrices B µν andB µν = B µν −Cg µν are equivalent. This degree of freedom in the definition of B µν shifts all the eigenvalues by a common constant. Finding eigenvalues B µν explicitly in terms of a i , b i requires solution of a fourth-order characteristic equation, which constitutes one of the computational difficulties of the straightforward algebra. We reiterate that in our analysis we never use these explicit expressions. The analysis relies only on the fact that the eigenvalues are real and satisfy (6). Minima of the Landau potential. Let us first find how many extrema the potential (4) can have in the orbit space. 
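The orbit-space construction is easy to explore numerically. Assuming the standard explicit form of the map, r^μ = Φ†σ^μΦ with σ^0 the identity and σ^i the Pauli matrices (the displayed definition is not reproduced in the text above, so this form is an assumption consistent with the surrounding discussion and with the analogous definition of ρ^μ used later), the sketch below draws random configurations Φ = (ψ_1, ψ_2) and verifies that r_0 > 0 and r_μ r^μ = 0, i.e. that the image of the map is the forward lightcone.

import numpy as np

# sigma^mu, mu = 0..3: identity plus the three Pauli matrices
sigma = [np.eye(2),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

def r_vector(phi):
    """Orbit-space four-vector r^mu = Phi^dagger sigma^mu Phi (assumed form)."""
    return np.array([np.real(np.conj(phi) @ s @ phi) for s in sigma])

rng = np.random.default_rng(0)
for _ in range(5):
    phi = rng.normal(size=2) + 1j * rng.normal(size=2)   # Phi = (psi_1, psi_2)
    r = r_vector(phi)
    print(f"r0 = {r[0]:8.4f},   r_mu r^mu = {r[0]**2 - np.sum(r[1:]**2):+.2e}")

Every draw lands on the forward lightcone (r_0 > 0 with r_μ r^μ = 0 up to rounding error), which is the geometric fact that the rest of the construction builds on.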
Since extrema lie on the surface of LC + , we use the Lagrange multiplier method to arrive at the following simultaneous equations: Here, r µ labels the position of an extremum. This system cannot be solved explicitly in the most general case, however one can establish how many extrema a given potential has. To find it, we rewrite r i = r 0 n i , where n i is a unit 3D vector; then (7) becomes Assume for simplicity that A 0 > 0 and B 1 < B 2 < B 3 . The l.h.s. of (8) at fixed r 0 and all unit vectors n i parametrizes an ellipsoid with semiaxes A 0 − (B 0 − B i ) r 0 . As r 0 increases from zero to infinity, this ellipsoid first shrinks, then grows, collapsing at r to planar ellipses. One can see that during these transformations for r 0 = [0, ∞) it sweeps at least once the entire Minkowski space and at least twice the interior of the sphere of radius A 0 . In addition, there are two cusped regions, such as shown in Fig. 1, whose interior is swept twice more. So, by checking whether A i lies inside these regions, one can get the number of solutions of (8) without finding them explicitly. In the 1 + 3-dimensional space of A µ , these 3D regions serve as bases of corresponding conical regions with different numbers of extrema of the potential. Namely, at least one non-trivial solution of (7) exists, if A µ lies outside the past lightcone LC − (otherwise the global minimum of the potential is at the origin, ψ i = 0). If A µ lies inside the future lightcone LC + , then there are at least two non-trivial solutions. If in addition A µ lies inside one or both caustic cones, then two additional extrema per cone exist. In total, the potential can have up to six non-trivial extrema in the orbit space. This fact was also found independently in [16]. The above construction does not distinguish a minimum from a saddle point (with condition (6), there are no non-trivial maxima in the orbit space). A straightforward method for finding a minimum, which consists in checking that the hessian eigenvalues are all non-negative, is again of little use here. Instead, one can still use geometry to study the properties of the minima. As described above, physically realizable points of the orbit space lie on LC + . Nevertheless, let us consider expression (4) in the entire Minkowski space. Let us define an equipotential surface M V as a set of all vectors r µ with the same value of V . These equipotential surfaces do not intersect, are nested into each other, and have very simple geometry: they are second-order 3-dimensional manifolds (3-quadrics). Since LC + is also a specific 3quadric, finding points in the orbit space with the same value of V amounts to finding intersections of these two 3-quadrics. In particular, to find a local minimum of the potential in the orbit space, one has to find an equipotential surface that touches LC + in an isolated point (we say that two surfaces "touch" if they have parallel normals at the intersection points). The global minimum corresponds to the unique equipotential surface that only touches but never intersects LC + . The fact that the search for the global minimum is reformulated in terms of contact of two 3-quadrics leads to several Propositions listed below. Let us now find how many among the extrema are minima. Let us fix B µν and move A µ continuously in the parameter space. We first note that the signature of the hessian can change only when the total number of extrema changes. 
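One way to see why the minimization becomes tractable in the orbit space is to eliminate r_0 analytically for each direction n and reduce the problem to a two-dimensional search over the unit sphere. The sketch below does this for a potential of the assumed compact form V = −A·r + ½ r·B·r restricted to the forward lightcone r = r_0(1, n); since the explicit component form of A_μ and B_μν in terms of the a_i, b_i is not reproduced in the text, random values stand in for them purely for illustration, with B taken positive definite so that the quartic part grows in all directions.

import numpy as np

rng = np.random.default_rng(2)

A = rng.normal(size=4)
M = rng.normal(size=(4, 4))
B = M @ M.T + 0.1 * np.eye(4)        # positive definite quadratic form (illustrative)

def V_on_cone(r0, n):
    r = r0 * np.concatenate(([1.0], n))
    return -A @ r + 0.5 * r @ B @ r

# For a fixed direction n, V is a parabola in r0 >= 0, so r0 can be eliminated
# analytically: r0* = max(a, 0)/b with a = A.(1,n) and b = (1,n).B.(1,n) > 0.
def reduced_V(n):
    e = np.concatenate(([1.0], n))
    a, b = A @ e, e @ B @ e
    return (-0.5 * a**2 / b if a > 0 else 0.0), max(a, 0.0) / b

# Brute-force search over directions on the unit sphere
best = (np.inf, None, None)
for theta in np.linspace(0.0, np.pi, 200):
    for phi in np.linspace(0.0, 2 * np.pi, 400):
        n = np.array([np.sin(theta) * np.cos(phi),
                      np.sin(theta) * np.sin(phi),
                      np.cos(theta)])
        v, r0 = reduced_V(n)
        if v < best[0]:
            best = (v, r0, n)

v_min, r0_min, n_min = best
print(f"global minimum V ≈ {v_min:.4f} at r0 ≈ {r0_min:.4f}, n ≈ {np.round(n_min, 3)}")
print("consistency check:", np.isclose(V_on_cone(r0_min, n_min), v_min))

The point of the sketch is the dimensional reduction: once r_0 is eliminated, locating the ground state is a search over directions on a 2-sphere rather than an algebraic minimization over two complex fields.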
A saddle point cannot simply become a minimum; it can only split into several extrema, one of them being a minimum, or it can merge with other extrema to produce a minimum. Therefore, the conical 3-surfaces described above (LC − , LC + , caustic cones) separate regions in the A µ -space with a definite number of minima. One can then check a representative point A µ = (A 0 , 0, 0, 0) (in the basis where B µν is diagonal) of the innermost region in the A µ space and find that there are two distinct minima in this case. This proves the following Proposition: Proposition 1. The most general quadratic plus quartic potential with two order parameters can have at most two distinct local minima in the orbit space. Symmetries and their violation. The potential can have an additional explicit symmetry, i.e. it can remain invariant under some transformations of ψ i (or coefficients) alone. If the position of the global minimum is also invariant under the same group of transformations, we say that the symmetry is preserved; otherwise, we say that the explicit symmetry is spontaneously violated. In most applications, the Landau potential does possess some explicit symmetry, so whether it is preserved or violated can have profound physical consequences. An explicit symmetry corresponds to such a map of the Minkowski space that leaves invariant, separately, B µν r µ r ν and A µ r µ . In a general situation, it might be far from evident that the potential has any explicit symmetry. The following criterion helps recognize the presence of a hidden explicit symmetry and tells what symmetry it is: Suppose that the potential is explicitly invariant under some transformations of r µ . Let G be the maximal group of such transformations. Then: (a) G is non-trivial if and only if there exists an eigenvector of B µν orthogonal to A µ ; (b) G is one of the following groups: The proof will be given in [17] (see also [18] for a similar statement in 2HDM). In the case of a discrete symmetry the following Propositions can be easily proved: Proposition 3. The maximal violation of any discrete explicit symmetry consists in removing only one Z 2 factor: Proposition 4. For any explicit discrete symmetry, minima that preserve and spontaneously violate this symmetry cannot coexist. Both Propositions follow from Proposition 1 and the fact that the set of all minima preserves the explicit symmetry group G. If a discrete symmetry is spontaneously violated, then there are two generate minima in the orbit space. One can also prove the converse, i.e. the two degenerate minima can arise only via spontaneous violation of a discrete symmetry of the potential, [15]. Thus, the criterion for the spontaneous violation of a discrete symmetry is that A µ lies inside a caustic cone associated with the largest eigenvalue. For a concrete example, suppose that all eigenvalues of B µν are distinct and that A 3 = 0, while other components A i = 0. This potential has an explicit Z 2 symmetry generated by reflections of the third coordinate. The global minimum spontaneously violates this symmetry (i.e. r 3 = 0), if B 3 > B 1 , B 2 and The proof is based on the "shrinking ellipsoid" construction described above and will be given in detail in [17]. Local order parameters. In this Letter we have illustrated the idea using the global OPs ψ i . The approach can be easily extended to models, where OPs are defined locally, ψ i (x). 
In this case, one considers the free-energy functional F [ψ i ] = d 3 x[K(ψ i ) + V (ψ i )] with kinetic term K and potential V . In the general model the kinetic term must include all quadratic combinations of the gradient terms: where D is either ∇ or the covariant derivative. It can be rewritten in the reparametrization invariant form K = K µ ρ µ with ρ µ ≡ ( DΦ) † σ µ ( DΦ) and K µ defined in terms of κ i similarly to A µ defined in terms of a i . The presence of K µ leads only to minor complications of the above analysis. All the conclusions about the number of extrema and minima remain unchanged, however one should now distinguish symmetries of the potential and of the entire free energy functional, see [17] for details. Two local OPs also lead to the existence of quasitopological solitons. This possibility has been known for some time; for example, in [19], a soliton in the onedimensional two-band superconductor with a simple interband interaction term was described. Such a soliton corresponds to the relative phase between the two condensates that changes continuously from zero to 2π as x goes from −∞ to +∞, and it is stable against small perturbations. The general origin of such quasitopological solitons is obvious from the above construction. The orbit space of all non-zero configurations of OPs is the forward lightcone without the apex, which is homotopically equivalent to a 2-sphere S 2 . Depending on the exact shape of the potential, it can support either closed linear paths, which correspond to domain walls, or closed 2-manifolds, which corresponds to strings. Multicomponent order parameters. In many physical situations one encounters multicomponent OPs. Examples include 2HDM, superfluidity in 3 He, spin-density waves, etc. The formalism developed here works also for these cases. Just to mention some characteristic features, leaving a detailed discussion for [17], we note that SU (N )-symmetric potential depends on Nvectors only via combinations (ψ † i ψ j ). Since in general (ψ † 1 ψ 2 )(ψ † 2 ψ 1 ) = |ψ 1 | 2 |ψ 2 | 2 , one gets a new term in the potential (2) with its own coefficient. The definition of r µ remains the same, but r µ r µ ≥ 0, so r µ can lie not only on the surface, but also in the interior of LC + . This makes definition of B µν unique, fixes its eigenvalues, and depending on their signs, one has to consider separately several cases. Modifications to the overall results are minor, see [14,15] for a 2HDM analysis. One obtains a new phase, with r µ lying inside LC + , which has differ-ent symmetry properties (in 2HDM it corresponds to a charge-breaking vacuum). One can easily formulate conditions when it is the global minimum of the model, so the phase diagram remains equally tractable in this case. In conclusion, we considered the most general Ginzburg-Landau model with two order parameters, including all possible quadratic and quartic terms in the Landau potential. Since the minimization of the potential cannot be done explicitly, we developed a geometric approach based on the reparametrization properties of the model and used it to study the ground state of the model. Future research should include dynamics of the fluctuations above the ground state, corrections to the potential beyond the quartic term, renormalization group flow, as well as modifications of the results at finite temperature and in the presence of external fields. I thank Ilya Ginzburg for useful comments. 
This work was supported by FNRS and partly by grants RFBR 05-02-16211 and NSh-5362.2006.2.
2008-01-26T16:49:09.000Z
2008-01-26T00:00:00.000
{ "year": 2008, "sha1": "2eb4ef764943165f6cdc0c30af701327921ef0d8", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0801.4084", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2eb4ef764943165f6cdc0c30af701327921ef0d8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
16333685
pes2o/s2orc
v3-fos-license
Interaction of an atom with layered dielectrics We determine the energy-level shift experienced by a neutral atom due to the quantum electromagnetic interaction with a layered dielectric body. We use the technique of normal-mode expansion to quantize the electromagnetic field in the presence of a layered, non-dispersive and non-absorptive dielectric. We explicitly calculate the equal-time commutation relations between the electric field and vector potential operators. We show that the commutator can be expressed in terms of a generalized transverse delta-function and that this is a consequence of using the generalized Coulomb gauge to quantize the electromagnetic field. These mathematical tools turn out to be very helpful in the calculation of the energy-level shift of the atom, which can be in its ground state or excited. The results for the shift are then analysed asymptotically in various regions of the system's parameter space -- with a view to providing quick estimates of the influence of a single dielectric layer on the Casimir-Polder interaction between an atom and a dielectric half-space. We also investigate the impact of resonances between the wavelength of the atomic transition and the thickness of the top layer. I. INTRODUCTION The question of the interaction between a neutral atom and a macroscopic dielectric body, once of purely academic interest, has recently been promoted to a real-life physics problem thanks to the rapid developments in nanotechnology and experimental techniques. It is no longer the case that this interaction, the so-called Casimir-Polder interaction, is a tiny effect that can be ignored in all practical situations. Instead, on the length scales at which nanotechnology nowadays operates, dispersion forces, as they are also called, become significant and may appreciably influence miniaturized physical systems. Many of the current ambitions of cold-atom physics towards quantum computation and a variety of nanotechnological applications involve the trapping and accurate guiding of single atoms above dielectric substrates, so-called atom chips. With these, the nearby environment of a trapped atom usually consists of a complicated array of inhomogeneous dielectrics. The questions then arising are: what are the magnitudes of the Casimir-Polder forces felt by the atom, and can one possibly engineer the types and shapes of surrounding materials either to minimize unwanted dispersion forces or to make them optimally contribute to the trapping or guiding? In order to investigate such possibilities one needs to go beyond simple featureless geometries and ground-state atoms and gain flexibility. The perhaps least sophisticated but still interesting example to study in this context is to consider a neutral atom, possibly excited, above a layered dielectric half-space, cf. Fig. 1. If the atom is in its ground state, then the Casimir-Polder force is always attractive for material surfaces with refractive indices greater than 1. In such a case it is desirable to derive simple analytical formulae that would allow one to obtain quick estimates of the magnitudes of the forces involved in terms of the optical properties of the layer and the substrate [1]. On the other hand, if the atom is in its excited state, then, as is widely recognised [2], the potential acquires an oscillatory contribution that can result in a repulsive force.
Additionally, the presence of the layer creates the possibility of a resonance between the wavelength of the atomic transition and the thickness of the layer, which could lead to a suppression or enhancement of the interaction. There exist a variety of theoretical approaches devised to study the Casimir-Polder interaction (see e.g. [3] for a recent list of references) but perhaps the most successful ones being the linear response theory [4] and phenomenological macroscopic QED [5]. By using linear response theory [4] and expressing the field susceptibilities in terms of Fresnel reflection coefficients [2,6], one can express the Casimir-Polder interaction as an integral along the imaginary frequency axis of the product of the atomic and field susceptibilities. Thus in practice the problem is reduced to the calculation of the classical electromagnetic Green's tensor expressed in terms of Fresnel coefficients. Such calculations, while straightforward in principle, tend to be quite tedious and often inevitably lead to the use of numerical methods. However, there is a benefit to studying problems in quantum electrodynamics by using physically transparent methods that do not obscure the basic underlying physics. For the kind of geometry of plane layered dielectrics considered in this paper, the technique of electromagnetic field quantization based on a normal-mode expansion [7] seems to be best emphasizing the physics of the problem, namely the fact that the system supports two kinds of modes of the electromagnetic field [8]: these are travelling modes with a continuous spectrum and trapped modes with a discrete spectrum, i.e. occurring at only certain allowed frequencies. The trapped modes arise because of repeated total internal reflections within the top layer of higher refractive index than the substrate, and emerge as evanescent waves outside the wave-guide. This gives rise to an intricate assortment of evanescent modes outside a layered dielectric where evanescent waves with continuous spectrum, also arising in a half-space geometry [7], are superposed with discrete evanescent modes that arise only in the presence of the slab-like waveguide [1]. In the framework we apply in this work, in the same spirit as e.g. [1,9], the use of standard perturbation theory renders all calculations explicit and it is possible from the outset to track down and remove if necessary any ambiguities that tend to remain hidden in more elaborate theories. For example, linear-response theory results in an integral over the Fresnel reflection coefficients but gives no indication of whether the evanescent waves associated with the trapped modes contribute to the Casimir-Polder interaction or not. The question is answered at once if the normal-modes approach is used instead, see [8,11]. Also, interpretations of more complicated field-theoretical approaches [10] can be put to an explicit test [1]. The purpose of this paper is twofold. Firstly, it aims to support current experimental efforts by providing a range of analytical formulae useful for quick estimates of the dispersion forces acting on an atom placed in the vicinity of the layered dielectric, with particular emphasis on the corrections caused by the layer as compared to the standard half-space results reported in [9]. It also investigates the resonant interaction between an excited atom and a layer in the search for the possible enhancement or suppression of the Casimir-Polder force. 
Secondly, it formulates a simple and explicit theory based on well understood concepts of theoretical physics such as perturbation theory and electromagnetic field quantization in terms of a normal-modes expansion. The theoretical aspect, although serving only as a means to a practical end result, turns out to be interesting in its own right. The perturbative approach used in this work leads to the problem of the summation over the modes of the electromagnetic field, which is non-trivial because of the dual character of the modes of the electromagnetic field. The task of adding the discrete and continuous field modes is elegantly accomplished by the use of complex-integration techniques. This allows us to explicitly show that the canonical commutation relations between the field operators are satisfied, which is equivalent to saying that the completeness relation of the normal-modes holds in the geometry considered. Although this is not a surprise because the field modes are solutions of a Hermitean operator's eigenvalue problem, the explicit calculation we carry out provides us with the mathematics necessary to complete a typical perturbative calculation in this geometry. It also allows us to cast the end result in a simple and elegant form that is easy to study analytically in various asymptotic regimes. The same technique could be applied to any similar perturbative problem is such a geometry. This paper is organised as follows. First we quantize the electromagnetic field in the presence of a layered dielectric, Section II. Then, in Section II C, we explicitly prove the completeness relation for the electromagnetic field modes. Equipped with the necessary mathematical tools, we proceed to calculate the energy shift in Section III, and then study it analytically (Section IV) and numerically (Section V). Our ultimate aim is to work out the energy-level shift in an atom caused by the presence of a layered dielectric. In order to obtain a result that fully takes into account retardation effects, the quantization of the electromagnetic field is necessary. To emphasize the physics of the problem we choose to quantize the electromagnetic field by a normal-mode expansion as described in [12]. The dielectric environment we consider (cf. Fig. 1) consists of a substrate, a dielectric half-space occupying the region of space z < −L/2 described by a dielectric constant ǫ s = n 2 s , and on top of that substrate an additional dielectric layer of thickness L, which has a dielectric constant ǫ l = n 2 l . We assume that the dielectric constant of the layer is higher than that of the substrate ǫ l > ǫ s in order to account for modes that are trapped inside the layer. Although we work with this assumption, the final result will turn out to be valid even when the reflectivity of the substrate exceeds that of the layer, but that is the physically less interesting case. Throughout this paper we shall assume all dielectric constants to be frequency independent so that the optical properties of the system are described solely by a pair of real numbers, ǫ l and ǫ s . To solve Maxwell equations for the electromagnetic field operators in the Heisenberg's picture we introduce, in the usual manner [13], the electromagnetic potentials A(r, t) and Φ(r, t) and work in the generalized Coulomb with the dielectric permittivity being a piecewise constant function as shown in Fig. 1. 
In the absence of free charges one can set Φ(r, t) = 0 and work only with the vector potential A(r, t) which satisfies the wave equation Note that right on the interfaces condition (1) is singular due to discontinuities of the dielectric function and equation (2) does not hold at these points. The normal-modes of the field f (r)e iωt satisfy the Helmholtz equation and we have labelled them by their wave-vector k and polarization λ = {TE, TM}. This mode decomposition allows one to solve the field equation (2) in each distinct region of space separately and then stitch up the solutions across the interfaces by demanding that they are consistent with the Maxwell boundary conditions, i.e. that E , D ⊥ , and B are all continuous. The Helmholtz equation (3) is in fact the eigenvalue problem of an Hermitean operator [12] so that we expect the field modes √ ǫf kλ (r) to form a complete set of functions suitable for describing any field configuration. The completeness relation takes the form with δ ǫ ij (r, r ′ ) being the unit kernel in the subspace of functions satisfying (1); we shall call this the generalized transverse delta-function. From quite general considerations [14] we can expect it to be given by with the electrostatic Green's function of the Laplace equation given by where ρ = |r − r ′ | and, for brevity, we have chosen to confine ourselves to the case z, z ′ > L/2. The function J 0 in the above equation is a Bessel function of the first kind [15, 9.1.1]. The outline of the derivation of the Green's function is given in Appendix B. The sum over all modes in equation (5) is complicated because the spectrum of the field modes has non-trivial structure. It has been shown previously [8,16] that the system supports two kinds of quite distinct types of modes. There are travelling modes going from left to right or in the opposite direction, and there are guided modes that are trapped by the dielectric layer, which essentially acts as a wave-guide. The spectrum of the travelling modes is continuous whereas the spectrum of the modes trapped in the dielectric layer is discrete and only some values of the (perpendicular) wave vector are allowed, namely those satisfying a certain dispersion relation. This dual character of the spectrum of the field modes is a major obstacle in working with these modes and calculating, e.g. the energy shift of an atom nearby, but an elegant solution to this problem has been developed in [17], whose basic idea we follow here. We choose the normalization of the mode functions √ ǫf kλ (r) according to the convention Then, the electric field E(r) = −∂ t A(r) expanded in terms of the normal-modes can be written as where H.C. stands for Hermitean conjugate. The photon creation and annihilation operators, a † kλ and a kλ , satisfy bosonic commutation relation where the top and bottom of the RHS corresponds to the travelling and trapped photons, respectively. In order to be able to write out the electromagnetic field operators explicitly one needs to solve the eigenvalue problem (3) and determine the spatial dependence of functions f kλ (r) so we turn our attention to this now. A. Travelling modes Before we work out the travelling modes, for further convenience, we introduce Fresnel coefficients for a single interface. For that we assume that a plane wave is travelling from a medium with refractive index n b to a medium with the refractive index n a , and that the interface is the z = 0 plane. 
Then, the standard Fresnel reflection and transmission coefficients are given by [13] where k zi are the components of the wave vectors perpendicular to the interface in the medium i = {a, b}. The geometry of the problem (cf. Fig. 1) naturally divides the space into three distinct regions. Consequently there are three wave vectors to be distinguished. The wave vector in vacuum (z > L/2) the wave vector in the dielectric layer (|z| < L/2) and the wavevector in the substrate (z < −L/2) The components of the wave vector that are parallel to the surface are the same for all three regions of space. This follows directly from the requirement that the boundary conditions must be satisfied at all points of a given surface i.e. the spatial phase factors e iki·r must be equal at z = ±L/2 for all r . The different signs of the z-components of the wave vectors correspond to the waves propagating in different directions. However, the direction of the propagation of a particular mode needs to be consistent in all three layers so we require that on the real axis sign(k z ) = sign(k zl ) = sign(k zs ). Since the frequency ω of a single mode is fixed, the zcomponents of the wave vectors in the dielectric are related to the vacuum wave vector k z by The mode functions f kλ (r) are transverse everywhere except right on the interfaces z = ±L/2, cf. (1). To ensure this transversality, it is convenient to introduce orthonormal polarisation vectors defined aŝ with ∆ being the Laplace operator expressed in Cartesian coordinates and it is understood that the above operators act on the factors of the type e ik ± i r , i.e. e λ (k ± i ) ≡ê λ (∇)e ik ± i r . Polarization vectors defined in such a way are normalized to unity provided all three components of the wave vector are real. However, they are not of unit length in the case of evanescent waves which have wave vectors with pure imaginary components. The spatial dependence of the mode functions is worked out requiring that each mode consists of the incoming, reflected and transmitted parts that are joined together by standard boundary conditions across the interfaces, i.e. that E , D ⊥ and B are continuous. From this it is straightforward to derive that the travelling modes of the system incident from the left, normalized according to (8), are given by whereas the right-incident modes are given by For the sake of clarity the complete list of reflection and transmission coefficients is given in Appendix A. Here we only write down the ones most relevant for the calculation at hand: B. Trapped modes Trapped modes arise from repeated total internal reflections within the layer of higher refractive index n l . This happens when the angle of incidence of the incoming wave is sufficiently high and exceeds the critical angle. This critical angle is different for the two opposite waveguide interfaces. First consider the layer-vacuum interface. From equation (16) we can obtain the reciprocal relation expressing the k z in terms of the k zl Thus, whenever k 2 zl < (n 2 l − 1)k 2 then k z becomes pure imaginary and we have a mode that exhibits evanescent behaviour on the vacuum side. The sign of the square root is chosen such that these modes decay exponentially when one goes away from the layer in the positive z-direction. This also ensures that there truly is total internal reflection, i.e. that |r vl λ | 2 = 1. 
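Since the displayed expressions for the single-interface Fresnel coefficients and for the layered reflection coefficients are not reproduced above, the sketch below implements them in one common convention, in terms of the perpendicular wave-vector components in each medium; the sign and normalization conventions, and the two-interface (Airy) composition formula used for the layered structure, are assumptions of this sketch and may differ in detail from the coefficients listed in the paper's Appendix A.

import numpy as np

def kz(n, omega_c, kpar):
    """Perpendicular wave-vector component in a medium of refractive index n.
    omega_c = omega/c. The principal square root gives evanescent waves a
    positive imaginary part, i.e. decay away from the interface."""
    return np.sqrt(complex(n**2 * omega_c**2 - kpar**2))

def r_single(kz_in, kz_out, n_in, n_out, pol):
    """Standard single-interface amplitude reflection coefficient,
    incidence from the medium labelled 'in'."""
    if pol == "TE":
        return (kz_in - kz_out) / (kz_in + kz_out)
    return (n_out**2 * kz_in - n_in**2 * kz_out) / (n_out**2 * kz_in + n_in**2 * kz_out)

def r_layered(omega_c, kpar, n_l, n_s, L, pol):
    """Vacuum-side reflection coefficient of a layer of thickness L on a
    substrate, via the standard two-interface (Airy) composition formula."""
    k_v, k_l, k_s = kz(1.0, omega_c, kpar), kz(n_l, omega_c, kpar), kz(n_s, omega_c, kpar)
    r_vac_layer = r_single(k_v, k_l, 1.0, n_l, pol)
    r_layer_sub = r_single(k_l, k_s, n_l, n_s, pol)
    phase = np.exp(2j * k_l * L)
    return (r_vac_layer + r_layer_sub * phase) / (1 + r_vac_layer * r_layer_sub * phase)

# Example: a layer with n_l = 2.0, 50 nm thick, on an n_s = 1.5 substrate,
# probed at the wavelength of a 780 nm atomic transition
omega_c = 2 * np.pi / 780e-9
for pol in ("TE", "TM"):
    print(pol, np.round(r_layered(omega_c, 0.3 * omega_c, 2.0, 1.5, 50e-9, pol), 4))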
However, since on the other side of the waveguide we have a substrate rather than vacuum, not all of the modes that get totally internally reflected at the vacuum-layer interface necessarily get trapped. From the relation we obtain the condition of total internal reflection for the substrate-layer interface to be k 2 zl ≤ (n 2 l /n 2 s − 1)k 2 . Therefore, modes satisfying the condition are not trapped but appear in vacuum as a continuous spectrum of evanescent waves that are accounted for among the left-incident travelling modes. (They are analogous to the evanescent modes that occur at a singleinterface half-space, for which the normal-mode quantization was first presented in [7].) On the other hand, trapped modes occur if The procedure for obtaining the trapped modes is largely equivalent to that of the travelling modes. They can be written in the form The boundary conditions are imposed on both interfaces. From the boundary at z = −L/2 we get whereas from the z = L/2 boundary Since both equations, (30) and (31), need to be simultaneously satisfied we obtain a dispersion relation for these modes, which determines the allowed values of k zl within the layer. Since we will be dealing with an atom on the vacuum side it will be necessary to express the dispersion relation in terms of k z rather than k zl . It is straightforward to show that the allowed values of the z-component of the evanescent waves' wave vector appearing on the vacuum side are given by numbers q n λ : with The numbers q n λ lie on the imaginary k z -axis; they satisfy, cf. Eq. (25) and (28), The normalization constant N λ for trapped modes is easily obtained by direct evaluation of the integral (8). It is given by and the reader is reminded that in (35) the z-components of the wave vectors k and k s are pure imaginary and because of that the TM polarization vectorsê TM (k − ) andê TM (k − s ) are no longer normalized to unity, i.e. |ê TM (k − s )| 2 = 1. C. Field operators and commutation relations. Completeness of the modes. Now that we have determined the spatial dependence of the mode functions we are in position to write out the vector potential field operator explicitlŷ The sum in the last term runs over the allowed values of the z-component of the layer's wave vector k zl , i.e. the solutions of the dispersion relation (32). For a given type of mode, left-incident, right-incident, or trapped, photon creation and annihilation operators appearing in (36) satisfy the commutation relations (10). Commutators between photon operators corresponding to different types of modes vanish as a consequence of the orthogonality of the field modes (8), e.g. We would like to verify explicitly the equal-time canonical commutation relation between field operators, say, between the electric field operatorÊ(r, t) and the vector potential operatorÂ(r, t) with δ ǫ ij (r, r ′ ) given by Eq. (6) and (7). To evaluate (38) we shall need the electric field operator which is easily obtained from Eq. (36) using the relation E = −∂ t A. Plugging in the field operators into (38) and making use of commutation relations (10) and (37), we find that the LHS of (38) is given by The quantity on the right-hand side is the sum over all modes, just as prescribed by equation (5), and therefore we expect it to be equal to the generalized transverse delta function, Eq. (6). This shows that the statement of the completeness of the modes (5) is in fact equivalent to the commutation relation (38), as has been noted before in [18]. 
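The dispersion relation for the trapped modes is not written out explicitly above, but for a non-magnetic asymmetric slab (vacuum / layer n_l / substrate n_s) the TE guided modes satisfy the textbook phase condition k_zl L = arctan(γ_v/k_zl) + arctan(γ_s/k_zl) + mπ, where γ_v and γ_s are the evanescent decay constants in vacuum and in the substrate. The following sketch locates the guided TE modes at fixed parallel wavenumber by root-finding on this condition; it is the standard slab-waveguide form expressed in terms of k_zl rather than the vacuum-side numbers q_n used in the text, and it may differ by convention from the paper's Eq. (32).

import numpy as np
from scipy.optimize import brentq

def te_guided_modes(kpar, n_l, n_s, L):
    """Transverse wavenumbers k_zl of TE modes trapped in a layer (index n_l,
    thickness L) on a substrate (index n_s) below vacuum, at fixed kpar.
    Textbook asymmetric-slab phase condition:
        k_zl * L = arctan(gamma_v / k_zl) + arctan(gamma_s / k_zl) + m * pi."""
    k_max = kpar * np.sqrt(n_l**2 / n_s**2 - 1)      # trapping window 0 < k_zl < k_max

    def phase(kzl, m):
        gv = np.sqrt(kpar**2 * (n_l**2 - 1) - kzl**2) / n_l          # vacuum decay constant
        gs = np.sqrt(kpar**2 * (n_l**2 - n_s**2) - n_s**2 * kzl**2) / n_l  # substrate decay constant
        return kzl * L - np.arctan(gv / kzl) - np.arctan(gs / kzl) - m * np.pi

    roots, m = [], 0
    while phase(1e-9 * k_max, m) * phase((1 - 1e-9) * k_max, m) < 0:
        roots.append(brentq(phase, 1e-9 * k_max, (1 - 1e-9) * k_max, args=(m,)))
        m += 1
    return roots

# Example: n_l = 2.0, n_s = 1.5, L = 1 um, k_parallel = 1e7 m^-1
for kzl in te_guided_modes(kpar=1e7, n_l=2.0, n_s=1.5, L=1e-6):
    print(f"trapped TE mode at k_zl = {kzl:.4e} m^-1")

For these example parameters the scan finds three trapped TE modes; a thicker layer or larger index contrast adds more, while a sufficiently thin layer supports none, which is exactly the discrete part of the spectrum discussed in the text. The TM condition is analogous, with the decay constants weighted by the squared refractive indices.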
To prove that the relation holds for z, z ′ > L/2 we need to work out the sum over all field modes. To start with we carry out a change of variables in (40): we convert the k zs -integral and the k zl -sum to run over the values of k z . In the case of the k zs -integral this is a simple change of variables according to (17) ∞ 0 dk zs = n 2 with Γ s = (n 2 s − 1)k 2 /n s . Here it is seen explicitly that the contributions from the left-incident modes split into a travelling part and an evanescent part. The values of k z included in the last integral correspond to the condition for evanescent modes with continuous spectrum, Eq. (27). In the case of the sum we change the summation over k zl to run over the values of k z as defined by equation (33). Plugging in the mode functions (20) and (21) into equation (40) and utilizing straightforward properties of the reflection and transmission coefficients that hold for real k z , k zs , we can rewrite the completeness relation as The first term in the above equation is the standard transverse delta-function. Therefore, if equation (40) is to hold, the term in the curly brackets needs to be proportional to the reflection part of the electrostatic Green's function, cf. the second term on the RHS of Eq. (7). That this is indeed the case is at this stage far from obvious, as for the proof one would need to combine two integrals and a sum into one expression. Obviously, the discreteness of the spectrum of the trapped modes is a nuisance that needs to be overcome if one is to complete the task of summing over the electromagnetic modes successfully. A similar difficulty would arise in any perturbative calculation in this type of geometry, which motivated a previous investigation of this problem for the symmetric case of a single slab of dielectric material [17]. We proceed with a broadly analogous method to [17], first noting that what we have here can be considered as a superposition of a slab and a half-space geometry, cf. [17] and [18]. One can utilize the branch-cut due to k zs (which runs along the imaginary k z axis between ±iΓ s , cf. Fig. 2) to express the integral over |T L λ | 2 in (43) as an integral over the reflection coefficient R R λ that runs from 0 − along the square root cut up to the branch-point at +iΓ s and then back down to the origin 0 + . Note that the branchcut due to the k zl is irrelevant because of the symmetry property of the reflection coefficient R R λ (−k zl ) = R R λ (k zl ). In this way, the first two integrals in the curly braces in equation (43) can be combined together as a single integral in the complex k z plane [18]. This is possible because the relation continues to hold for coefficients (22) with a purely imaginary z-component of the vacuum wave vector, k z (cf. [19]). Thus, the contributions from the travelling and evanescent modes can be combined into a single contour integral along the path γ s depicted in Fig. 2 and the terms appearing in the curly brackets on the RHS of Eq. (43) become Here we have now included the polarization vectors explicitly in the integrals, which is a crucial step as they affect the analytical structure of the integrand in the complex k z -plane. In particular, the TM polarization vector introduces a pole at the points k z = ±i|k | due to the factor 1/|k| 2 in its normalization factor. We will see that it is precisely this pole that gives rise to the reflection term in (6). We note that, according to Eq. (22), the reflec- tion coefficient contains the phase factor e −ikzL . 
Thus, since z + z ′ − L > 0, the argument of the exponential in (45) has a negative real part in the upper half of the complex k z plane and we can evaluate the k z -integral in Eq. (45) by closing the contour in the upper half-plane. For this we need to determine the analytical properties of R R λ . We note that the denominator of the reflection coefficient (22) is precisely the dispersion relation (32). Rewriting the reflection coefficients in the form , allows us to deduce that R R λ has a finite number of simple poles on the imaginary axis. When closing the contour we enclose all of them and by Cauchy's theorem the problem is reduced to the evaluation of the residues at these points: Here, the first term represents the contributions from the poles in the reflection coefficient and corresponds the trapped modes, whereas the second term represents the contribution from the pole that arises due to the TM polarization vector. When calculating the residues explicitly one needs to remember that the two independent variables are k z and k and that, according to Eq. (16) and (17), k zl and k zs are functions of those. In addition, the denominator of the reflection coefficient is not of the form f (k z )(k z − q n λ ) so that multiplying it by (k z − q n λ ) does not remove its singularity; the whole expression is still indeterminate. Therefore, L'Hospital's rule needs to be used to evaluate the limit (cf.[17, Section V]). Doing so, we find that where G H (r, r ′ ) is the reflected part of the Green's function of the Poisson equation given in Eq. (7) and derived in Appendix B. We see that the poles of the reflection coefficient R R λ yield a term that exactly cancels out the contributions of the trapped modes to the completeness relation (43) whereas the pole of the TM polarization vector yields the term proportional to Green's function. Thus, the final result can be written as which is precisely what we have anticipated earlier. In the next section we demonstrate how the calculation presented here may be applied to accomplish typical perturbative QED calculations in a layered geometry. III. ENERGY SHIFT To work out the energy shift we use standard perturbation theory where the atom is treated by means of the Schrödinger quantum mechanics and only the electromagnetic field is second-quantized. We work with a multipolar coupling where the lowest order of the interaction Hamiltonian is Then the energy shift of the atomic state i, up to the second-order, is given by Here, µ is the atomic electric dipole moment, and the composite state |j; 1 kλ describes the atom in the state |j with energy E j and the photon field containing one photon with momentum k and polarization λ. Because the electric field operator is linear in the photon creation and annihilation operators, the first-order contribution vanishes and the second-order correction is the lowestorder contribution. Since the electric field does not vary appreciably over the size of the atom we use the electric dipole approximation. Then the energy shift can be expressed as where r 0 = (0, 0, z 0 ) is the position of the atom and we have abbreviated E ji = E j − E i . It is seen that the calculation involves a summation over the modes of the electromagnetic field as carried out in the proof of the completeness relation (43). Equation (49) can be written out explicitly as with |µ m | 2 ≡ | i|µ m |j | 2 . There are four distinct contributions to the energy shift. 
∆ vac is the positionindependent contribution caused by the vacuum fields and gives rise to the Lamb shift in free space The remaining three contributions come from the travelling, evanescent, and trapped modes, respectively, with z 0 being the position of the atom with respect to the origin. Note that because of the dipole approximation the shorthand notation for polarisation vectors (19) can be no longer applied. Normally one is interested in the energy shift caused by the presence of the dielectric boundaries only i.e. the correction to the shift that would appear in the free space. Therefore, we renormalize the energy-level shift (50) by subtracting from it its free space limit, i.e. The renormalization procedure amounts to the removal of the contributions ∆ vac , Eq. (51), from the energy shift (50) and takes care of any infinities that would appear otherwise, provided we treat the remaining parts with care. As noted elsewhere [1], the contributions (52) suffer from convergence problems when treated separately. However, appropriate tools to handle the problem have been developed in Sec. II C. We aim to combine ∆ trav , ∆ evan and ∆ trap into one compact expression that is easy to handle analytically. We can use the same trick as in the proof of the completeness relation because the analytical structure of the integrand in the complex k z -plane is the same except for the function ω = (k 2 + k 2 z ) 1/2 that comes about due to the denominator of perturbation theory and introduces additional branch-points at k z = ±i|k | as compared to Fig. 2. This poses no difficulties though, if one chooses the branch-cuts to lie between ±i|k | and ±i∞. Then, the contributions to the energy shift from the travelling modes ∆ trav and the evanescent modes ∆ evan can be combined together into a single complex integral as explained in the steps between Eq. (43) and Eq. (45). This is possible because for imaginary k z we have e m * λ (k + ) = e m λ (k − ), whereas for real k z the relation e m * λ (k − ) = e m λ (k − ) holds. On the other hand, we also know from Eq. (47) that the sum in ∆ trap is equal to an integral over the reflection coefficient R R λ taken along any clockwise contour enclosing all of it's poles. Choosing this contour to run from k z = 0 − + iΓ s to k z = 0 − + iΓ l and then back down from k z = 0 + + iΓ l to k z = 0 + + iΓ s , cf. Fig. 3, we write down the renormalized energy shift compactly as (54) where the contour of integration γ l is shown in Fig. 3. It resembles that of Fig. 2 applicable to ground-state atoms |0 as it is to atoms that are in an excited state |i provided we use the contour of integration as given in Fig. 3 and interpret the k z integral as a Cauchy principal-value. As renormalization has now been dealt with we shall from now on omit the superscript "ren" and designate the renormalized energy shift of Eq. (54) simply by ∆E i . A. Ground state atoms In the case of a ground-state atom the energy difference E j0 ≡ E j − E 0 is always positive hence the denominator in Eq. (54) that originates from second-order perturbation theory, E j0 + ω, never vanishes. Then, Eq. (54) contains no poles in the upper half of the k z -plane other than those due to the reflection coefficient R R λ . To evaluate the k z integral we can deform the contour of integration in Eq. (54) from that sketched in Fig. 3 to the one as shown in Fig. 4 which is beneficial from the computational point of view as it simplifies the analysis of Eq. (54) considerably. 
Writing out explicitly the sums over the polarization vectors (19) and then expressing the integral in the k -plane in polar coordinates, k x = k cos φ, k y = k sin φ, where the angle integral is computable analytically, we rewrite the energy shift as with ω(k z ) = k 2 + k 2 z , |µ | 2 = |µ x | 2 + |µ y | 2 and the contour C is that in Fig. 4. The amended reflection coefficientsR R λ are given bỹ i.e. we have pulled out the phase factor e −ikzL in order to define Z = z 0 − L/2 as the distance between the atom and the surface, cf. Eq. (22). In order to perform the k z integration in (55) we need to analytically continue the function ω = ω(k z ), which is real and positive on the real axis, to the both sides of the branch cut along which the integration is carried out, (cf. Fig. 4). Doing so we find that on the LHS of the cut the positive value of the square root needs to be taken, and hence on the RHS of the cut we must take the opposite sign. Therefore we have . Now we carry out a sequence of changes of variables. First we re-express the k z integration in terms of one over the frequency ω by substituting ω = k 2 + k 2 z , Then, we make the integral run along the real axis by setting ω = iξ. After this is done, the energy shift of the ground state is expressed as a double integral that covers the first quadrant of the (k , ξ)-plane It seems natural to introduce polar coordinates, k = x sin φ, ξ =x cos φ. We also choose to scale the radial integration variablex = E j0 x with E j0 > 0 and set y = cos φ. This provides us with the final form of the energy shift that is more suitable for numerical computations and asymptotic analysis The reflection coefficientsR R λ are as expressed in (56) but with the wave vectors given by Note that even though the wave vector is imaginary, the final result is a real number, as it should, because the Fresnel coefficients contain only ratios of wave vectors. B. Excited atoms As mentioned previously, the energy-level shift of an excited atom is also given by Eq. (54). However, one needs to take account of the fact that the quantity E ji ≡ E j − E i can now become negative for E j < E i , so that the denominator originating from perturbation theory contributes additional poles lying on the path of k z integration, shown in Fig. 3 and is now to be understood as a Cauchy principal-value. These poles are located at k z = ± E 2 ji − k 2 , though their precise location depends on the value of |k | that is not fixed but varies as we carry out the k integrations in equation (54). For |k | ∈ [0, |E ji |] the poles are located on the real k z axis but as we increase the value of |k | to exceed |E ji | both poles move onto the positive imaginary axis according to the convention that Im(k z ) > 0. For |k | belonging to the interval [|E ji |, n s |E ji |] the poles are located on the opposite sides of the branch-cut due to the k zs and care needs to be taken when evaluating those pole contributions. To evaluate the Cauchy principal-value of the k z -integral we circumvent the poles and close the contour in the upper half-plane, as was done in the previous section. The contribution from the large semicircle vanishes and equation (54) acquires pole contributions that are easily worked out by the residue theorem. The energy shift splits into the a "non-resonant" ground-state-like part ∆E i and a "resonant" oscillatory part ∆E res i that arises only if the atom is in an excited state. 
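A quick numerical sanity check of the statement above that the ground-state shift comes out real even though the rotated wave vectors are imaginary: on the Wick-rotated axis all three z-wavevectors are purely imaginary with a common factor of i, so the Fresnel ratios and the round-trip factor are real. The sketch below evaluates the same assumed two-interface form as before, written in terms of the unscaled imaginary frequency ξ and k_‖ rather than the paper's scaled variables; all numbers are illustrative.

```python
import numpy as np

def layer_reflection_wick(xi, k_par, n_l, n_s, L, pol="TM"):
    """Assumed two-interface (Airy) reflection coefficient evaluated at imaginary
    frequency omega = i*xi, i.e. with k_z = i*sqrt(xi**2 + k_par**2); illustrative only."""
    kappa   = np.sqrt(xi**2 + k_par**2)            # |k_z| on the imaginary axis
    kappa_l = np.sqrt(n_l**2 * xi**2 + k_par**2)   # |k_zl|
    kappa_s = np.sqrt(n_s**2 * xi**2 + k_par**2)   # |k_zs|
    if pol == "TE":
        r_vl = (kappa - kappa_l) / (kappa + kappa_l)
        r_ls = (kappa_l - kappa_s) / (kappa_l + kappa_s)
    else:
        r_vl = (n_l**2 * kappa - kappa_l) / (n_l**2 * kappa + kappa_l)
        r_ls = (n_s**2 * kappa_l - n_l**2 * kappa_s) / (n_s**2 * kappa_l + n_l**2 * kappa_s)
    damping = np.exp(-2.0 * kappa_l * L)           # exp(2i*k_zl*L) with k_zl = i*kappa_l
    return (r_vl + r_ls * damping) / (1 + r_vl * r_ls * damping)

# every quantity entering is real: ratios of (commonly imaginary) wave vectors and a real
# damping factor, so the reflection coefficient, and hence the shift, is real
print(layer_reflection_wick(xi=0.7, k_par=1.2, n_l=1.8, n_s=1.5, L=2.0))
```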
In analogy to the result of the previous section, the "non-resonant" part is given by with wave vectors expressed as whereas the "resonant" part is given by with wave vectors expressed as The reflection coefficients are as given in (56). The integral in Eq. (61) contains poles because the dispersion relation present in the denominators of the reflection coefficients has now solutions on the real axis when q ∈ [n s , n l ]. This signals contributions from surface excitations (trapped modes). This fact has been mentioned in [2] where the interaction of an excited atom with layered dielectric has been studied, although using mainly numerical analysis. Here we will attempt to study the results (59) and (61) analytically. To do so it will prove beneficial to rewrite equation (61) slightly. We change variables according to 1 − q 2 = η and split the contributions to Eq. (61) into two parts. The first one is a contribution from the travelling modes and given by where the wave vectors in reflection coefficients are all real and can be expressed as and the second is a contribution from the evanescent modes ∆E res,evan where the wave vectors in reflection coefficients can be expressed as Finally, it is worth noting that the imaginary part of Eq. (61) is actually proportional to the modified decay rates [9]. These have already been studied in [16] so that we focus on energy shifts only. However, the methods of analysis that are reported in the next section do allow one to write down at once equivalent analytical formulae for the decay rates. IV. ASYMPTOTIC ANALYSIS The interaction between the atom and the dielectric is electromagnetic in nature and it is mediated by photons. The atomic system in state |i evolves in time with a characteristic time-scale that is proportional to E −1 ji , with E ji being the energy-level spacing between the states |i and |j which are connected by the strongest dipole transition from state |i . Since it takes a finite time for the photon to make a round trip between the atom and the surface, the atom will have changed by the time the photon comes back. Therefore, the ratio of the time needed by the photon to travel to the surface and back and the typical time-scale of atomic evolution is a fundamental quantity that plays decisive role in characterising the interaction. In natural units, if 2E ji Z ≪ 1 we can safely assume that the interaction is instantaneous and we are in the so-called non-retarded or van der Waals regime. If 2E ji Z ≫ 1 the interaction becomes manifestly retarded as the atom will have changed significantly by the time the photon comes back. However, the problem we have considered here provides us with yet another length scale, namely the thickness of the top layer L. We shall now consider the energy shift in various asymptotic regimes. A. Ground state atoms. Electrostatic limit, (2EjiZ ≪ 1) In this limit the interaction is instantaneous (or electrostatic) in nature and the energy shift is obtainable using the Green's function of the classical Laplace equation (cf. e.g. [20]). This classical derivation is outlined in the Appendix B. The end result for the energy shift reads with µ 2 ≡ µ 2 x + µ 2 y and µ 2 ⊥ ≡ µ 2 z . We will now show that one can also obtain the above result as a limiting case of the results of previous section, thus providing a cross-check for our general calculation. To start with we note that equation (58) cannot be used to take the electrostatic limit in which we mathematically let E ji → 0 because it has been scaled with E ji . 
Therefore, it is best to start from equation (54). The result of Eq. (66) can be derived very quickly if we observe that in the limit E ji → 0 the branch cut due to ω = k 2 + k 2 z is no longer present and the contour in Fig. 4 collapses to a simple enclosure of the point k z = i|k |. The contribution from the TE mode vanishes as the product of the polarization vectors is regular at k z = i|k |, but for the TM mode this point is a simple pole, cf. Eq. (19). Therefore we obtain Taking the limit and expressing the remaining integrals in polar coordinates, where the angle integral is elementary, yields equation (66) (66) can be further analysed depending on the relative values of L and Z. Thin layer (Z/L ≫ 1) In this case the distance of the atom from the surface is much greater than the thickness of the layer of refractive index n l (but still small enough for the retardation to be neglected). Then, rescaling the integral in equation (66) with k = x/L allows us to use Watson's lemma [22] to derive the following result with the coefficients a i given by where ∆E el ns is the well-known electrostatic interaction energy between an atom and a dielectric half-space of refractive index n s that can be obtained by the method of images The corrections to this result are represented by the remaining elements of the asymptotic series. Note that if n l > n s then a 1 > 0 and, not surprisingly, the interaction, as compared to a half-space alone, is enhanced by the presence of the thin dielectric layer of higher refractive index n l . Thick layer (Z/L ≪ 1) In this case the thickness of the layer is much greater than the distance between the atom and the surface. The top layer now appears from the point of view of the atom almost as a half-space of refractive index n l only that it is in fact of finite thickness. To analyse the result (66) in this limit we cast it in a somewhat different form. Note that, especially when kL is large but not only then, 66) can be written as geometrical series. Since the series is absolutely convergent we can integrate it term by term and obtain the following representation of the electrostatic result where ∆E el n l is the electrostatic energy shift due to a single half-space of refractive index n l , i.e. Eq. (68) with n s replaced by n l . The sum in Eq. (70) represents the correction to ∆E el n l due to the finite thickness of the layer. For fixed Z and L it can be easily computed numerically to any desired degree of accuracy. We note however, that to the leading order in Z/L the interaction is weakened by the same amount independently of the distance of the atom from the surface and therefore is not measurable. The next-to-leading order correction is the first to be distance-dependent and is proportional to Z/L 4 , which can be easily seen by expanding the factor in series around Z/νL = 0: B. Ground state atoms. Retarded limit, (2ZEji ≫ 1) Thin layer (Z/L ≫ 1) In this case we study the situation when the top layer is much thinner than the distance between the atom and the surface. To obtain the asymptotic series we use Watson's lemma in much the same way as in the electrostatic case [21]. Series expansion of the integrand in Eq. (58) about x = 0 decouples the integrals and the resulting integrals can be calculated analytically. 
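Watson's lemma, invoked here and explained in footnote [22], is easy to illustrate numerically on a toy integrand. The example below is not the integrand of Eq. (58); it is just a minimal demonstration of why expanding about the origin captures the large-parameter behaviour of an exponentially damped integral.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import exp1

# Toy integrand g(t) = 1/(1+t): the exact value of I(lam) = int_0^inf exp(-lam*t) g(t) dt
# is exp(lam)*E1(lam), while Watson's lemma gives I(lam) ~ sum_n (-1)^n n! / lam^(n+1).
def exact(lam):
    return np.exp(lam) * exp1(lam)

def watson(lam, n_terms=4):
    return sum((-1)**n * factorial(n) / lam**(n + 1) for n in range(n_terms))

for lam in (5.0, 20.0, 100.0):
    numeric = quad(lambda t: np.exp(-lam * t) / (1 + t), 0, np.inf)[0]
    print(lam, numeric, exact(lam), watson(lam))
```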
Thus, to first approximation, for an atom located sufficiently far from the interface, the impact of the thin dielectric layer on the standard Casimir-Polder interaction can be described by where ∆E ret ns is the retarded limit of energy shift as caused by a single dielectric half-space of refractive index n s , which was calculated in [9]. We give this result in Appendix C. The coefficients a and a ⊥ in (72) can be expressed in terms of elementary functions as Both, a and a ⊥ , are positive for n l > n s so that, as one would expect, the interaction, as compared to a halfspace alone, is enhanced by the thin dielectric layer of the higher refractive index n l . The above result simplifies significantly in the case when n s approaches unity i.e. when the situation resembles that of an atom interacting with a dielectric slab of refractive index n l . The coefficients a and a ⊥ reduce then to those recently calculated in [1] and are given by a = (n 2 l − 1)(9n 2 l + 5) 10n 2 l , a ⊥ = (n 2 l − 1)(5n 2 l + 4) 10n 2 l . Thick layer (Z/L ≪ 1) Here we assume that the thickness of the top layer is much greater than the distance between the atom and the surface, but which is still large enough for retardation to occur. Note that the reflection coefficientR R λ (22) can be separated into L-dependent and L-independent parts in the following manner: This way of writing the reflection coefficient splits the energy shift (58) into a shift due to the single interface of refractive index n l and corrections due to the finite thickness and the underlying material. It can be shown numerically, see Sec. V, that for large values of L the correction term is vanishingly small and can be safely discarded. Brute-force asymptotic analysis allows us to draw similar conclusions as in the electrostatic case, Section IV A 2. To leading order the interaction gets altered by the same amount regardless of the position of the atom with respect to the interface. The next-to-leading-order correction is proportional to Z/L 5 . C. Excited atoms. Non-retarded limit, (2Z|Eji| ≪ 1) The energy shift of an excited atom is given by equations (59) and (61). The "non-resonant" part, i.e. Eq. (59) has the same form as the energy shift of the ground state atom and has been analysed in the previous section. Therefore we now focus on the "resonant" part of the interaction that is given by equation (61). In order to conveniently obtain the non-retarded limit of (61) we will work with its slightly modified form given in equations (62) and (64). We start by noting that close to the interface we expect asymptotic series to be in the inverse powers of Z. Equation (62), where the η integration runs over η ∈ [0, 1], contributes only positive powers of Z. This is most easily seen by expanding the exponential exp(2i|E ji |Zη) about origin as we may do in the limit 2Z|E ji | → 0. Therefore, to leading-order in the electrostatic limit, only (64) contributes. Further we analyse (64) by setting η = β/(|E ji |Z). Then, according to (65), in the limit |E ji |Z → 0 the wave vectors can effectively be approximated as Then the result for the energy shift, after substituting β = kZ, reduces to This result turns out to have the same dependence on Z and L as the Coulomb interaction of the ground state atom, cf. Eq. (66); therefore we shall not analyse Eq. (75) any further. Note however, that the dependence on the atomic states is different in equations (66) and (75). 
We would also like to point out that in the electrostatic limit, to the order we are considering, the quantity ∆E res,el turns out to be real, which would imply that the corrections to the decay rates vanish. However, this conclusion is incorrect as it is known that the change of spontaneous emission in the non-retarded limit is in fact constant for a non-dispersive dielectric half-space [9]. However, any serious analysis of the changes of the decay rates induced by a surface needs to take into account the absorption of the material, which in the non-retarded limit plays a crucial role and cannot be neglected. Furthermore we note that we have started from Eq. (61), which, as explained before, contains poles on the real axis signalling the trapped modes. However, the denominator of (75) never vanishes which reflects the fact that in the electrostatic limit the trapped modes cease to exist and do not contribute towards the energy shifts, as first mentioned in [2]. D. Excited atoms. Retarded limit, (2Z|Eji| ≫ 1) The leading-order behaviour of equation (61) in the retarded limit can be obtained by repeated integration by parts. Unlike in the electrostatic case now both equations, Eq. (62) and Eq. (64) contribute. We integrate them by parts and note that the non-oscillatory contributions that arise from the boundary terms evaluated at η = 0 cancel out. It turns out that the leading-order contributions to the energy shift are due to the perpendicular component of the atomic dipole moment. They dominate the retarded interaction energy and behave as Z −1 . The contributions due to the component of the atomic dipole moment that is perpendicular to the surface contribute only terms proportional to Z −2 . We find that in the retarded limit the interaction energy up to the leading-order is given by where we have defined the optical thickness of the layer as τ = n l L and r vl = 1 − n l 1 + n l , r ls = n l − n s n l + n s . The final result agrees with that derived for a half-space in [9] if we take either L → 0 or n l → n s , which is a consistency check of our calculation. However, the limit of perfect reflectivity of the top layer does not make sense and one has to start from equation (61) and rewrite the reflection coefficient in the form (73) in order to study this case. Equation (76) is valid only approximately when the distance between the atom and the surface is much greater than the wavelength of the strongest atomic dipole transition, but it nevertheless allows us to draw important conclusions. We note that the interaction is resonant i.e. it is enhanced for certain values of LE ji . The most convenient way to understand the essence of these resonance effects is to take the slab limit of equation (76) i.e. set n s = 1. In this limit we have ∆E res,ret It is easily seen that whenever cos(2|Eji|τ ) = 1 then ∆E res,ret i = 0, i.e. the leading-order interaction vanishes. Conversely, the amplitude of oscillations in equation (78) is maximized when cos(2|Eji|τ ) = −1. Therefore we have a condition for resonance in terms of the wavelength of the strongest atomic dipole transition λ ji Eq. (79) holds for Z|E ji | ≫ 1 but if the value of Z|E ji | approaches unity, the relation loses its validity, because complications arise from the fact that when the atom is close to the surface the evanescent waves come into play whereas the condition (79) refers to the interaction of an atom with travelling modes only. In the non-retarded limit Z|E ji | ≪ 1 the notion of resonance loses its meaning altogether, cf. 
Eq. (75). Exploring the extreme case in the retarded limit we note that at anti-resonance i.e. when equation (76) becomes i.e. the atom does not feel the presence of the layer and the interaction assumes the form of that between an atom and a single half-space of refractive index n s , cf. [9]. This means that in the retarded regime the leading-order interaction between an excited atom and a slab of thickness L vanishes whenever the optical thickness of the slab τ = n l L is equal to a half-integer multiple of the wavelength of the dominant atomic transition λ ji (cf. also Fig. 11 later on). Conversely, at resonance the shift becomes ∆E res,ret so that the amplitude of oscillations exceeds the amplitude that would have been caused by a single half-space of refractive index n l . It also reaches the perfect reflector limit n l → ∞ more rapidly. Finally, we shall also remark that the meaning of the conditions (79) and (80) is interchanged if the refractive index of the substrate n s exceeds that of the layer n l i.e. when n s > n l . V. NUMERICAL EXAMPLES In this section we present a few numerical results designed to illustrate the influence of the dielectric layer on the Casimir-Polder interaction between an atom and a dielectric half-space. In practice, the sum over intermediate states j in Eq. (58) and in Eq. (61) is restricted to one or a few states to which there are strong dipole transitions. Hence, we assume a two-level system in which E ji is a single number, namely the energy spacing of the levels with the strongest dipole transition. Additionally, we focus just on the contributions to the energy shift due to the component of the atomic dipole that is parallel to the interface of the dielectrics. The contributions due to the perpendicular components of the atomic dipole moment can be easily generated with from Eq. (58) using standard computer algebra packages like Mathematica or Maple. We start by simple checks on the asymptotic expansions derived in the previous section. We choose to plot the energy-level shift ∆E multiplied by Z 4 so that the asymptotic behaviour of it as a function of distance is more apparent, because Z 4 ∆E for a dielectric half-space approaches constant [9]. Then, one can easily track the variation of the energy shift caused by the top layer as compared to the half-space shifts, Fig. 5 and Fig. 6. We remark that even though the derivation of the energy shift in this paper was based on the assumption n l > n s , the results are also valid in the case when the top layer has a smaller reflectivity than the substrate. In such a case the result can be used e.g. to model a thin layer of oxide or any kind of dirt on the substrate which is often present under realistic conditions. The asymptotic expansion (72) works well for large Z/L and not too high values of the refractive index n l . This is demonstrated in Fig. 7. The increase of the refractive index n l has an impact on the accuracy of the approximation which is valid provided with λ ji being the wavelength of the dominant atomic transition and τ l = n l L is the optical thickness of the top layer. In Fig. 8 we demonstrate the behaviour of the energy shift depending on the various values of the parameter E ji measured in units of the layer's thickness. For small E ji we clearly observe linear behaviour that corresponds to the Z −3 dependence of the shift in the electrostatic regime. 
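Before moving on to the thickness scans and the excited-state examples below, the resonance and anti-resonance conditions of Sec. IV D can be tabulated quickly; the sketch below simply converts them into layer thicknesses for n_l > n_s (the roles swap when n_s > n_l, as noted above). The wavelength and refractive index used are illustrative, not tied to any particular atom or material.

```python
def resonant_thicknesses(wavelength, n_l, m_max=5):
    """Layer thicknesses L at which the leading-order retarded interaction between an
    excited atom and a slab is maximally enhanced (resonance: n_l*L an odd multiple of
    lambda/4) or suppressed (anti-resonance: n_l*L a half-integer multiple of lambda)."""
    resonant      = [(2 * m + 1) * wavelength / (4 * n_l) for m in range(m_max)]
    anti_resonant = [m * wavelength / (2 * n_l) for m in range(1, m_max + 1)]
    return resonant, anti_resonant

# illustrative numbers: a transition wavelength of 780 nm and a layer of index 1.45
res, anti = resonant_thicknesses(780e-9, 1.45)
print([round(L * 1e9, 1) for L in res])    # resonant thicknesses in nm
print([round(L * 1e9, 1) for L in anti])   # anti-resonant thicknesses in nm
```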
We also find it instructive to plot the energy-level shift as a function of the thickness of the top layer L for different values of the refractive index n l while keeping the distance of the atom from the surface fixed, Fig. 9 and Fig. 10. B. Excited atoms The energy shift of an excited atom splits into two distinct parts, cf. Eq. (59) and Eq. (61). The nonoscillatory part displays the same behaviour as the en- ergy shift of the ground-state atoms, which we have already analysed numerically in the previous section. Here we will focus on the oscillatory contributions to the level shifts that are given by Eq. (61). We choose to plot the dimensionless integrals contained in equations (62) and (64) as this is numerically more efficient than plotting the integral in Eq. (61). It should be borne in mind that the reflection coefficients contain the dispersion relation in denominators that now has solutions on the real axis. For the purpose of the present demonstration it is sufficient to simply displace the poles off the real axis by adding small imaginary part to the denominator of the re- 11 we demonstrate that indeed, if the anti-resonance condition (80) is satisfied, the interaction energy between the excited atom and the slab is strongly suppressed for ZE ji ≫ 1. In general, for the layered dielectric rather than the slab, the effect of resonance is shown in Fig. 12 and Fig. 13. Note that the energy-level shift in an excited atom due to the layered dielectric can be significantly enhanced. Unlike in the case of the ground state atom where the energy shift caused by the layered structure of refractive indices n l and n s is bounded by the single half-space shifts (compare Fig. 5), the excited atom can experience shifts greater than those caused by the unlayered half-space of the refractive index n = max(n l , n s ), Fig. 12, which is due to resonance effects. Conversely, it is also possible that the interaction with the layer will be unnoticeable if the anti-resonance condition (80) is satisfied, Fig. 13. Next, in Fig. 14, we show that the approximation of Eq. (61) derived in (76) turns out to be quite accurate and can be safely used to quickly estimate the energy shift in an excited atom caused by the layered dielectric, provided the condition ZE ji ≫ 1 is satisfied. It is also interesting to plot the resonant part of the energy shift as a function of LE ji while keeping ZE ji fixed. This is done in Fig. 15. It is seen that the energy shift indeed experiences the oscillatory resonant behaviour. The subsequent minima and maxima are less and less pronounced as the value of LE ji increases. This is because as we increase LE ji the resonances and antiresonances move closer and closer together so that their effects cancel out. It is interesting to note that this behaviour could not have been inferred from equation (76), which indicates that the approximation (76) can be useful only for LE ji ≪ 1, which can also be easily verified numerically. VI. SUMMARY Using perturbation theory we have calculated the energy-level shift in a neutral atom placed in front of a layered dielectric half-space, as shown in Fig. 1. The major difficulty in working out the energy shift is the sum over all modes that appears in this type of calculation, Eq. (50), especially when the spectrum of the modes consists of the continuous and discrete parts, Sec. II A and II B. 
This obstacle can be circumvented by using complex-variable techniques to express the sum over all modes as a single contour integral in the complex k_z-plane, Eq. (54) and Fig. 4. Then, the energy shift (58) is easily analyzed asymptotically as well as numerically. For a ground-state atom, regardless of whether in the retarded or non-retarded regime, we find that the leading-order correction to the interaction of an atom with an unlayered interface is proportional to L/Z. The asymptotic series are given by (67) and (72) and provide a reasonable estimate of the influence of the single dielectric layer on the standard half-space result, Fig. 7. In the opposite case of a very thick layer, i.e. Z/L ≪ 1, we find that the result is well approximated by that for a dielectric half-space [9]. For excited atoms we find that the interaction between an atom and the layered dielectric (61) is subject to resonances that occur between the wavelength of the dominant atomic transition λ_ji and the thickness of the layer L, Sec. IV D. In particular, the interaction between an atom and the slab can be strongly suppressed in the retarded regime, cf. Fig. 11, whenever the optical thickness of the slab τ is equal to a half-integer multiple of the wavelength of the dominant atomic transition λ_ji. The existence of resonance effects suggests a physical picture of the excited atom as a radiating dipole; the resonance and anti-resonance correspond to constructive and destructive interference, respectively. We have also provided reasonable approximations in the non-retarded (75) and retarded (76) regimes that can be used to quickly estimate the magnitude of the resonant interaction between an atom and a layered dielectric.

[22] The essential idea is to spot that, since the integrand is strongly damped by the exponential, most of the contributions to the integral will come from small values of k. Thus, it is permissible to Taylor-expand the remaining part of the integrand about k = 0. For a more rigorous treatment see [21].
High Rates of Asymptomatic, Sub-microscopic Plasmodium vivax Infection and Disappearing Plasmodium falciparum Malaria in an Area of Low Transmission in Solomon Islands Introduction Solomon Islands is intensifying national efforts to achieve malaria elimination. A long history of indoor spraying with residual insecticides, combined recently with distribution of long lasting insecticidal nets and artemether-lumefantrine therapy, has been implemented in Solomon Islands. The impact of these interventions on local endemicity of Plasmodium spp. is unknown. Methods In 2012, a cross-sectional survey of 3501 residents of all ages was conducted in Ngella, Central Islands Province, Solomon Islands. Prevalence of Plasmodium falciparum, P. vivax, P. ovale and P. malariae was assessed by quantitative PCR (qPCR) and light microscopy (LM). Presence of gametocytes was determined by reverse transcription quantitative PCR (RT-qPCR). Results By qPCR, 468 Plasmodium spp. infections were detected (prevalence = 13.4%; 463 P. vivax, five mixed P. falciparum/P. vivax, no P. ovale or P. malariae) versus 130 by LM (prevalence = 3.7%; 126 P. vivax, three P. falciparum and one P. falciparum/P. vivax). The prevalence of P. vivax infection varied significantly among villages (range 3.0–38.5%, p<0.001) and across age groups (5.3–25.9%, p<0.001). Of 468 P. vivax infections, 72.9% were sub-microscopic, 84.5% afebrile and 60.0% were both sub-microscopic and afebrile. Local residency, low education level of the household head and living in a household with at least one other P. vivax infected individual increased the risk of P. vivax infection. Overall, 23.5% of P. vivax infections had concurrent gametocytaemia. Of all P. vivax positive samples, 29.2% were polyclonal by MS16 and msp1F3 genotyping. All five P. falciparum infections were detected in residents of the same village, carried the same msp2 allele and four were positive for P. falciparum gametocytes. Conclusion P. vivax infection remains endemic in Ngella, with the majority of cases afebrile and below the detection limit of LM. P. falciparum has nearly disappeared, but the risk of re-introductions and outbreaks due to travel to nearby islands with higher malaria endemicity remains. Introduction Nations in the Southwest Pacific have endured considerable malaria transmission, with the highest Plasmodium falciparum burden outside the African continent and possibly the highest Plasmodium vivax transmission in the world [1]. Historically, transmission has ranged from hyperendemic areas in West Papua (Indonesia) and Papua New Guinea [2] to high and moderate transmission in Solomon Islands and Vanuatu [3], which are the southwestern boundary of global malaria transmission. Intensified control over the last 20 years has resulted in remarkable declines in malaria transmission in this region [3,4], reviving the agenda of elimination. However, it is in these countries where outstanding progress towards elimination has been made, that more knowledge is needed if the vision of malaria elimination is to be realized, such as reliable prevalence estimates, role of low-density, asymptomatic carriers and determinants of transmission maintenance. In Solomon Islands, the incidence of clinical malaria cases diagnosed by light microscopy (LM) dropped by 90% from 442/1000 population in 1992 [5] to 44/1000 population in 2012 [6]. These drops in incidence are similar to those achieved by the Malaria Eradication Program in Solomon Islands (1970)(1971)(1972)(1973)(1974)(1975) [7]. 
National statistics based on passive surveillance indicate that 65% of clinical malaria cases in 2012 were attributable to P. falciparum, 33% to P. vivax and 2% to mixed P. falciparum/P. vivax. Conversely, active case detection surveys indicate that P. vivax is the predominant species in the general population [6]. Current malaria transmission appears to be focal, ranging from moderate to high levels in Honiara City (96/1000) and Guadalcanal (64/1000) to very low in Temotu (10.8/1000) and Isabel provinces (1.2/1000). Temotu and Isabel are the only two provinces in which pilot elimination agenda has been proposed to be actively pursued, having resulted in more intensive control activities and interventions including stratification, active case detection, and the earlier roll out of control activities (e.g. rapid diagnostic tests, RDTs and indoor residual spraying) than the rest of the country [3]. These provinces are also the only areas of Solomon Islands with recent surveys in which both LM and PCR-based diagnoses of Plasmodium spp. infections were performed [8,9]. In 2008, a parasite prevalence of 2.7% by LM was found in Temotu, with P. vivax accounting for 82.5% of infections. Only 5.5% of these infections were associated with febrile illness. Among a subset of 1,748 samples, which included LM positive, febrile and 10% of LM negative participants, an additional 63 P. falciparum, 23 P. vivax and 10 mixed P. falciparum/P. vivax infections were detected by PCR, indicating a 6.5% prevalence of sub-microscopic infections. Even lower levels of infection were reported in Isabel in 2009: 1 of 8,554 participants had a LM-detectable P. falciparum infection (0.01%). In a random subset of 2001 participants, PCR identified an additional 13 (0.55%) P. vivax infections. PCR consistently detects at least twice as many infections as LM [10]. Numerous studies have confirmed that sub-microscopic infections are a common feature of malaria endemic areas, spanning all age groups and involving both P. falciparum and P. vivax [11][12][13]. Although these sub-microscopic infections are rarely associated with febrile illness, they have been shown to be efficient gametocyte producers [14][15][16][17][18][19] and thus constitute a source of ongoing transmission [10]. Given the lack of data from other areas of Solomon Islands, it is currently unknown whether the pattern of asymptomatic, low-density infection carriage identified in Temotu and Isabel [8,9] is unique to these elimination provinces. In addition, whereas these earlier surveys detected a large burden of sub-microscopic infections, they did not determine if these infections were also gametocytaemic and therefore did not assess their potential contribution to transmission. Therefore, we conducted in May-June 2012 a household-based, cross-sectional survey in Ngella, Central Islands Province to determine how common low-density, asymptomatic infections are in communities where transmission is mesoendemic and whether these infections are gametocyte producers and hence, potential contributors to local transmission. This survey is the first epidemiological description of malaria in Ngella since the 1970-1975 Malaria Eradication Program [7] and the only one in Solomon Islands to employ highly sensitive molecular diagnosis for the detection of both blood-stage parasites and gametocytes. 
Ethics statement This study was approved by The Walter and Eliza Hall Institute Human Research Ethics Committee (HREC number 12/01) and the Solomon Islands National Health Research Ethics Committee (HRC12/022). The informed consent process recognized the community and cultural values of Solomon Islands. Following consultation with and approval by community leaders, community meetings were held to explain the aims, risks and potential benefits of the study. Individual informed consent was obtained from all participants or the parent or legal guardian of children<18 years of age. At the point of collection, all samples were de-identified. Study site Ngella, previously known as the Florida Islands, consists of 3 islands, Anchor, Big Ngella and Small Ngella, located approximately 27 miles north of Guadalcanal and 50 miles southwest of Malaita (Fig 1). Along with Tulaghi, Savo, Russel and Buenavista Islands it forms part of the Central Islands Province (Fig 1). Despite their proximity, the three islands of Ngella have diverse geographical characteristics: Anchor Island is characterized by less dense rainforest and sandier soil. Big Ngella is heavily forested, although commercial deforestation is common, and smaller villages are encountered in the Bay area around Tulagi, the provincial capital. The more remote northern villages of Big and Small Ngella and those on the southern coast are larger. The communities of the Utuha Channel lay in an extensive mangrove system and are smaller in size. There is minimal seasonal variation in temperature and despite a northwesterly monsoon from November-April, the distinction between wet and dry season is not pronounced. The most recent census estimates 26,051 inhabitants (approximately 60% of these reside in Ngella), 49% females and a median age of 19.9 years [20]. There is significant migration between Ngella and other malaria endemic areas, in particular Honiara (Guadalcanal) and Malaita provinces. These provinces are well connected to Ngella by a popular ferry service and numerous private, unscheduled motorized boat trips. The Ngella population is serviced by a hospital in Tulagi, six rural health sub-centres and ten nurse aid posts. National malaria statistics describe Ngella as mesoendemic, with a reported Annual Parasite Index [21] of 46.1/1000 in 2012, P. falciparum being the main cause of malaria cases [6]. Overall API for Solomon Islands indicates that there were two transmission peaks in 2012 for the months of February and October. As elsewhere in the country, long lasting insecticidal nets and indoor residual spraying are the mainstay of malaria control in Ngella. Cases are diagnosed by LM or RDT and treatment with artemether-lumefantrine has been introduced nationally in 2008. The last malaria epidemiological report of Ngella [7] described it as 'the most malarious group in all Solomon Islands" and the "most difficult from which to clear malaria". Malariometric surveys preceding the Malaria Eradication Program (March 1965-January 1970, unpublished World Health Organisation Field Reports (reviewed in [8]) identified a combined parasite rate of 69.6% and a spleen rate of 69.3% in the 2-9 years age group. In the same surveys, villages on the North coast had spleen rates in the 80% range and qualified for the hyperendemic classification [7], whereas the villages in the Bay area and South coast were noted to have had spleen rates in the 30-50% range [7]. Coast and Channel). 
The survey included 3501 individuals of all ages 6 months residing in 874 households in 19 randomly selected communities. The households were enumerated and geo-positioned and demographic information of the male and female heads of the household collected. Enumeration, but not geopositioning, was achieved for the households in the villages of the South Coast. The timing of the survey was approximately 4 weeks after the peak of the wet season. Study population and blood sample collection Following consent and enrolment from each participant, a short clinical assessment was conducted (including tympanic temperature, history of fever during the previous 48 hours, history of malaria in the last 2 weeks and spleen size in children 2-9 years old) and demographic information collected (age, sex, residency status and history of travel, bed net use). A febrile participant was defined as having a tympanic temperature 38.0°C and/or a history of febrile illness in the past 48 hours. Study participants who reported being ill at the time of the survey were diagnosed by RDT (Access Bio, CareStart, USA) and treated if positive with artemether/ lumefantrine, as per the national treatment guidelines. Where available the participant's health records were checked for recent anti-malarial treatment and applicable information recorded. A 250 μL finger prick blood sample was collected into EDTA-Microtainer tubes (Becton Dickinson, NJ, USA). 50 μL were immediately stabilized in 250 μL RNAProtect (Qiagen, Germany) for RNA studies and stored at ice pack cooling conditions until their transport to a centralized field laboratory. Thick and thin films were prepared for determination of microscopic malaria infection. Haemoglobin measurement was performed with Hemocue HB 301 analyzer. A measurement below 11g/dL was classified as anaemia. Upon return to the centralized laboratory, the RNAProtect fractions were frozen immediately. The remaining 200 μL of whole blood was separated into red blood cells pellets and plasma and promptly frozen. LM detection of Plasmodium spp. parasites Giemsa stained blood films were examined under x1000 power. One hundred fields of view were examined before calling a sample "no parasites seen". When a parasite was observed, counts of both white cells and parasites were commenced, and continued until 300 white cells had been counted. The parasite count was then calculated, based on an assumed white cell count of 8,000 white cells/ μL. However if no further parasites were observed, the process of scanning to a total of 100 fields of view was completed. When only 1 parasite had been observed in 100 fields of view, an assumed count at the notional lower limit of detection of 10 parasites/μL was applied, based on a further assumption of an average of 8 white cells per field of view. All slides were stained within 24 hours at the regional malaria laboratory and read by experienced microscopists, all of whom had completed WHO quality assurance courses. All LM positive slides as well as the slides from all PCR positive / LM negative plus 10% of LM & PCR negative slides were re-read by an Australian Level 1 expert microscopist that was blinded to the PCR results. None of the 10% LM negative slides were found to be positive by the expert microscopist. In case of discrepancies between the two microscopy reads, the read of the expert microscopist was considered final. 
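For reference, the thick-film arithmetic described above is simply a rescaling of the parasite-to-leukocyte ratio to the assumed white-cell density; a minimal sketch:

```python
def parasites_per_ul(parasites, wbc_counted, assumed_wbc_per_ul=8000):
    """Thick-film density: parasites counted against a known number of white cells,
    scaled to an assumed 8,000 white cells per microlitre of blood."""
    return parasites * assumed_wbc_per_ul / wbc_counted

# usual case: parasites counted against 300 white cells
print(parasites_per_ul(12, 300))      # 320 parasites/uL

# notional lower limit: a single parasite in 100 fields of view, assuming
# an average of 8 white cells per field (i.e. ~800 white cells examined)
print(parasites_per_ul(1, 100 * 8))   # 10 parasites/uL
```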
DNA and RNA extraction Genomic DNA (gDNA) was isolated from red blood cell pellets (100 μL, corresponding to 200 μL whole blood) using FavorPrep 96-well Genomic DNA kit (Favorgen, Taiwan). DNA was eluted in 200 μL elution buffer and stored at -20°C. The RNA isolation procedure from whole blood in RNAProtect cell reagent has been described elsewhere [22], the only exception being an increased elution volume of 60μL of RNase-free water. Due to problems with storage of RNAProtect samples in the field, the quality of the RNA was tested using an RT-qPCR for the human beta globin transcript [23]. This revealed a 10x lower total human RNA concentration than in samples from a comparable study in Papua New Guinea [24]. RNA samples were therefore concentrated 10-fold using a CentriVap Concentrator (Labconco, United States) before testing for the presence of gametocytes. Molecular detection of Plasmodium spp. parasites All 3501 DNA samples were first screened using a genus-specific qPCR targeting a conserved region of the 18S rRNA gene [22]. Singleplex species-specific P. falciparum and P. vivax Taqman qPCRs and a duplex P. malariae/P. ovale qPCR, targeting species-specific regions of 18S rRNA gene, were used to identify species as described previously [22,25]. Prevalence values reported in this study include only those infections confirmed by the species-specific qPCR Taqman assays. Each detection experiment carried a dilution series of plasmids containing the target sequence of each PCR (10 4 , 10 3 , 10 2 , 10 1 , 5, 10 0 copies/μL), in duplicate, and were used to determine standard curves and therefore estimate parasite densities (reported as 18S rRNA gene copy numbers/μL). All assays were run in 384-well plate format on the Roche LightCy-cler480 platform. Those infections detected by qPCR, but not by LM, were defined as sub-microscopic infections. P. falciparum and P. vivax samples that were positive by species-specific Taqman qPCR were examined for presence of gametocytes using RT-qPCRs targeting the pfs25 and pvs25 orthologues, which are expressed only in mature gametocytes, as described previously [22]. All gametocyte assays were also run in 384-well plate format on the Roche LightCycler480 platform. Plasmodium spp. genetic diversity All samples that were P. falciparum or P. vivax positive were genotyped to determine the multiplicity of infection (MOI) using highly diverse size-polymorphic molecular markers msp2 for P. falciparum and msp1F3 and MS16 for P. vivax, respectively. PCR and capillary electrophoresis were performed with slight modifications to the published protocols [26,27]. Genotyping data was analyzed as described previously [26,27]. Statistical analysis Study data were collected and managed using REDCap electronic data capture tools hosted at the Walter and Eliza Hall Institute [28]. Analyses were done using the STATA12 statistical software package (College Station, TX). Differences in participant characteristics at enrolment and prevalence differences among geographical areas and groups of individuals were assessed using Chi-square (χ 2 ) or Fisher's exact tests. Differences in median ages and median household size were explored with quantile regression. Univariable and multivariable logistic regression were used for associations of P. vivax infection and exposure variables. Associations with P. vivax parasite density were investigated in simple and multivariable linear regression models on only those subjects who tested positive to qPCR diagnosis. 
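Copy numbers were read off the plasmid standard curves in the usual way, i.e. via a linear fit of quantification cycle (Cq) against log10 of the standard's copy number. A minimal sketch of that conversion is given below; the Cq values shown are hypothetical placeholders, not values from the actual LightCycler 480 runs.

```python
import numpy as np

# Hypothetical duplicate Cq values for the plasmid dilution series (10^4 ... 1 copies/uL)
std_copies = np.array([1e4, 1e4, 1e3, 1e3, 1e2, 1e2, 1e1, 1e1, 5, 5, 1, 1])
std_cq     = np.array([24.1, 24.3, 27.5, 27.6, 30.9, 31.1, 34.2, 34.4, 35.4, 35.6, 37.7, 37.9])

# Standard curve: Cq = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1   # amplification efficiency implied by the slope

def copies_per_ul(cq):
    """Interpolate a sample's 18S rRNA gene copy number per uL from its Cq value."""
    return 10 ** ((cq - intercept) / slope)

print(round(efficiency, 2), round(copies_per_ul(33.0), 1))
```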
Poisson regression analyses were utilized to explore associations between multiplicity of infection and exposure variables. Study population A total of 3501 Ngella residents across 874 households were surveyed. The gender and age profiles of the participants were representative of the Central Islands Province population, with 52.5% females and a predominance of younger individuals (median age 18 years). The age distribution was as follows: <2 years, 4.7%; 2-4 years, 10.6%; 5-9 years, 14.8%; 10-14 years, 14.3%; 15-19 years, 7.5%; 20-39 years, 27.0%;>40 years, 21.3%. The majority of participants (95.2%) resided in the village for 2 months. Of 447 participants who spent at least one night outside their village of residence in the last month, 69.1% travelled within Central Islands Province. 73.3% of participants reported having slept under a long lasting insecticidal net the night before and 56.4% owned a bednet for longer than 24 months. Of all households, 84.5% of households reported to have been sprayed with insecticide, and 70.4% of household heads spoke English. Of all participants, 687 (19.4%) had a history of fever in the previous two days, 685 (19.7%) reported feeling unwell/sick at the time of survey and 23.3% had a haemoglobin measurement <11g/dL. No participant aged 2-9 years of age was found to have an enlarged spleen. A detailed description of demographic and clinical characteristics by geographical region is given in S1 Table. Prevalence of Plasmodium spp. infection by LM Overall, 130 individuals (3.7%) had Plasmodium spp. parasites detectable by LM: 126 P. vivax, three P. falciparum mono-infections and one P. vivax/P. falciparum mixed infection. No infections with P. malariae or P. ovale were observed. The prevalence of P. vivax infection varied significantly by geographical region (p<0.001) (Fig 1B) and was lowest in the South Coast and Anchor regions (0.8%), followed by Channel (3.0%) and Bay (3.5%) and North Coast (11.7%). P. vivax prevalence showed strong age trends and peaked in adolescents 10-15 years of age (8.6%, p<0.001) (Fig 2A). Prevalence of P. vivax by qPCR Overall, 468 participants (13.4%) had qPCR-detectable infections: 463 were P. vivax mono-infections and five were mixed P. falciparum/P. vivax infections (0.14%). The 126 P. vivax infections and the one mixed infection by LM were confirmed by qPCR. Overall, 72.9% of P. vivax infections were sub-microscopic. In two of the catchments, Kelarekeha and Vuturua (Fig 1B), only sub-microscopic infections were observed among 59 and 156 individuals surveyed, respectively. P. vivax qPCR prevalence displayed spatial heterogeneity among the five geographical areas and the 19 catchments, varying from 3.0-38.5%. Prevalence by qPCR was highest in villages on the North Coast (25.0-38.5%) and lowest on Anchor Island (3.0-5.5%) (Fig 1B). Of 874 households sampled across Ngella, 559 had no infected members, 210 had only one infected member and 105 had two or more infected members. There was no association between household size and probability of being infected (p = 0.550). Not taking into account any other variables, there was an increased risk of being infected if at least one other member of the household was infected (OR = 2.59, p<0.001, CI 95 [2.13, 3.16]). P. vivax prevalence was age-dependent (p<0.001), lowest in<2 years (3.0%, n = 166) and peaking in the 10-14 year old age group (24.3%, n = 499) (Fig 2A). Prevalence of infection did not differ significantly between male and female participants (p>0.650). 
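The household-clustering result quoted above (OR = 2.59) is an unadjusted odds ratio; as a sketch, such an estimate and its Wald confidence interval can be obtained from a 2x2 cross-tabulation as below. The counts used here are purely illustrative, not the survey's actual table.

```python
import numpy as np
from scipy import stats

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """Odds ratio and Wald confidence interval for a 2x2 table:
         a = exposed & infected,    b = exposed & uninfected,
         c = unexposed & infected,  d = unexposed & uninfected."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.exp(np.log(or_) - z * se_log), np.exp(np.log(or_) + z * se_log)
    return or_, (lo, hi)

# purely illustrative counts (not the survey's actual cross-tabulation)
print(odds_ratio_ci(a=220, b=780, c=248, d=2253))
```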
Participants who were residents (lived in the village 2 months) were more frequently infected with P. vivax than non-residents (infected residents: 13.7% vs. infected non-residents 8.3%, p = 0.045). Once residency status was taken into account, recent travel (defined as spending at least 1 night away from the village of residence in the last month) was not associated with a difference in infection risk (p = 0.300). Those living in a household where the household head speaks English, a proxy for education level, were infected less frequently (12.0%) than those living in a household where the head does not speak English (18.5%, p<0.001). There was a moderate increase in risk of P. vivax infection in those who reported not having slept under a net the night before compared to net users (users: 12.6% vs. non-users: 15.6%, p = 0.022). The majority of P. vivax-infected individuals (84.6%) neither reported febrile symptoms (defined as history of fever or measured fever at survey) nor feeling ill (85.4%). Six of the 26 participants that had a measured fever at the time of the survey (tympanic temperature 38°C, 18.8%) were infected with P. vivax. Compared to uninfected participants, those with a P. vivax infection were less likely to report having had febrile symptoms in the previous two days (uninfected 20.0% vs. infected 15.2%, p = 0.014) or report feeling unwell at the time of survey (uninfected 20.5% vs. infected 14.6%, p = 0.003). A total of 280 P. vivax infections (60.0%) were both asymptomatic and sub-microscopic. There were no significant differences in the proportion of asymptomatic P. vivax infections between different age groups (p> 0.200) and regions (p> 0.240). Of 468 P. vivax-infected individuals, 19.6% had a haemoglobin<11 g/dL compared to 23.9% of uninfected individuals (p = 0.045). Multivariable associations with P. vivax infection Age was the strongest independent association with P. vivax. infection, peaking in 10-14 year olds ( p<0.001). The reference group in this analysis was composed of children aged <2 years. Being a local resident, region of residency and living in a household with at least 1 additional infected member were all associated with an excess risk of infection. Significant protective factors included an English-speaking household head and reporting feeling unwell at the time of the survey. Detailed results are given in Table 1. Age trends of P. vivax infections. A: P. vivax blood stage parasite prevalence by LM and qPCR (error bars represent binomial 95% confidence intervals, CI 95 ). B: P. vivax parasite densities by qPCR (18S DNA copies/l) and LM counts (parasites/l) (error bars represent 95% confidence intervals, CI 95 ). C: Gametocyte prevalence (in the total sampled population) and positivity (only among P. vivax infected) (error bars represent 95% confidence intervals, CI 95 ). D: P. vivax multiplicity of infection (MOI) of blood stage parasites (error bars represent 95% confidence intervals, CI 95 ). 2B). Of 130 LM-detectable infections, 55% (n = 70), were at the assumed limit of practical detection, i.e. approximately 10 parasites/μL. Similarly, P. vivax parasite densities by qPCR were also low (estimated geometric mean 18S DNA copy numbers of 4.6/μL, CI 95 [3.9, 5.4]) (Fig 2B). In LM and qPCR positive infections, parasite density by LM (parasites/μL) and by qPCR (18S DNA copies/μL) were correlated (n = 130, R 2 = 0.76, p<0.001). 
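The multivariable associations summarised above, reported in Table 1 as adjusted odds ratios, correspond to a standard logistic model. A minimal sketch using statsmodels is shown below; the file and column names are illustrative placeholders, not the study's actual REDCap export.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# one row per participant; variable names are hypothetical stand-ins for those described above
df = pd.read_csv("ngella_survey.csv")

model = smf.logit(
    "pv_qpcr_positive ~ C(age_group) + resident + C(region) "
    "+ other_hh_member_infected + head_speaks_english + felt_unwell",
    data=df,
).fit()

# adjusted odds ratios with 95% confidence intervals
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["aOR", "2.5%", "97.5%"]
print(or_table.round(2))
```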
Factors predicting parasite densities by qPCR included age, history of fever in the preceding two days, anaemia (haemoglobin <11 g/dL) and geographical region of sampling. Parasite densities were highest among individuals aged <2 years and decreased in older age groups (p<0.001). Detailed results of associations with P. vivax density are given in Table 2.

P. vivax genetic diversity

Genotyping results for markers msp1F3 and/or MS16 were obtained from 349 P. vivax-positive samples. Both markers were highly diverse; 15 msp1F3 alleles and 43 MS16 alleles were detected (Fig 3), resulting in an expected heterozygosity (HE) of 0.834 for msp1F3 and 0.937 for MS16. Out of 349 samples, 102 (29.2%) carried multi-clonal infections. MOI (combined msp1F3 and MS16), defined as the number of concurrent infections per individual, ranged from 1 to 4; mean MOI was 1.36. Mean MOI did not differ significantly among age groups (Fig 2D). Individuals living in a household with at least one other infected individual had a moderately higher mean MOI (1.56) than those who were the sole infected person of the household (mean MOI = 1.42), but evidence for an association was weak (p = 0.086). Mean MOI was positively associated with qPCR parasite density (p = 0.007).

P. falciparum infection

By qPCR, P. falciparum was detected in 5 individuals, all of whom were co-infected with P. vivax. Of these, only four infections were detectable by LM, one as a co-infection with P. vivax and three as P. falciparum mono-infections. In two of the LM mono-infections and the mixed infection, only P. falciparum gametocytes were observed on the blood smear. The range of parasite densities by qPCR was 7.35-364 copy numbers/μL, and in LM-positive samples the densities ranged from 20 to 1430 parasites/μL. The low number of P. falciparum infections precluded analyses of densities for this species. The presence of gametocytes in the four LM-positive infections was confirmed by pfs25 RT-qPCR. No gametocytes were detected in the one sub-microscopic P. falciparum infection by RT-qPCR. All five P. falciparum-infected individuals resided in the same village (Halavo, circled in Fig 1B) and ranged in age from 3 to 60 years. The oldest had a history of fever in the preceding two days. Two of the carriers had a haemoglobin measurement <11 g/dL. None of the individuals reported having slept outside the village in the previous month. All five P. falciparum infections were monoclonal and carried the same msp2 genotype of the Fc27 subtype.

Discussion

Solomon Islands has achieved a remarkable 90% reduction in malaria incidence over the last two decades as a result of scaled-up malaria control interventions [6] and is now intensifying its efforts towards malaria elimination [3]. The present study is the first to undertake sensitive molecular diagnosis at this scale in Solomon Islands and the first large epidemiological description of malaria in Ngella since the Malaria Eradication Program (1970-1975). Our findings illustrate a striking distinction between the epidemiology of P. falciparum and P. vivax in Ngella. High prevalence (13.4% by qPCR) and genetic diversity, as well as an increased risk for local residents and evidence of potential within-household transmission, indicate considerable levels of endemic P. vivax transmission. There was significant variation of P.
vivax transmission in different regions of Ngella, with the highest prevalence found on the remote North Coast (25.0-38.5%), which prior to the Malaria Eradication spraying operations was described as holoendemic and as having an environment highly favourable to the mosquito [7]. The lowest rates of P. vivax infection were observed on Anchor (3.9% by qPCR), where 15 years ago a community-based initiative eliminated a substantial number of breeding sites through environmental management [Lodo, personal communication]. It is therefore likely that the presence of suitable larval habitats and vector abundance may be key factors influencing P. vivax transmission on Ngella. It remains unclear whether autochthonous P. falciparum transmission remains in Ngella or whether parasites are being re-introduced by incoming travelers or returning residents from areas with a higher P. falciparum burden, such as Guadalcanal or Malaita provinces. In this survey, only five P. falciparum cases were identified, all in the village of Halavo (Fig 1B). As all five infections carried the same msp2 allele and four were gametocytaemic, a small local outbreak following recent re-introduction seems more likely. This is reminiscent of the situation in epidemic-prone areas of the Papua New Guinea highlands, where a clonal P. falciparum epidemic on a background of endemic, low-level P. vivax transmission has been reported [29]. P. falciparum populations in neighbouring Guadalcanal province were in fact found to be of low genetic diversity [30,31]. Based on case statistics at the local health facilities, 30% of malaria cases detected in Central Islands Province are caused by P. falciparum [6], indicating that either importation of P. falciparum parasites is common or that low levels of endemic P. falciparum transmission may remain in some parts of Ngella. Further studies are therefore required to ascertain the absence of endemic P. falciparum transmission in this area of Solomon Islands and whether the cases found are the result of inter-island travel. The current situation of malaria in Ngella (i.e. 3.7% prevalence by LM, clear P. vivax dominance and absence of enlarged spleens in children 2-9 years of age) is a consequence of the dramatic reduction in malaria transmission achieved throughout Solomon Islands in the last 20 years [6]. This change is similar to that encountered at the end of 1974, after approximately 5 years of twice-yearly Malaria Eradication Program spraying. Then, prevalence in 2-9 year olds had dropped from pre-spraying rates of 60% to 1.4% and P. vivax became predominant [7]. Similar shifts in malaria epidemiology were also observed in the elimination provinces of Temotu [9] and Isabel [8]. In Temotu, P. falciparum accounted for 17.5% of infections in a population survey conducted in 2008 [9], but by 2012 the national program's surveillance system reported only P. vivax cases from both Temotu and Isabel [6]. This shift in the relative importance of P. falciparum and P. vivax is not unique to Solomon Islands and has been reported after periods of sustained malaria control in other settings where P. falciparum and P. vivax occur sympatrically, such as the Amazon [21,32], Central America [4] and Thailand [33]. As in other endemic settings [12,13,34], P. vivax infections were of low density and PCR found three times more infections than LM. The majority of infections were not accompanied by febrile symptoms or anaemia. On the contrary, participants who reported feeling unwell or febrile were less likely to be infected with P.
vivax. While this significantly lower level of febrile symptoms in P. vivax carriers is likely to be an artifact of the large sample size, it does indicate that P. vivax is not a common cause of fever in Ngella. Whereas asymptomatic P. vivax infections have been commonly found in areas of high transmission [12,35,36], the advent of molecular diagnosis has revealed that even at low transmission the majority of infections in cross-sectional surveys are symptomless [11,37,38], including in the previous surveys in Temotu [9] and Isabel [8], where 97.1% and 92.9% of P. vivax infections were asymptomatic, respectively. Both the presence of P. vivax infections and their level of parasitaemia were found to be strongly age-dependent, albeit in different ways: while P. vivax parasite densities decreased with age, prevalence of P. vivax infections rose throughout childhood and only started dropping in adolescents and adults. These contrasting patterns are most likely due to local mosquito biting behavior and acquisition of immunity. Anopheles farauti, the only coastal malaria vector in Solomon Islands, bites predominantly in the early evening (i.e. before 10 pm) and outdoors [39], when small children tend to be indoors but older ones are still active. The increase in prevalence during childhood is thus likely to represent an increase in exposure to infective bites. At all levels of transmission, immunity to P. vivax tends to be more rapidly acquired than that to P. falciparum [40]. Thus, the strong reduction in prevalence and parasite densities with increasing age in Ngella indicates that P. vivax transmission there remains sufficiently high for relatively rapid acquisition of clinical and anti-parasite immunity. Despite very low overall parasite densities, gametocytes were detected in almost a quarter of all P. vivax infections (in 41.5% of LM-positive infections and 16.6% of sub-microscopic infections). Given issues with RNA quality, it is likely that the gametocytaemic reservoir in Ngella was underestimated in our survey and that the true prevalence of gametocytes is higher, especially in the sub-microscopic group. Given the rapid and ongoing production of P. vivax gametocytes, most, if not all, blood stage infections could harbor concurrent gametocytes [41]. Whilst sub-patent P. falciparum infections have been shown to infect up to 43.5% of mosquitoes [17,19], the role of sub-microscopic P. vivax gametocyte carriage in sustaining transmission is poorly understood. The capacity of sub-microscopic P. vivax infections to infect mosquitoes has been established in studies from Thailand [18,42,43], Sri Lanka [44], Peru [45] and malaria therapy settings [14,15], but at varying proportions and with weak associations with gametocyte density. Although sub-patent infections may infect fewer mosquitoes, their higher prevalence in endemic settings may mean that the net transmission potential of low-density infections is higher. In Ngella, asymptomatic, sub-microscopic infections of adolescents and adults may thus be an important source of local transmission. These considerations may constitute a significant challenge to the success of the Solomon Islands malaria control program. The national malaria surveillance system, based on passive case detection and irregular mass blood surveys, only employs traditional microscopy diagnosis. This diagnostic test may not only underestimate the true burden of malaria in the Solomon Islands but also lack the means to detect and attack a substantial part of the P.
vivax transmission reservoir. Despite outstanding gains in the last two decades, the traditional tools of the Solomon Islands malaria control program may therefore have reached the limits of their effectiveness in the face of a large and silent reservoir of P. vivax infection. Our observation that people living in a household with another P. vivax-infected individual are themselves at increased risk of infection is a noteworthy finding. Not only does it indicate likely within-household transmission, it also highlights that reactive case detection strategies [46-48] and focal mass drug administration [34] might be appropriately applied in Solomon Islands. In the Southwest Pacific, MDA campaigns that included primaquine to target the undetectable liver stage parasites have previously been successful in interrupting P. vivax transmission on Aneytium Island in Vanuatu [49] and Nissan Island in Papua New Guinea [50]. Combining automated registration of observed cases and rapid identification of transmission foci (e.g. in a spatial decision support system) [51] with reactive mass screen and treat (MSAT) or with focal, household-based mass drug administration [52,53] should therefore be evaluated as possible additional malaria elimination tools in Solomon Islands and neighbouring Vanuatu. All interventions will be most efficacious if they include routine administration of primaquine to all P. vivax-infected individuals. This will, however, require addressing the challenges posed by potential primaquine toxicity in G6PD-deficient individuals.

Supporting Information

S1 Checklist. STROBE checklist. (DOC)

S1 Table. Demographic and clinical characteristics of the study population, by geographical area. (PDF)
2016-05-04T20:20:58.661Z
2015-05-01T00:00:00.000
{ "year": 2015, "sha1": "4c258c4e2c022a509a2227315c45a21ceb513e63", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosntds/article/file?id=10.1371/journal.pntd.0003758&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6773333e03b5be3064f793399140874127adcfe8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
18641439
pes2o/s2orc
v3-fos-license
p-Adic Haar Multiresolution Analysis and Pseudo-Differential Operators

The notion of {\em $p$-adic multiresolution analysis (MRA)} is introduced. We discuss a ``natural'' refinement equation whose solution (a refinable function) is the characteristic function of the unit disc. This equation reflects the fact that the characteristic function of the unit disc is a sum of $p$ characteristic functions of mutually disjoint discs of radius $p^{-1}$. This refinement equation generates a MRA. The case $p=2$ is studied in detail. Our MRA is a 2-adic analog of the real Haar MRA. But in contrast to the real setting, the refinable function generating our Haar MRA is 1-periodic, which never holds for real refinable functions. This fact implies that there exist infinitely many different 2-adic orthonormal wavelet bases in ${\cL}^2(\bQ_2)$ generated by the same Haar MRA. All of these bases are described. We also construct multidimensional 2-adic Haar orthonormal bases for ${\cL}^2(\bQ_2^n)$ by means of the tensor product of one-dimensional MRAs. A criterion for a multidimensional $p$-adic wavelet to be an eigenfunction for a pseudo-differential operator is derived. We also prove that these wavelets are eigenfunctions of the Taibleson multidimensional fractional operator. These facts create the necessary prerequisites for intensive use of our bases in applications.

Introduction

According to the well-known Ostrovsky theorem, any nontrivial valuation on the field Q is equivalent either to the real valuation | · | or to one of the p-adic valuations | · |_p. This p-adic norm | · |_p is defined as follows: if an arbitrary rational number x ≠ 0 is represented as x = p^γ m/n, where γ = γ(x) ∈ Z and the integers m, n are not divisible by p, then

(1.1) |x|_p = p^{−γ}.

The norm | · |_p satisfies the strong triangle inequality |x + y|_p ≤ max(|x|_p, |y|_p) and is non-Archimedean. The field Q_p of p-adic numbers is defined as the completion of the field of rational numbers Q with respect to the norm | · |_p. Thus there are two universes with equal rights: the real universe and the p-adic one. The latter has specific and unusual properties. Nevertheless, there are a lot of papers where different applications of p-adic analysis to physical problems, stochastics, cognitive sciences and psychology are studied [6]-[10], [13]-[19], [35]-[37] (see also the references therein). In view of the Ostrovsky theorem, such investigations are not only of great interest in themselves, but lead to applications and a better understanding of similar problems in usual mathematical physics. Since there exists a p-adic analysis connected with the mapping of Q_p into Q_p and an analysis connected with the mapping of Q_p into the field of complex numbers C, one considers two corresponding types of p-adic physical models. For the p-adic analysis related to the mapping Q_p → C, the operation of differentiation is not defined, and as a result, a large number of models connected with p-adic differential equations use pseudo-differential operators (see the above-mentioned papers and books). In particular, fractional operators D^α are extensively used in applications. The very important fact that the eigenfunctions of a one-dimensional fractional operator D^α form an orthonormal basis for L^2(Q_p) was observed by V.S. Vladimirov, I.V. Volovich, E.I. Zelenov (see [35]). S. V. Kozyrev [20] found an orthonormal compactly supported p-adic wavelet basis for L^2(Q_p):

(1.2) θ_{k;ja}(x) = p^{−j/2} χ_p(p^{−1} k (p^j x − a)) Ω(|p^j x − a|_p), x ∈ Q_p, k = 1, 2, . . .
, p − 1, j ∈ Z, a ∈ I p = Q p /Z p . Wavelets (1.2) are also eigenfunctions of the one-dimensional fractional operator D α : D α θ k;ja (x) = p α(1−j) θ k;ja (x), x ∈ Q p , α ∈ C. Some wavelet-type systems generalizing (1.2) were suggested by S. V. Kozyrev [21], [22], A. Yu. Khrennikov and S. V. Kozyrev [16], [17], J. J. Benedetto and R. L. Benedetto [8], R. L. Benedetto [9]. Multidimensional p-adic bases obtained by direct multiplying out the Kozyrev's wavelets (1.2) were considered in [3]. The authors of [18] found the following new type of p-adic wavelet basis: where m ≥ 1 is a fixed positive integer; s = p −m s 0 + s 1 p + · · · + s m−1 p m−1 , s r = 0, 1, . . . , p − 1, r = 0, 1, . . . , m − 1, s 0 = 0; j ∈ Z, a ∈ I p . It turned out that wavelets (1.2) and their generalizations are eigenfunctions of p-adic pseudo-differential operators [3]- [5], [16], [17], [18], [20] - [22]. Moreover, a necessary and sufficient conditions for a class of p-adic pseudo-differential operators (2.15) (including fractional operator (2.23)) to have wavelets (1.2) and (1.3) as eigenfunctions was derived in [3], [18]. So, wavelets play an important role for application of p-adic analysis and gives a new powerful technique for solving p-adic problems. Nevertheless, in the cited papers, there was no any attempt to create a theory describing common properties of wavelet bases and giving methods for their finding. The goal of this paper is to start development of such a theory. It's interesting to compare appearing first wavelets in p-adic analysis with the history of the wavelet theory in real analysis. In 1910 Haar [12] constructed an orthogonal basis for L 2 (R) consisting of the dyadic shifts and scales of one piecewise constant function. A lot of mathematicians actively studied Haar basis, different kinds of generalizations were introduced, but during almost the whole century nobody could find another wavelet function (a function whose shifts and scales form an orthogonal basis). Only in early nineties a general scheme for construction of wavelet functions was developed. This scheme is based on the notion of multiresolution analysis (MRA in the sequel) introduced by Y. Meyer and S. Mallat [28], [26], [27]. Smooth compactly supported wavelet functions were found in this way, which has been very important for various engineering applications. In the present paper we introduce MRA in L 2 (Q p ) and study a concrete MRA for p = 2 being an analog of Haar MRA in L 2 (R). The same scheme as in the real setting leads to a Haar basis. It turned out that this Haar basis coincides with Kozyrev's wavelet system (1.2). However, 2-adic Haar MRA is not an identical copy of its real analog. We proved that, in contrast to Haar MRA in L 2 (R), there exist infinity many different Haar orthogonal bases for L 2 (Q 2 ) generated by the same MRA. The paper is organized as follows. In Sec. 2, we recall some facts from the p-adics. The basic results on the theory of the Bruhat-Schwartz distributions are given in Subsec. 2.1 (see [11], [33], [34], [35]). Some facts from the theory of the p-adic space Φ ′ (Q n p ) of Lizorkin distributions [3] are summarized in Subsec. 2.2. In Subsec. 2.3, 2.4, we recall some facts [3] related to the multidimensional pseudo-differential operators (2.15) in the space of Lizorkin distributions Φ ′ (Q n p ). In particular, the multidimensional fractional operators introduced by Taibleson [33, §2], [34,III.4.] are discussed. 
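As a concrete illustration of the p-adic norm |·|_p recalled in (1.1) above, the following short sketch (not part of the paper) computes |x|_p for a rational x and checks the strong triangle inequality.

from fractions import Fraction

def p_adic_norm(x: Fraction, p: int) -> Fraction:
    # |x|_p = p^(-gamma), where x = p^gamma * m/n with p dividing neither m nor n; |0|_p = 0.
    if x == 0:
        return Fraction(0)
    gamma, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        gamma += 1
    while den % p == 0:
        den //= p
        gamma -= 1
    return Fraction(1, p ** gamma) if gamma >= 0 else Fraction(p ** (-gamma))

print(p_adic_norm(Fraction(140, 297), 2))   # 140 = 2^2 * 35 and 297 is odd, so |140/297|_2 = 1/4
x, y = Fraction(3, 4), Fraction(1, 4)
# strong triangle inequality: |x + y|_p <= max(|x|_p, |y|_p)
assert p_adic_norm(x + y, 2) <= max(p_adic_norm(x, 2), p_adic_norm(y, 2))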
The spaces Φ ′ (Q n p ) is a "natural" domain for the class of pseudo-differential operators (2.15). The spaces Φ ′ (Q n p ) are invariant under our pseudo-differential operators. It is appropriate to mention here that the class of our operators includes the pseudo-differential operators studied in [19], [38], [39]. In Sec. 3, a notion of p-adic MRA is introduced (Definition 3.1). In Subsec. 3.2, we discuss refinement equation (3.7): whose solution (a refinable function) φ is the characteristic function Ω |x| p of the unit disc. The conjecture to use the above equation as a refinement equation was proposed in [18]. The above refinement equation is natural and reflects the fact that the characteristic function of the unit disc B 0 = {x : |x| p ≤ 1} is represented as a sum of p characteristic functions of the mutually disjoint discs B −1 (r) = x : x p − r p p ≤ 1 , r = 0, 1, . . . , p − 1 (see (2.7)). In Subsec. 3.3, the 2-adic Haar MRA is constructed. Namely, we proved that the refinable function φ(x) = Ω |x| 2 generates a MRA, which is an analog of the classical Haar MRA. It is shown that a 2-adic analog of the real wavelet function generated by Haar MRA generates an orthonormal basis (3.14) for L 2 (Q 2 ). This basis coincides with Kozyrev's one (1.2) for p = 2. We proved that Kozyrev's basis is not a unique orthonormal wavelet basis generated by 2-adic Haar MRA. In Sec. 5, we study multivariate Haar bases. A general scheme for construction of 2-adic multidimensional separable MRA is described in Subsec. 5.1,. According to this scheme, separable 2-adic Haar wavelets (5.7) in L 2 (Q n 2 ) are constructed in Subsec. 5.2. p-Adic distributions. Here and in what follows, we shall systematically use the notations and the results from [35] and [11,Ch.II]. Let N, Z, C be the sets of positive integers, integers, complex numbers, respectively, N 0 := {0}∪N. The field of p-adic numbers is denoted by Q p . The canonical form of any p-adic number x = 0 is where γ = γ(x) ∈ Z, x j = 0, 1, . . . , p − 1, x 0 = 0, j = 0, 1, . . . . The series is convergent in the p-adic norm (1.1), and one has |x| p = p −γ . By means of representation (2.1), the fractional part {x} p of a number x ∈ Q p is defined as follows The function for every fixed ξ ∈ Q p is an additive character of the field Q p , where {·} p is a fractional part (2.2). The space Q n p := Q p × · · · × Q p consists of points x = (x 1 , . . . , x n ), where x j ∈ Q p , j = 1, 2 . . . , n, n ≥ 2. The p-adic norm on Q n p is where |x j | p is defined by (1.1). Denote by B n γ (a) = {x ∈ Q n p : |x − a| p ≤ p γ } the ball of radius p γ with the center at a point a = (a 1 , . . . , a n ) ∈ Q n p and by S n γ (a) = {x ∈ Q n p : |x − a| p = p γ } = B n γ (a) \ B n γ−1 (a) its boundary (sphere), γ ∈ Z. For a = 0, we set B n γ (0) = B n γ and S n γ (0) = S n γ . For the case n = 1, we will omit the upper index n. It is clear that of radius p γ with the center at a point a j ∈ Q p , j = 1, 2 . . . , n. Any two balls in Q n p either are disjoint or one contains the other. Every point of a ball is its center. There exists the Haar measure dx on Q p . This measure is positive, invariant under the shifts, i.e., d(x + a) = dx, and normalized by |ξ|p≤1 dx = 1. The invariant measure dx on the field Q p is extended to an invariant measure d n x = dx 1 · · · dx n on Q n p in the standard way. A complex-valued function f defined on Q n p is called locally-constant if for any x ∈ Q n p there exists an integer l(x) ∈ Z such that f (x + y) = f (x), y ∈ B n l(x) . 
Let E(Q n p ) and D(Q n p ) be the linear spaces of locally-constant C-valued functions on Q n p and locally-constant C-valued functions with compact supports (so-called test functions), respectively [35, VI.1.,2.]. If ϕ ∈ D(Q n p ), according to Lemma 1 from [35, VI.1.], there exists l ∈ Z, such that ϕ(x + y) = ϕ(x), y ∈ B n l , x ∈ Q n p . The largest of such numbers l = l(ϕ) is called the parameter of constancy of the function ϕ. Let us denote by D l N (Q n p ) the finite-dimensional space of test functions from D(Q n p ) having supports in the ball B n N and with parameters of constancy ≥ l [35, VI.2.]. The following embedding holds: is a complete locally convex vector space. According to [35,VI,(5.2')], any function ϕ ∈ D l N (Q n p ) is represented in the following form where Ω(p −l |x − c ν | p ) are the characteristic functions of the mutually disjoint balls B l (c ν ), and the points c ν = (c ν 1 , . . . c ν n ) ∈ B n N do not depend on ϕ. Denote by D ′ (Q n p ) the set of all linear functionals (p-adic distributions) on {ξ j x j }p ; ξ · x is the scalar product of vectors and χ p (ξ j x j ) are additive characters (2.3). The Fourier transform is a linear isomorphism D(Q n p ) into D(Q n p ). Moreover, according to [ Then for a distribution f ∈ D ′ (Q n p ) the following relation holds [35,VII,(3.3)]: The p-adic Lizorkin spaces. Let us introduce a space of the p-adic Lizorkin test functions (see [3], [4]) can be equipped with the topology of the space D(Q n p ) which makes Φ a complete space. In view of (2.10), and Ψ ′ (Q n p ) denote the topological dual of the spaces Φ(Q n p ) and Ψ(Q n p ), respectively. We call Φ ′ (Q n p ) the space of p-adic Lizorkin distributions. 2.3. Pseudo-differential operators in the Lizorkin space. Consider the following class of pseudo-differential operators A in the Lizorkin space of the test functions Φ(Q n p ) defined by is invariant under the pseudodifferential operators (2.15). Moreover, A(Φ(Q n p )) = Φ(Q n p ). Given pseudo-differential A with a symbols A, define the conjugate operator A T by . If A, B are pseudo-differential operators with symbols A, B ∈ E(Q n p \ {0}) respectively, then the operator AB is well defined and represented by the formula , then operator we the pseudo-differential operator . is, evidently, the inverse operator to A. The Riesz kernel has a removable singularity at α = 0 and according to [ According to [3], [4], one can similarly introduce a Lizorkin distribution κ n (·) by Due to (2.21), (2.22), we can define the operator (2.19) in the Lizorkin space of test functions φ ∈ Φ(Q n p ) by [3]). Consequently, the family of operators D α , α ∈ C, in the Lizorkin space forms an Abelian group: 3. Multiresolution analysis (one-dimensional case) 3.1. p-Adic multiresolution analysis. Consider the set This set can be identified with the factor group Q p /Z p . It is well known that we have a "natural" decomposition of Q p to a union of mutually disjoint discs: So, I p is a "natural" group of shifts for Q p , which will be used in the sequel. (e) there exists a function φ ∈ V 0 such that the system {φ(· − a), a ∈ I p } is an orthonormal basis for V 0 . The function φ from axiom (e) is called refinable or scaling. It follows immediately from axioms (d) and (e) that the functions p j/2 φ(p −j · −a), a ∈ I p , form an orthonormal basis for V j , j ∈ Z. 
According to the standard scheme (see, e.g., [30, §1.3]) for construction of MRA-based wavelets, for each j, we define a space W j (wavelet space) as the orthogonal complement of V j in V j+1 , i.e., Taking into account axioms (b) and (c), we obtain If now we find a function ψ ∈ W 0 such that the system {ψ(x − a), a ∈ I p } is an orthonormal basis for W 0 , then, due to (3.3) and (3.4), the system {p j/2 ψ(p −j · −a), a ∈ I p , j ∈ Z}, is an orthonormal basis for L 2 (Q p ). Such a function ψ is called a wavelet function and the basis is a wavelet basis. 3.2. p-Adic refinement equation. Let φ be a refinable function for a MRA. As was mentioned above, the system {p 1/2 φ(p −1 · −a), a ∈ I p }, is a basis for V 1 . It follows from axoim (a) that We see that the function φ is a solution of a special kind of functional equation. Such equations are called refinement equations. Investigation of refinement equations and their solutions is the most difficult part of the wavelet theory in real analysis. A natural way for construction of a MRA (see, e.g., [30, §1.2]) is the following. We start with an appropriate function φ whose integer shifts form an orthonormal system and set V j = span φ p −j · −a : a ∈ I p , j ∈ Z. It is clear that axioms (d) and (e) of Definition 3.1 are fulfilled. Of course, not any such a function φ provides axiom (a). In the real setting, the relation V 0 ⊂ V 1 holds if and only if the refinable function satisfies a refinement equation. Situation is different in p-adics. Generally speaking, a refinement equation (3.5) does not imply the including property V 0 ⊂ V 1 . Indeed, we need all the functions φ(· − b), b ∈ I p , to belong to the space V 1 , i.e., the identities φ(x − b) = a∈Ip α a,b φ(p −1 x − a) should be fulfilled for all b ∈ I p . Since p −1 b + a is not in I p in general, we can not state that Nevertheless, some refinable equations imply including imply property, which may happen because of different causes. The refinement equation reflects some "self-similarity". The structure of the space Q p has a natural "self-similarity" property which is given by formulas (2.6), (2.7). By (2.7), the characteristic function Ω |x| p of the unit disc B 0 is represented as a sum of p characteristic functions of the mutually disjoint discs B −1 (r), r = 0, 1, . . . , p − 1, i.e., (3.6) Thus, in p-adics, we have a natural refinement equation (3.5): and its solution, the refinable function φ(x) = Ω |x| 2 , we construct a 2-adic multiresolution analysis. Set It is clear that axioms (d) and (e) of Definition 3.1 are fulfilled and the system {2 j/2 φ(2 −j · −a), a ∈ I p } is an orthonormal basis for V j , j ∈ Z. Since the numbers 2 −1 b, 2 −1 b + 2 −1 are in I 2 for all b ∈ I 2 , it follows from the refinement equation (3.8) that V 0 ⊂ V 1 . By the definition (3.9) of the spaces V j , this yields axiom (a). Due to the refinement equation (3.8), we obtain that V j ⊂ V j+1 , i.e., the axiom (a) from Definition 3.1 holds. Note that the characteristic function of the unit disc Ω |x| 2 has a wonderful feature: Ω(| · +ξ| 2 ) = Ω(| · | 2 ), for all ξ ∈ Z 2 because the p-adic norm is non-Archimedean. In particular, Ω(| · ±1| 2 ) = Ω(| · | 2 ), i.e., (3.10) φ Thus φ is a 1-periodic function. Proof. According to (2.8), any function ϕ ∈ D(Q 2 ) belongs to one of the spaces D l N (Q 2 ), and consequently, is represented in the form Since any number 2 l c ν can be represented in the form 2 l c ν = a ν + b ν , a ν ∈ I 2 , b ν ∈ Z 2 , using (3.10), we have i.e., ϕ(x) ∈ V −l . 
Thus any test function ϕ belongs to one of the space V j , where j = j(ϕ), j ∈ Z. Since the space D(Q 2 ) is dense in L 2 (Q 2 ) [35, VI.2], approximating any function from L 2 (Q 2 ) by test functions ϕ ∈ D(Q 2 ), we prove our assertion. Proof. Assume that ∩ j∈Z V j = {0}. Then there exists a function f ∈ D(Q 2 ) such that f = 0 and f ∈ V j for all j ∈ Z. Hence, due to (3.9), we have f (x) = a∈I 2 c ja φ 2 −j x − a for all j ∈ Z. According to the above scheme, we introduce the space W 0 as the orthogonal complement of V 0 in V 1 . Set Proposition 3.3. The shift system {ψ (0) (· − a), a ∈ I 2 }, is an orthonormal basis of the space W 0 . Thus according to Propositions 3.1, 3.2, 3.3, the collection {V j : j ∈ Z} is a MRA in L 2 (Q 2 ) and the function ψ (0) defined by (3.12) is a wavelet function. This MRA is a 2-adic analog of the real Haar MRA and the wavelet basis generated by ψ (0) is an analog of the real Haar basis. But in contrast to the real setting, the refinable function φ generating our Haar MRA is periodic with the period 1 (see (3.10)), which never holds for real refinable functions. It will be shown bellow that due this specific property of φ, there exist infinity many different orthonormal wavelet bases in the same Haar MRA (see Sec. 4). Due to (2.3), (2.7), the function ψ (0) can be rewritten in the form Thus the Haar wavelet basis is Since a locally-constant function ψ ja (x) belongs to the Lyzorkin space Φ(Q 2 ). Remark 3.1. The Haar wavelet basis (3.14) coincides with Kozyrev's wavelet basis (1.2) for the case p = 2. In present paper we restrict ourself by constructing the Haar wavelets only for p = 2. Since Haar refinement equation (3.7) was presented for all p, a similar construction may be easily realized in the general case. Moreover, it is not difficult to see that Kozytev's wavelet function θ j (x) from (1.2) can be expressed in terms of the refinable function φ(x) as where h r = p 1/2 e 2πi{ kr p }p , r = 0, 1, . . . , p − 1, k = 1, 2, . . . , p − 1. Since the vectors A r u 0 , r = 0, 1, . . . , 2 s − 1 form a basis in the 2 s -dimensional space, we conclude that AB = BA. Description of multidimensional 2-adic Haar bases 5.1. p-Adic separable multidimensional MRA. Here we describe multidimensional wavelet bases constructed by means of a tensor product of onedimensional MRAs. This standard approach for construction of multivariate wavelets was suggested by Y. Meyer [29] (see, e.g., [30, §2.1]). Let {V (ν) j } j∈Z , ν = 1, . . . , n, be one-dimensional MRAs (see Subsec. 3.1). We introduce subspaces V j , j ∈ Z, of L 2 (Q n p ) by Since the system {φ (ν) (·−a)} aν ∈Ip is an orthonormal basis for V (ν) 0 (axiom (e) of Definition 3.1) for any ν = 1, . . . , n, it is clear that V 0 = span{Φ(· − a) : a = (a 1 , . . . , a n ) ∈ I n p }, where I n p = I p × · · · × I p is the direct product of n sets I p , and the system Φ(· − a), a ∈ I n p , is an orthonormal basis for V 0 . It follows from Definition (5.1) and axiom (d) of Definition 3.1 that f ∈ V 0 if and only if f (2 −j ·) ∈ V j for all j ∈ Z. Since axiom (a) from Definition 3.1 holds for any one-dimensional MRA {V (ν) j } j , it is easy to see that Φ(2 −j · −a) ∈ V j+1 for any a ∈ I n p . Thus, V j ⊂ V j+1 . It is not difficult to check that the axioms of completeness and separability for the spaces V j hold. Thus we have the following statement. j } j∈Z , ν = 1, . . . , n, be KMAs in L 2 (Q p ). 
Then the subspaces V j of L 2 (Q n p ) defined by (5.1) satisfy the following properties: Similarly to Definition 3.1, the collection of spaces V j , j ∈ Z, which satisfies conditions (a)-(e) of Theorem 5.1 is called a multiresolution analysis in L 2 (Q n p ), the function Φ from axiom (e) is called refinable.
2007-05-16T08:08:26.000Z
2007-05-16T00:00:00.000
{ "year": 2008, "sha1": "3b1f5a79fb778c2eadee8a03ef586d4980f4a71d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0705.2294", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "08c9e8a395285afeeef7faa0513aebf9edcd3084", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
209894083
pes2o/s2orc
v3-fos-license
Hepatic Fat Content Is Associated with Fasting-Induced Fibroblast Growth Factor 21 Secretion in Mice Fed Soy Proteins. Previous studies suggest that circulating fibroblast growth factor 21 (FGF21) levels are elevated in patients with fatty liver, while fasting-induced secretion of FGF21 is lower in obese patients. It has been reported that soy protein prevents hepatic fat accumulation and induces FGF21 secretion. The present study was designed to evaluate the response of circulating FGF21 levels to feeding and fasting in mice fed soy protein-rich diets. For this, C57BL/6J mice were distributed into control, high-fat high-sucrose (HFHS)-casein protein, HFHS-soy protein, and HFHS-β-conglycinin diet groups. Plasma samples were collected after 10 and 11 wk either in dark periods with feeding conditions or light periods under fasting conditions using a crossover design. After a 12-wk period of feeding, HFHS-induced hepatic fat accumulation was significantly reduced in the groups fed HFHS-soy protein and HFHS-β-conglycinin as compared to that in the HFHS-casein-fed group (p<0.05). Plasma FGF21 concentration was significantly higher in the dark/feeding periods in the HFHS-casein group (p<0.05), while in the HFHS-β-conglycinin group it was higher in the light/fasting periods (p<0.05). The amount of mesenteric fat was significantly lower in the HFHS-β-conglycinin group than in the HFHS-casein and HFHS-soy protein groups (p<0.01). The fasting-induced FGF21 secretion was significantly and negatively correlated with hepatic fat content (p<0.05). The present study revealed that hepatic fat accumulation was associated with lower fasting-induced FGF21 secretion, which was regulated better by dietary intake of soy protein. These results support the preventive effects of soy protein on central obesity. Hepatokine fibroblast growth factor 21 (FGF21) is a metabolic regulator of adipose tissue browning (1) and glycolipid metabolism (2). Therefore, it seems likely that elevated levels of circulating FGF21 are associated with leanness, but in contrast, previous studies have shown that obese patients have a higher circulating FGF21 concentration (3). The increased FGF21 levels are thought to represent an FGF21-resistant state (4), and the chronically high levels of circulating FGF21 are associated with the onset of type 2 diabetes (5, 6), metabolic syndrome (6,7), and cardiovascular diseases (8). A fasting state is a stimulator of FGF21 in both rodents and humans (9,10), and circulating FGF21 has a circadian rhythm (11,12), which corresponds to a peak FGF21 level from midnight to early morning, and FGF21 concentration steadily declines after wak-ing and eating (12). Moreover, obese subjects showed chronically elevated circulating FGF21 and an attenuated FGF21 rhythm (12) that may be associated with reduced response of FGF21 secretion to fasting. On the other hand, injection of FGF21 itself or of a FGF21 variant reduces obesity and metabolic dysfunction (13)(14)(15). These results suggest that, unlike under conditions of chronically elevated levels of FGF21, a temporary increase in the FGF21 concentration has beneficial effects on thermogenesis and glycolipid metabolism. It has been reported that hepatic fat accumulation is related to high levels of circulating FGF21 (16,17), and serum FGF21 concentration is decreased by a reduction in hepatic fat content (18). 
It is therefore likely that hepatic fat content is a major determinant of circulating FGF21 levels, but the possible associations between hepatic fat content and fasting-induced FGF21 secretion have not been evaluated. A previous study revealed that consumption of β-conglycinin, a soy protein, helps to reduce hepatic lipid content (19,20), and that a single ingestion of β-conglycinin temporarily increases circulating FGF21 levels, which exert antiobesity effects in mice (21). It is also well documented that soy protein itself has an inhibitory effect on hepatic lipid levels (22), and thus soy protein, especially β-conglycinin, may play a role in the regulation of FGF21 secretion by preventing hepatic fat accumulation and/or via independent effects. We hypothesized that hepatic fat content is associated with the response of circulating FGF21 levels to feeding and fasting conditions, which is expected to be modulated by dietary soy protein intake. This study was aimed at evaluating the relationship between chronically high levels of FGF21, fasting-induced FGF21 secretion, and hepatic fat accumulation in mice fed soy protein-rich diets.

MATERIALS AND METHODS

Mice and diets. The study protocol was approved by the ethics committee of Ryukoku University (No. 2017-6). Four-week-old male C57BL/6J Jms Slc mice were purchased from Shimizu Laboratory Supplies (Kyoto, Japan). The mice were housed at controlled temperature (21-25°C) and humidity (45-65%) on a 12 h light-dark cycle, with lights on at 00:00 h. After a 1-wk acclimation period, the mice were distributed into four groups (n = 8 per group): control, high-fat high-sucrose (HFHS)-casein protein (HFHS-casein), HFHS-soy protein (HFHS-soy), and HFHS-β-conglycinin diet groups. Casein (Lactic Casein 720; Fonterra, New Zealand), soy protein (Supro 661; Solae, USA), and β-conglycinin (Lipoff; Fuji Oil, Japan) were provided as dietary protein sources. It has been reported that fructose ingestion stimulates circulating FGF21 levels (23), suggesting that habitual fructose consumption might be associated with chronically high levels of circulating FGF21. Therefore, the experimental diets included higher sucrose amounts (40% of calories from sucrose) to examine the relation between elevated FGF21 levels and fasting-induced FGF21 secretion. The control group was fed only a low-fat diet (70% of calories from carbohydrates, 10% from fat, and 20% from protein). To examine the preventive effects of soy proteins on hepatic fat accumulation, the other three groups were fed HFHS diets (40% of calories from sucrose, 40% from fat, and 20% from protein). Pellet diets were purchased from Research Diets (NJ, USA), and the dietary composition is shown in Table 1. All the mice had free access to food and tap water during the 12 wk.

Blood sample collection under feeding and fasting conditions. Blood was sampled after 10 and 11 wk from lateral tail veins via heparinized glass capillaries. The sampling was performed two times, at the end of the dark and light periods, using a crossover design. Samples corresponding to the dark/feeding conditions were collected between 21:00 and 23:00 h, which is at the end of the dark period, and mice were fed ad libitum with each diet. On the other hand, samples during the light period were collected after 15 h of fasting, which corresponds to the sedentary and overnight fasting conditions observed in a human study (12).
Each type of feed was removed at 18:00 h, and blood was sampled between 9:00 and 11:00 h the next day, which is at the end of the light period. Thus, half the mice in each group were alternately subjected to fasting in either the 10th or 11th week. The glass capillaries were centrifuged at 12,000 rpm for 5 min (KUBOTA 3100; Kubota, Japan), and the plasma was frozen at −80°C until analysis.

Metabolic measurements. Mice were placed in metabolic cages, and their expired gas was analyzed on a CO2 and O2 mass spectrometric analyzer (ARCO-2000; ARCO System, Japan). The gas analysis system has been described in detail previously (24-27). Energy expenditure was calculated from the volume of O2 consumed and the volume of CO2 expired, and was expressed as the average and total for each 12 h dark or light cycle per body weight. The respiratory exchange ratio was calculated from the expired volumes of CO2 and O2 using the formula: expired volume of CO2/O2.

Tissue sampling and histology. After the 12-wk feeding period, tissue samples were collected from 10:00 to 15:00 h, and the time of sampling was matched among the groups. All the mice were subjected to food deprivation for 1-2 h and then anesthetized with sodium pentobarbital (20 mg/(kg body weight)). Thus, tissue sampling was performed without accounting for the acute effects of food intake. Whole blood was collected from the inferior vena cava. Serum was prepared by centrifugation at 5,000 rpm for 10 min at 4°C (Tomy MX-307; Tomy, Japan) and frozen at −80°C until analysis. The liver, epididymal fat, retroperitoneal fat, mesenteric fat, and gastrocnemius muscle were excised and weighed. A part of the liver was stored in RNAlater solution (Qiagen, Japan), and mesenteric fat tissue was immediately frozen in liquid nitrogen and stored at −80°C until experiments. Another part of the liver was fixed in 10% formaldehyde (Mildform 10 N; FUJIFILM Wako Pure Chemical Corporation, Japan) and embedded in paraffin by standard procedures. The embedded tissues were sliced and stained with hematoxylin and eosin (HE). Stained sections were examined under a standard microscope (Olympus CX41; Olympus, Japan), and pictures were taken with an Olympus DP22 digital camera (Olympus) at 20× magnification.
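The Metabolic measurements paragraph above states only that energy expenditure was derived from O2 consumption and CO2 production. The sketch below illustrates one common way to do this, the abbreviated Weir equation, which is an assumption on our part rather than the formula the authors used, together with the RER calculation that is described.

def energy_expenditure_kcal(vo2_litres: float, vco2_litres: float) -> float:
    # Abbreviated Weir equation (assumed; not stated in the paper):
    # EE [kcal] = 3.941 * VO2 [L] + 1.106 * VCO2 [L]
    return 3.941 * vo2_litres + 1.106 * vco2_litres

def respiratory_exchange_ratio(vo2_litres: float, vco2_litres: float) -> float:
    # RER = expired volume of CO2 / volume of O2, as in the Methods
    return vco2_litres / vo2_litres

# Hypothetical 12 h totals for one mouse (litres of gas; illustrative values only)
vo2, vco2 = 1.8, 1.5
print(f"EE over 12 h: {energy_expenditure_kcal(vo2, vco2):.2f} kcal")
print(f"RER: {respiratory_exchange_ratio(vo2, vco2):.2f}")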
The TG concentration was measured by enzymatic methods (Triglyceride E-Test Wako kits; FUJIFILM Wako).

Real-time PCR. Total RNA was extracted from the liver using the NucleoSpin RNA kit (Macherey-Nagel, Germany) and from mesenteric fat with the RNeasy Lipid Tissue Mini Kit (QIAGEN). RNA concentration and purity were determined on a Nanodrop (Thermo Fisher Scientific, Japan). The isolated RNA samples, which had an A260/A280 ratio over 2.0, were reverse-transcribed by means of the PrimeScript RT Reagent Kit with gDNA Eraser (Takara Bio Inc., Japan). Thermal cycling was performed on a T100 Thermal Cycler (Bio-Rad, Japan). The synthesized cDNA was subjected to real-time PCR with SYBR Premix Ex Taq (Takara Bio Inc.) on an ABI 7300 Real-Time PCR System (Applied Biosystems, Japan). Primer sequences were designed to amplify genes encoding FGF21, FGF21 receptors, FGF21 pathways, lipid synthase, uncoupling protein 1 (UCP1), and adipocytokines (Table S1, Supplemental Online Material). Relative mRNA expression was quantified by the ΔΔCt method, and the results were normalized to the expression of cyclophilin. The results were expressed as a fold change relative to the control group.

Statistics. All statistical analyses were performed in the SPSS software, version 23.0 (SPSS Japan, Japan). The Kolmogorov-Smirnov test was performed to assess the normality of data distribution, and non-normally distributed data were log-transformed prior to analysis. Differences among the groups were assessed by one-way ANOVA with post hoc Bonferroni's test. The paired t test was conducted to evaluate the differences in plasma FFA, TG, and FGF21 levels between dark/feeding and light/fasting conditions in each group. To examine the effects of hepatic fat accumulation on FGF21 secretion, associations between liver TG content and plasma FGF21 levels were evaluated using Spearman's correlation coefficient. All measurements and calculated values are presented as the mean ± standard error (SE), and the level of statistical significance was set to p<0.05.

Table 2 presents body and tissue weights after 12 wk on each diet. The HFHS diet groups (casein, soy protein, and β-conglycinin) had significantly greater body weights and epididymal and retroperitoneal fat weights as compared with the control group (p<0.001). Liver and mesenteric fat weights were significantly higher in the HFHS-casein and HFHS-soy protein groups than in the control group (p<0.01), but there was no significant difference in the liver and mesenteric fat weights between the control and HFHS-β-conglycinin groups. Among the HFHS diet groups, the HFHS-β-conglycinin group had significantly lower body weights as compared with the HFHS-casein and soy groups (p<0.001), and the HFHS-β-conglycinin group had significantly lower liver weights in comparison with the HFHS-casein group (p<0.001). There was no significant difference in epididymal fat weights among the HFHS diet groups, whereas retroperitoneal and mesenteric fat weights were significantly lower in the HFHS-β-conglycinin group than in the HFHS-casein and soy protein groups (p<0.05). Although there was no statistically significant difference in retroperitoneal fat per body weight among the HFHS diet groups, mesenteric fat per body weight was still significantly lower in the HFHS-β-conglycinin group than in the HFHS-casein and HFHS-soy protein groups (p<0.01). Weights of the gastrocnemius muscle did not differ among the groups.
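The relative expression values reported in the following sections use the 2^-ΔΔCt quantification described in the Real-time PCR methods above (target gene normalized to cyclophilin and expressed as fold change versus the control group). A minimal sketch of that calculation, with hypothetical Ct values, is given here.

import numpy as np

def fold_change_ddct(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # 2^-ΔΔCt: ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(sample) - mean ΔCt(control group)
    dct = np.asarray(ct_target) - np.asarray(ct_ref)
    dct_ctrl = np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl)
    return 2.0 ** (-(dct - dct_ctrl.mean()))

# Hypothetical Ct values for hepatic Fgf21 (target) and cyclophilin (reference)
print(fold_change_ddct(
    ct_target=[26.1, 25.8, 26.4], ct_ref=[19.0, 18.9, 19.2],           # e.g. HFHS-casein mice
    ct_target_ctrl=[27.0, 27.3, 26.9], ct_ref_ctrl=[19.1, 19.0, 19.2],  # control mice
))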
Effects of soy protein-rich diets on body composition and serum biochemical parameters in mice

Serum Total-C levels and AST and ALT activities were significantly higher in the HFHS-casein group than in the other three groups (p<0.05). There were no significant differences in serum Total-C, AST, and ALT results among the control, HFHS-soy protein, and HFHS-β-conglycinin groups.

The influence of soy protein-rich diets on energy expenditure and respiratory exchange ratio in mice

Average and total values of energy expenditure during the light and dark periods were measured during ad libitum feeding (Fig. 1). Energy expenditure per body weight in both periods was significantly higher in the control group than in the other groups (p<0.01). Among the HFHS-protein fed groups, average energy expenditure in the HFHS-β-conglycinin group was significantly higher during the dark period (p<0.05) and tended to be higher during the light period (p = 0.08) as compared with the HFHS-casein and HFHS-soy protein groups. Total 12 h energy expenditure was significantly higher in the HFHS-β-conglycinin group during both the dark and light periods (p<0.05). There was no significant difference in energy expenditure per body weight between the HFHS-casein and soy groups. The respiratory exchange ratio was significantly higher in the control group than in the other groups (p<0.01), and was not significantly different among the mice fed HFHS-casein, soy, and β-conglycinin diets. Average and total energy expenditures for each 12 h light or dark cycle per body weight are presented for each group (n = 8 for each group). Different letters indicate significant differences (p<0.05; one-way ANOVA followed by Bonferroni's test), and equal letters indicate no statistically significant results. Data are expressed as mean ± SE.

Effects of soy protein-rich diets on liver histology and TG content in mice

As depicted in Fig. 2, HE staining uncovered histological changes and lipid accumulation in the liver of mice fed HFHS diets as compared with the control group. The largest lipid droplets were observed in the HFHS-casein group, and smaller lipid droplets were found in the HFHS-β-conglycinin group than in the HFHS-soy protein group. Histological results were in agreement with liver TG content, and the mice fed the control diet had significantly lower liver TG content than did the other groups (Fig. 2; p<0.05). The liver TG content in the HFHS-soy and HFHS-β-conglycinin groups was significantly lower than that in the HFHS-casein group (p<0.05). Average values of the liver TG content were higher in the HFHS-soy group than in the HFHS-β-conglycinin group, but the difference was not significant.

The influence of soy protein-rich diets on plasma FFA, TG, and FGF21 levels in the dark/feeding and light/fasting periods

Plasma FFA levels in the dark/feeding periods were significantly higher in the HFHS-β-conglycinin group than in the control and HFHS-soy groups (Fig. 3; p<0.05). The HFHS-casein group in the dark/feeding periods had significantly higher levels of plasma FFA as compared with the HFHS-soy protein group (p<0.05). In the light/fasting periods, significantly higher plasma FFA levels were detected in the control group compared with the HFHS diet groups (p<0.01).
Fasting-induced increases in plasma FFA levels were found only in the control group (paired t test, p<0.05), and the HFHS-β-conglycinin group turned out to have significantly higher plasma FFA levels under the dark/feeding conditions than under the light/fasting conditions (paired t test, p<0.05). Approximately twofold higher levels of plasma TG were detected in the dark/feeding periods relative to the fasting conditions in all the groups (Fig. 3; paired t test, p<0.001). Plasma TG levels were significantly higher in the HFHS-β-conglycinin group than in the other groups in the dark/feeding periods (p<0.05). In the light periods under fasting conditions, only the HFHS-casein group had a significantly higher plasma TG concentration than did the control group (p<0.01). Plasma FGF21 concentrations were significantly higher in the HFHS-casein group than in the other groups in the dark/feeding periods (Fig. 3; p<0.05), whereas the HFHS-β-conglycinin group manifested significantly higher plasma FGF21 levels in the light/fasting periods as compared with the other groups (p<0.05). Results of the paired t test revealed significantly higher levels of plasma FGF21 in the light periods under fasting conditions than in the dark periods under feeding conditions in the control, HFHS-soy, and HFHS-β-conglycinin groups (p<0.05). On the contrary, only the HFHS-casein group was found to have significantly lower FGF21 concentrations in the light/fasting periods compared with the dark/feeding periods (paired t test, p<0.05). Correlation analysis showed that liver TG content tended to be positively correlated with plasma FGF21 levels in the dark/feeding periods (Rho = 0.349, p = 0.059) and was significantly and negatively correlated with plasma FGF21 levels in the light/fasting periods (Rho = −0.399, p<0.05).

Effects of soy protein-rich diets on the expression of FGF21 and related genes in liver and mesenteric fat tissues

All mice were dissected between the end of the dark period and the beginning of the light period. Expression of hepatic FGF21 was approximately 1.5-fold higher in the HFHS-casein group than in the control and HFHS-soy groups, but this difference did not reach statistical significance (Fig. 4). There was no significant difference in the expression of the FGF21 receptors either, including the fibroblast growth factor receptor (FGFR) 1c, FGFR4, and β-klotho genes, in the liver among all the groups. The relative expression levels of activating transcription factor 4 (ATF4), which is reported to act upstream of FGF21 (21), were not different among the groups. Another upstream gene, peroxisome proliferator-activated receptor (PPAR) α, was expressed significantly more strongly in the liver of the HFHS-soy group than in the control and HFHS-β-conglycinin groups (p<0.05), and PPARα expression was significantly higher in the HFHS-casein group than in the control group (p<0.01). Significantly higher expression of PPAR target genes, including carnitine palmitoyltransferase 1 (CPT1) and acyl-CoA oxidase (ACO), was observed in the HFHS-casein and HFHS-soy groups compared to that in the control and HFHS-β-conglycinin groups (p<0.01). Expression levels of carbohydrate-responsive element-binding protein (ChREBP) and sterol-regulatory element-binding protein 1c (SREBP1c), which stimulate lipogenic gene expression in the liver, were significantly higher in the HFHS-casein group than in the control and HFHS-β-conglycinin groups (p<0.05).
In addition, the HFHS-casein group had a significantly higher expression level of fatty acid synthase (FAS) compared with the HFHS-β-conglycinin group (p<0.01). As illustrated in Fig. 4, mesenteric fat in the control group showed lower FGF21 expression without statistical significance, whereas FGFR1c expression was significantly lower in the control group than in the HFHS diet groups (p<0.05). There were no significant differences in the expression of FGF21 pathway-related genes (such as β-klotho and PPARγ2), ChREBP, and SREBP1c in mesenteric fat among all the groups. Although mRNA expression of UCP1 in mesenteric fat was higher in the HFHS-β-conglycinin group than in the HFHS-casein and HFHS-soy protein groups, this difference did not reach statistical significance. Expression of adiponectin in mesenteric fat tended to be higher in the HFHS diet groups than in the control group (p<0.10). In addition, leptin expression was significantly lower in the control group and significantly higher in the HFHS-casein group than in the other groups (p<0.01).

DISCUSSION

Obese individuals manifest endocrine resistance, which is characterized by an abnormally high concentration of a circulating hormone, and endocrine resistance is an independent risk factor of some diseases, as is the case for insulin resistance (29). Chronically elevated levels of circulating FGF21 are a predictive factor of metabolic diseases (5,6,8), and FGF21 resistance is observed in obese individuals (3), who have an attenuated fasting-induced increase in circulating FGF21 levels (12). In the present study, mice with diet-induced fatty liver showed increased and reduced plasma FGF21 levels during the dark/feeding and light/fasting periods, respectively, whereas this abnormal FGF21 secretion was prevented in non-fatty-liver mice fed soy protein-rich diets. Because it has been reported that nonalcoholic fatty liver disease is a risk factor for type 2 diabetes mellitus (30) and cardiovascular disease (31), abnormal FGF21 secretion in patients with fatty liver may be associated with the onset of metabolic diseases. On the other hand, injection of FGF21 itself or of a variant of FGF21 improves glucose homeostasis and alleviates dyslipidemia regardless of obesity in mice and humans (13-15), suggesting that the beneficial effects of FGF21 are preserved even in an obesity-related FGF21-resistant state. In the present study, an apparent fasting-induced FGF21 secretion and a lower visceral fat mass were observed in mice fed a β-conglycinin-rich diet. These results indicate the necessity to distinguish between the effects of chronically high levels of circulating FGF21 and those of temporary increases. It is possible that the decreased dark/feeding and increased light/fasting levels of circulating FGF21 denote FGF21 sensitivity. Thus, further research is needed to confirm the relation between the reactivity of FGF21 secretion and FGF21 function, including adipose tissue thermogenesis and glucolipid metabolism. In this study, hepatic fat accumulation was prevented by both soy protein and β-conglycinin consumption in mice fed HFHS diets. The plasma FGF21 concentration in the dark periods was significantly higher in the HFHS-casein group, which had the highest hepatic TG content. Hepatic FGF21 expression tended to be higher in obese mice fed the HFHS-casein diet, and this result is in line with the elevated FGF21 concentrations in obese mice and human subjects (3,4).
The HFHS-soy protein group turned out to have lower plasma FGF21 levels under the dark/feeding conditions as compared with the HFHS-casein group. Correlation analysis hinted that liver TG contents may be associated with basal plasma FGF21 levels in the dark/feeding periods, though not at a statistically significant level. Our data support previous reports (16-18) suggesting that hepatic fat content is associated with high plasma FGF21 levels under the dark/feeding conditions. PPARs are ligand-activated transcription factors, and FGF21 is induced by activation of PPARα in the liver and of PPARγ in adipose tissue (32). Besides, fasting-induced increases in FFA levels facilitate expression of the hepatic PPAR-FGF21 pathway (33). In the present study, the control group showed higher plasma FFA and FGF21 concentrations in the light periods under the fasting conditions than in the dark periods under the feeding conditions, whereas the fasting response of FFA was not in agreement with that of the HFHS diet groups. Circulating FGF21 levels were increased by fasting for 15 h in the control, HFHS-soy, and HFHS-β-conglycinin groups as described elsewhere (34), whereas only the HFHS-casein group yielded results contradicting the fasting-induced increase in circulating FGF21. Our correlation analysis revealed that liver TG contents were associated with fasting-induced FGF21 secretion, suggesting that hepatic fat content is associated with FFA-independent effects on FGF21 secretion in mice fed HFHS diets. These findings support the claim that soy protein consumption prevents the abnormal secretion of FGF21 that is induced by the HFHS diet. In addition, chronically high levels of FGF21 are likely to cause compensatory dysfunction of hepatic FGF21 secretion, similar to pancreatic β-cell failure in insulin-resistant hyperglycemia (29). High plasma FGF21 levels in the dark/feeding periods may similarly lead to the lower fasting-induced FGF21 secretion seen in the present study. The highest average concentration of plasma FGF21 was observed in mice fed the HFHS-β-conglycinin diet in the light periods under fasting conditions. A previous study reported that hepatic FGF21 expression is increased via the ATF4-FGF21 axis in mice fed β-conglycinin diets (21). Although hepatic ATF4 expression was measured in the present study, the real-time PCR analysis was not performed under the fasting condition. It is possible that ATF4 regulates the marked increase in fasting FGF21 levels. Further experiments are needed to investigate the association between the ATF4-FGF21 axis and the response of FGF21 expression to both feeding and fasting conditions. In addition, the mice fed the HFHS-β-conglycinin diet showed lower mesenteric fat weight, which is representative of visceral fat mass. The preventive effect of β-conglycinin on visceral fat accumulation is consistent with another interventional study (35), and our cross-sectional study has revealed an association between higher serum FGF21 levels and a smaller visceral fat area (36). Here, epididymal fat weight did not differ among the HFHS diet groups, and thus the effect of dietary β-conglycinin may be specific to visceral adipose tissue. These results suggest that FGF21 modulates body fat distribution. Because the mice fed the HFHS-β-conglycinin diet showed chronically high levels of FGF21 and an apparent fasting-induced FGF21 secretion, the sensitivity and reactivity of FGF21 may exert preventive action on visceral fat accumulation.
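The comparisons discussed above rest on two standard tests: paired t tests between feeding and fasting values measured in the same animals, and Spearman rank correlations between liver TG content and plasma FGF21. The following is a minimal sketch of how such analyses can be run; the numbers are hypothetical placeholder values, not data from this study.

```python
# Minimal sketch (not the authors' code): the paired t tests and Spearman
# correlations described above, applied to hypothetical per-mouse values.
import numpy as np
from scipy import stats

# Hypothetical plasma FGF21 levels (pg/mL) for the same mice measured in the
# dark/feeding and light/fasting periods.
fgf21_feeding = np.array([310., 280., 355., 400., 290., 330.])
fgf21_fasting = np.array([420., 390., 510., 460., 405., 515.])

# Paired t test: fasting-induced change within the same animals.
t_stat, p_paired = stats.ttest_rel(fgf21_fasting, fgf21_feeding)
print(f"paired t = {t_stat:.2f}, p = {p_paired:.3f}")

# Spearman rank correlation between liver TG content (mg/g liver) and
# plasma FGF21, as in the correlation analysis reported above.
liver_tg = np.array([35., 22., 60., 75., 28., 40.])
rho, p_rho = stats.spearmanr(liver_tg, fgf21_feeding)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.3f}")
```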
Blood flows into the liver via the portal vein, which means that visceral fat is the most distant tissue to respond to liver-derived FGF21. It is therefore possible that FGF21 actions are influenced by the location of the adipose tissues. This study uncovered an approximately twofold higher plasma FGF21 concentration in the dark periods under feeding conditions as compared with other studies (4,21). The experimental diets included fructose, which stimulates FGF21 secretion (23), and thus habitual fructose consumption might be associated with chronically high levels of circulating FGF21. Furthermore, a recent study indicates that ChREBP has an important role in FGF21 secretion after fructose intake (37). This finding is in agreement with our data showing that mice fed the HFHS-casein diet had higher expression of ChREBP and higher plasma FGF21 levels during the dark periods. It is considered that the chronic expression of ChREBP is related to an increase in basal secretion of FGF21, and this may be the reason for the lower fasting plasma FGF21 levels in the HFHS-casein group, but not in the other groups. On the other hand, the lean mice fed the control diet showed FGF21 levels in the dark periods that were similar to those of the mice fed the HFHS diets rich in soy proteins, regardless of different liver TG content. Although further study is needed to explore the precise mechanism, there was no significant difference in the expression levels of hepatic ChREBP among the control, HFHS-soy protein, and HFHS-β-conglycinin fed groups. The results suggest the possibility that fructose-induced ChREBP expression is associated with circulating FGF21 levels in the dark/feeding condition in the present study. Furthermore, one study has revealed that FGF21-deficient mice fed a control diet undergo a mild weight gain but manifest a marked body weight gain and body fat accumulation when fed a ketogenic (high-fat) diet (38). These observations suggest that a lack of FGF21 is related to an impaired ability to metabolize lipids on a high-fat diet; therefore, mice fed a control low-fat diet had a lean phenotype in this study despite the high levels of plasma FGF21. This study evaluated gene expression in the liver and visceral adipose tissue, which underwent smaller accumulation in the mice fed the HFHS-β-conglycinin diet. FGFR1 is the major receptor for FGF21 (39), and FGF21 signaling requires both FGFR and β-klotho, which form the FGF21-FGFR-β-klotho complex (40). In the present study, there were no significant differences in FGFR4 and β-klotho expression levels among the groups. Hepatic FGFR1c was not significantly different among the groups, whereas FGFR1c was highly expressed in mesenteric fat tissue of the mice fed HFHS diets. Although the underlying mechanism is unclear, another study also indicates that FGFR1 expression is high in subcutaneous adipose tissues of obese humans and rats (41). These results suggest that FGFR1 is an obesity-induced gene in both subcutaneous and visceral adipose tissues, and this induction may be partly caused by overnutrition. PPARγ2 expression in mesenteric fat did not differ among the groups, whereas the hepatic PPARα gene was upregulated in the HFHS-casein and HFHS-soy protein groups but not in the HFHS-β-conglycinin group. CPT1 and ACO, which are PPAR target genes, showed expression patterns similar to PPARα in the liver, suggesting that the HFHS diet induced the PPAR pathway in the HFHS-casein and HFHS-soy protein groups.
It has been reported that PPARα is induced by isoflavone, which is a component of soy and participates in the reduction in hepatic fat contents (42). Therefore, hepatic fat accumulation may be inhibited by PPARα induction in the HFHS-soy protein group. Hepatic PPARα expression was significantly higher in the HFHS-casein group too, but there were significantly higher mRNA levels of FAS, ChREBP, and SREBP1c (which are lipogenesis genes) in the HFHS-casein group. These data suggest that these lipogenesis genes are more strongly associated with hepatic fat accumulation than PPARα expression. Moreover, the present study indicates that β-conglycinin intake suppresses hepatic FAS, ChREBP, and SREBP1c expression. This result suggests that dietary β-conglycinin prevents expression of these lipogenesis genes, thereby preventing hepatic fat accumulation, regardless of PPARα expression status. Some studies have revealed that FGF21 stimulates browning of adipose tissues, and this process upregulates UCP1, thereby increasing energy expenditure and reducing body fat content (43). The expression of thermogenic protein UCP1 was approximately 1.5-fold higher in the HFHS-β-conglycinin group, but this difference did not reach statistical significance. Some studies have suggested that UCP1 has a circadian expression pattern in brown adipose tissue (44), and therefore it is possible that the sampling times influenced the FGF21-induced changes in UCP1 expression in adipose tissue. In addition, our metabolic measurements revealed a higher energy expenditure in the HFHS-β-conglycinin group. It is well known that fat-free mass is a strong predictor of energy expenditure (45), and intraperitoneal fat was smaller in the mice fed the HFHS-β-conglycinin diet than in the other HFHS diet groups. This result suggests that fat-free mass per body weight may be higher in the HFHS-β-conglycinin group than in the other HFHS diet groups, and thus consideration needs to be applied to the factors of higher energy expenditure in mice fed the β-conglycinin diet. Adipose tissue is an endocrine organ that performs a critical function in metabolic homeostasis. It is believed that adiponectin has beneficial effects on glycolipid metabolism and is negatively associated with visceral fat mass (46); however, the present study showed that adiponectin expression was approximately twofold higher in the HFHS diet groups than in the control group. It has been reported that PPARγ has a role in transcriptional activation of adiponectin (47), and the expression levels of PPARγ2 and adiponectin manifested similar patterns in the present study. Therefore, it is likely that adiponectin expression was influenced by PPARγ, regardless of visceral fat mass. Leptin is another adipocytokine that regulates food intake and energy homeostasis, and obese individuals show leptin resistance (48). It has been reported that a high-fat high-fructose diet increases both leptin expression in adipose tissue and circulating leptin levels (49), and the present study revealed that leptin expression in mesenteric fat was elevated in the HFHS-casein diet group but not in the HFHS-soy protein and HFHS-β-conglycinin groups. This result suggests that dietary soy proteins suppress HFHS diet-induced leptin resistance.
Although further analysis is needed regarding the relation between leptin resistance and fatty liver (50), it has been demonstrated that serum leptin levels are elevated in patients with fatty liver, independently of the body-mass index and percentage of body fat (51), and leptin improves fatty liver by stimulation of β-oxidation (52). It was reported that leptin suppresses lipogenic enzyme genes by down-regulating the SREBP1c gene (53), suggesting that in the HFHS-β-conglycinin group the lower leptin resistance was associated with lower expression levels of FAS and ChREBP, which are regulated by SREBP1c. The leptin-regulated SREBP1c expression may be associated with plasma FGF21 levels, which are induced by ChREBP as seen in the HFHS-casein group. These observations point to the preventive effects of leptin on hepatic fat accumulation, and suppressed leptin expression may be associated with lower hepatic fat content and FGF21 secretion in the HFHS-soy protein and HFHS-β-conglycinin groups under feeding conditions. This study has several limitations, mainly due to the time points for the sampling of blood and tissues. Chronological observation makes it possible to precisely determine the effects of FGF21 resistance on body fat distribution; however, it is difficult to collect blood and tissue samples at all time points, and a large number of animals would be necessary. Noninvasive techniques, such as luciferase bioluminescence imaging, are necessary in future studies, at least for gene expression measurements. This study reveals that hepatic fat accumulation is associated with an abnormal FGF21 secretion, which is prevented by soy protein-rich diets. Mice fed HFHS-soy protein and HFHS-β-conglycinin diets showed lower hepatic fat content, which is associated with lower plasma FGF21 levels in the dark/feeding periods and higher fasting-induced plasma FGF21 levels. These results suggest that hepatic fat content is a determinant of chronically high levels of circulating FGF21 and of the attenuated FGF21 secretion under fasting conditions. In addition, mice fed the HFHS-β-conglycinin diet had a marked increase in plasma FGF21 levels under fasting conditions, as well as lower visceral fat mass. These results support the role of hepatic fat content in FGF21 metabolism and the beneficial effects of soy protein intake on body fat distribution.

Disclosure of state of COI

No conflicts of interest to be declared.
GdBCO (123) HTS 2G tapes superconducting characteristics investigation under the pulsed electron beam exposure impact

Samples of 2nd-generation (2G) composite HTS tapes based on the GdBCO (123) compound were irradiated in a pulsed electron accelerator (Terek-2 facility, IOF RAS) through a tantalum target to determine the effect of thermal and shock loads at the tantalum-HTS boundary on the superconducting characteristics of the HTS tapes. Scanning Hall magnetometry was used to characterize the HTS samples after the electron irradiation exposure. The thermal regimes and the shock load in the composite superconductor under irradiation are estimated as a function of the energy deposited in the target. When the silver surface temperature reaches the melting temperature, the critical current drops by 87% of its initial value. At lower energies a weaker decrease in the critical current was observed. The roles of temperature effects and shock waves under irradiation are discussed.

Introduction

The production technologies and physical parameters of second-generation high-temperature superconducting (HTS-2) wires have now reached a level at which the engineering development of unique devices, mechanisms and machines for use in the space and aerospace fields and in nuclear physics has begun. These include energy storage devices, electric motors and powerful magnetic systems. Under operating conditions, HTS can be exposed to various external factors, such as ionizing radiation, shock waves and high temperatures, which affect the functional parameters of the superconductors. Extreme impacts are therefore simulated in the laboratory, allowing an assessment of the viability of HTS-2 under these conditions. A large number of studies of superconductivity under ionizing radiation are available in the literature. The results show that in certain cases an increase in the critical current was observed as a result of the creation of radiation defects, which act as pinning centers for Abrikosov vortices. This was previously discovered for the Nb3Sn superconducting compound (see, for example, [1,2]). Improvements of the critical parameters of HTS cuprates under irradiation with ionizing particles of various natures have also been obtained (for example, [3-6]). In addition, improvement of the critical parameters of superconductors under the action of shock waves, owing to the creation of more equilibrium structural phase states and the formation of dislocation interstitial loops and vacancy pores acting as pinning centers, was discovered in [7,8]. This work is devoted to the study of changes in the superconducting properties of HTS-2 tape samples based on the GdBa2Cu3O7-x compound upon relativistic electron beam (REB) irradiation at the Terek-2 facility.

Experimental methods

The Terek-2 experimental installation is a pulsed electron accelerator with a pulse current of up to 10 kA and a pulse duration of up to 35 ns. The beam electron energy can be continuously adjusted in the range from 200 to 550 keV. The accelerator uses a storage capacitor (2.5 μF, 50 kV) as a charger, which is discharged through a pulse autotransformer to a double forming line. As a result, a trapezoidal high-voltage pulse is generated in the gap between the anode and cathode, producing a high-energy electron current pulse due to explosive emission at the cathode. The sample irradiation scheme is shown in figure 1.
To avoid direct action of the electron beam on the superconductor, a tantalum target 12×12 mm² in size and 0.1 mm thick was placed in front of the sample; the HTS sample was glued to the back side of this target. The exposure modes are given in Table 1. Segments of HTS-2 tapes based on the GdBCO (123) compound, measuring 12×12×0.1 mm³ and manufactured by SuperOx, were used as samples. HTS-2 tapes are multilayer composites with the following technical characteristics: critical temperature 93 K, critical current density at T = 77 K of jc = 2×10⁶ A/cm² [9]. The architecture of the composite tape is shown in figure 2. Buffer layers of La2O3, MgO, Y2O3 and Al2O3 with a total thickness of 200-300 nm were deposited on a Hastelloy C276 substrate with a thickness of 100 μm, onto which a 1-μm GdBCO (123) layer was deposited by laser sputtering, and a 2-μm Ag layer was deposited on top. The critical temperature was measured before and after irradiation with direct current using the four-probe method. To study the spatial distribution of the critical current in the HTS tape after the REB action, the scanning Hall magnetometry method was used. An automated experimental bench was used, which included a three-coordinate system for moving the sensor and permanent magnets for magnetizing the superconductor. The sample, previously cooled in liquid nitrogen, passes through the gap between two closed permanent magnets, so that shielding superconducting currents are created in the superconducting layer. Using the Hall sensor, the trapped magnetic field above the sample is scanned; the measurement height is 0.5 mm. A Hall transducer with the following characteristics was used: transducer size 2×1.5×0.6 mm³, sensor working area 0.45×0.15 mm², magnetic sensitivity 94 μV/mT. From the obtained two-dimensional distribution of the normal component Bz of the magnetic field, the critical current and the local distribution of the current components along and across the sample were determined by inverting the Biot-Savart law within the framework of the Bean model [10-12]. Conclusions about local changes in the current-carrying characteristics of the HTS tape samples are drawn from the presence of current domains and their topology. In a first approximation, Ic ~ grad Bz.

Experimental results

The results of measuring the trapped magnetic flux induction Bz by scanning Hall magnetometry are shown in figure 3(a-d) for the original sample and three irradiated samples. As can be seen from the illustrations, in the case of sample №1 (E = 42 J) the uniformity of the distribution corresponds to the original sample, but the current amplitude is 1.8 times lower than the initial one. For sample №3 (E = 26 J) the uniformity of the distribution also corresponds to the original sample, and the current amplitude is 5.5% lower than the initial one. In the case of sample №2 (E = 60 J), figure 3(d), the tantalum target was severely deformed and pierced. Sample №2 shows a significant current heterogeneity with respect to the original sample: in the middle part (presumably the impact region) only isolated superconducting sections remain, while the current loops along the sample edges correspond to those of the original sample. The current amplitude is 6 times lower than for the initial sample. In all other cases (samples №№ 4, 5, 6) no effect on superconductivity was found; the electron beam energy was too low.
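As a rough illustration of the first approximation Ic ~ grad Bz used above, the sketch below computes the gradient magnitude of a scanned trapped-field map. The field profile, grid step and units are hypothetical placeholders; this is not the actual measurement or inversion code used in the experiment.

```python
# Minimal sketch (assumptions only): estimating a relative local critical-current
# measure from a scanned trapped-field map using Ic ~ |grad Bz|.
import numpy as np

# Hypothetical 12 x 12 mm scan of the trapped field Bz (mT) on a 0.25 mm grid.
step_mm = 0.25
x = np.arange(0.0, 12.0 + step_mm, step_mm)
y = np.arange(0.0, 12.0 + step_mm, step_mm)
X, Y = np.meshgrid(x, y)

# A cone-like trapped-field profile, roughly what a uniform Bean-state sample gives.
Bz = np.maximum(0.0, 30.0 - 6.0 * np.hypot(X - 6.0, Y - 6.0))  # mT

# |grad Bz| in mT/mm; in the Bean model this is proportional to the local sheet current.
dBz_dy, dBz_dx = np.gradient(Bz, step_mm)
grad_mag = np.hypot(dBz_dx, dBz_dy)

# A single relative figure of merit for comparing samples before and after irradiation.
print(f"mean |grad Bz| = {grad_mag.mean():.2f} mT/mm (relative Ic measure)")
```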
Heat processes calculation

To explain the experimental results obtained, the thermal and shock-wave processes occurring in the material under study (a composite superconductor) as a result of irradiation of a tantalum target with a relativistic electron beam were evaluated. As can be seen from Table 1, six samples were exposed with different energy input levels, from 8 to 60 J. Since the tantalum foil 100 μm thick was directly exposed to the electron beam, the pressure and temperature perturbation on the HTS was determined by the temperature and pressure on the side of the foil facing away from the electron beam (figure 1). As a result of irradiation of sample №2 (E = 60 J), the outer side of the tantalum target was strongly deformed and melted. The sample itself underwent mechanical damage with traces of silver melt. When the beam energy was reduced to E = 42 J and lower, mechanical damage to the HTS samples (№ 1, 3, 4, 5, 6) and to the tantalum foil was not observed. To assess the effect of the temperature at the Ta foil-silver boundary on the superconducting characteristics of the HTS samples, the energy deposition in the region of interaction between the electron beam and the tantalum target was estimated. When a target is irradiated with an electron beam, part of the beam is reflected from the target. The fraction of reflected electrons and the energy carried away by them depend weakly on the initial beam energy and are determined mainly by the nuclear charge Z of the target material; for tantalum this value is ~40% [13]. Other types of beam energy loss (0 < E < 2.5 MeV) upon irradiation of metal targets do not exceed 1-3%. Thus, ~50% of the beam energy goes into heating the region of interaction between the electron beam and the tantalum target. The temperature on the outer surface of the tantalum foil is estimated for the beam energies given in Table 1 on the basis of the simplest model concepts. First of all, we assume that the substance (Ta) absorbing the beam energy can be considered locally equilibrium during the current pulse: an increase in internal energy causes only a temperature increase, and the processes of heat and mass transfer can be neglected [14]. The temperature of the energy release zone in this case is

T(r, t) = T0(r, t) + n(r, Δt) ε(r, E) / (c(T) ρ),    (1)

where c(T) is the specific heat, ρ is the tantalum density, n(r, Δt) is the beam electron density, ε(r, E) is the energy lost by a beam electron during braking in the foil, and T0(r, t) is the tantalum temperature before the REB exposure. Formula (1) can be represented as

T = T0 + ηE / (c ρ V),    (2)

where E is the beam energy, η ≈ 0.5 is the fraction of the beam energy absorbed in the target, V = Re × Sc is the volume of the energy deposition zone, Re is the extrapolated range of 460 keV electrons in the tantalum foil, and Sc is the beam cross-sectional area. To determine the tantalum target temperature as a function of the beam energy, we determine the volume V of the electron beam energy deposition region in the Ta foil at the end of the REB pulse (beam energy 60 J, τ ~ 35 ns). It is known that the extrapolated range of electrons in a substance with charge Z and mass number A is related to the range in aluminum as

RE(Z, A) = RE(Al) (A/Z) (Z/A)Al,    (3)

where RE(Al) is the extrapolated electron range in aluminum [15]. Using this relation, the volume of the energy input region can be determined; it is ~1.0×10⁻⁸ m³. As the beam energy changes, the volume of the energy deposition region remains practically unchanged (electron energy ~460 keV). With this in mind, substituting the corresponding values of the beam energy into equation (2), we obtain the temperature in the energy deposition region for the various beam energies (Table 1, last two rows).
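A back-of-the-envelope version of this estimate is sketched below using equation (2). The tantalum constants and the initial temperature are assumed handbook values rather than values taken from the paper, so the result is only a constant-property, order-of-magnitude check of the trend reported in Table 1.

```python
# Minimal sketch of the estimate in equation (2); all constants are assumptions.
RHO_TA = 16650.0      # kg/m^3, tantalum density (handbook value)
C_TA = 140.0          # J/(kg K), tantalum specific heat near room temperature
C_L = 4100.0          # m/s, longitudinal sound velocity in Ta (handbook value)
ETA = 0.5             # fraction of beam energy absorbed (~50%, as stated above)
V = 1.0e-8            # m^3, volume of the energy deposition zone (from the text)
T0 = 300.0            # K, assumed initial temperature

for E in (8.0, 26.0, 42.0, 60.0):                      # beam energies, J
    dT = ETA * E / (C_TA * RHO_TA * V)                 # equation (2)
    print(f"E = {E:4.0f} J  ->  T ~ {T0 + dT:6.0f} K")

# For reference, the deposited energy density versus the acoustic threshold
# 4*rho*cL^2 that enters the shock-wave criterion discussed in the next subsection.
E = 60.0
print(f"E/V = {E / V:.1e} Pa  vs  4*rho*cL^2 = {4 * RHO_TA * C_L**2:.1e} Pa")
```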
Irradiation of the tantalum foil by a REB pulse leads to a pressure increase in the heated zone immediately after absorption of the electron beam energy, which can lead to the formation of a shock wave (SW). The criterion for SW formation [16] has the form

E/V ≥ 4 ρ0 cL²,    (4)

where E is the total energy absorbed during a pulse of duration τ, V is the volume of the energy deposition zone, cL is the longitudinal sound velocity in Ta, and ρ0 is the tantalum density. Inequality (4) does not hold in our case (E/V ≪ 4ρ0cL²). Irradiation of the tantalum target by the high-energy electron beam with energies of 8-60 J in the scheme shown in figure 1 leads to heating of the Ta-Ag interface from 450 K to 3000 K. At such energy depositions in the Ta target a shock wave does not form.

Conclusions

The estimates presented show that irradiation of the HTS tape samples through a 100 μm tantalum target with REB pulses with an energy deposition of 42 J or 26 J leads to small thermal disturbances, as a result of which the critical current decreases by about 50% and 5.5%, respectively. A REB with 60 J energy leads to a significant (sixfold) decrease in the critical current and to mechanical damage of the tape sample. The dependence of the critical current on the deposited REB energy (figure 4) gives reason to consider that the observed changes are associated only with thermal and not with shock-wave factors. The decrease in the critical current under the REB influence may be due to the transition of the orthorhombic superconducting phase to the tetragonal non-superconducting one. This structural phase transition for YBCO (123) occurs at a temperature Tp ≈ 1000 K [17]. We assume that Tp has the same value in GdBCO (123). With an increase in the energy input, the fraction of the superconducting phase decreases, which leads to a decrease in the critical current value (figures 3 and 4).
Wildlife Insights: A Platform to Maximize the Potential of Camera Trap and Other Passive Sensor Wildlife Data for the Planet

Summary

Wildlife is an essential component of all ecosystems. Most places in the globe do not have local, timely information on which species are present or how their populations are changing. With the arrival of new technologies, camera traps have become a popular way to collect wildlife data. However, data collection has increased at a much faster rate than the development of tools to manage, process and analyse these data. Without these tools, wildlife managers and other stakeholders have little information to effectively manage, understand and monitor wildlife populations. We identify four barriers that are hindering the widespread use of camera trap data for conservation. We propose specific solutions to remove these barriers integrated in a modern technology platform called Wildlife Insights. We present an architecture for this platform and describe its main components. We recognize and discuss the potential risks of publishing shared biodiversity data and a framework to mitigate those risks. Finally, we discuss a strategy to ensure platforms like Wildlife Insights are sustainable and have an enduring impact on the conservation of wildlife.

Introduction

Wildlife is an essential component of all ecosystems. The unsustainable removal of wildlife (defaunation) can create cascading effects that affect plant communities (e.g., Harrison et al. 2013) and thus the ecosystem functions and services they provide (Dirzo et al. 2014, Kurten 2013). For example, without wildlife, the carbon storage capacity of tropical forests could decrease by up to 12% (Osuri et al. 2016), or by as much as 26-37% in Amazonian forests (Peres et al. 2016). Since conserving forests represents as much as 37% of the solution to mitigating climate change (Griscom et al. 2017), ensuring wildlife population numbers are stable is necessary in order to maximize carbon storage. However, the health of wildlife populations often gets overlooked in conservation programmes. There is growing evidence that many populations of wildlife are declining globally (Dirzo et al. 2014, Ripple et al. 2016), but currently, we do not have access to timely, local and reliable primary (unprocessed) data on the status of these populations for most places on the globe.
The Living Planet Index (LPI), an annual indicator that measures changes in vertebrate wildlife population levels at the global level, reported an average decrease of 60% in the abundance of 16 704 populations comprising 4005 species of vertebrates between 1970 and 2014 (WWF 2018). The LPI is calculated by compiling published population time series studies, which can limit its interpretation given that it represents past states of population abundance (due to a built-in lag in the publication of journal articles), key features of the raw data are not available (e.g., error estimates) and publications can be biased (e.g., there are more publications on populations that are decreasing or increasing versus those that are stable). Other evidence for wildlife population declines comes from global assessments by the International Union for Conservation of Nature (IUCN), which compiles information from various sources to assess the conservation status of wildlife. These sources have different degrees of uncertainty or weight, rendering them difficult to interpret and understand (O'Brien et al. 2010). For mammals and birds, assessments only happen every 4-5 years, which is not frequent enough to capture rapid population changes. The available primary data on wildlife populations are largely incomplete and spatially biased, particularly in megadiverse regions with limited resources (Meyer et al. 2015). This information must be publicly accessible, sufficiently detailed and in a digestible format to be useful at several scalesfrom individual protected areas to countries, regions and continents. The miniaturization of hardware and explosion of digital data types (images, video, sound) over the last 15 years has led to new devices and systems that can be used to sample wildlife populations in a cost-effective way (O'Brien & Kinnaird 2013). Camera traps are the most popular such approach, and their use has increased exponentially over the past decade, with thousands of individuals and organizations collecting these data (Steenweg et al. 2017). Methods to deploy camera traps over large areas in order to monitor and evaluate populations have been well developed (TEAM Network 2011, Wearn & Glover-Kapfer 2017. These sensors are easy to set up, provide a verifiable record of the data (the image or the video) and sample a range of medium-to-large grounddwelling mammal and bird species of interest to conservation. These data can be useful to governmental and non-governmental organizations (NGOs) (Ahumada et al. 2016), academic and research institutions, extractive companies, citizen scientists (McShea et al. 2016) and local and indigenous communities (Schuttler et al. 2019). However, data collection has increased at a much faster rate than the development of integrated tools to manage, process and analyse these data (Fegraus et al. 2011, Harris et al. 2010. A recent global survey of camera trappers (Glover-Kapfer et al. 2019) found that 61% of respondents identified image cataloguing and data analysis as substantial barriers to effective camera trapping. Without these tools, it is difficult to generate insights from the raw camera trap data and effectively manage wildlife populations. We identify barriers hindering the use of camera trap data today and describe solutions to take advantage of these data for science and conservation. We propose integrating these solutions in Wildlife Insights (WI), a unique technology and knowledge platform. 
Barriers faced by collectors and users of camera trap data Significant barriers prevent the wildlife management, scientific and conservation communities from taking advantage of camera trap data. First, the camera trap data pipeline is too slow, laborious and tedious. Processing thousands of images from a typical camera trap survey can take several weeks or months. Second, most camera trap data are not shared. Because of the sheer volume of data, most data collectors use only removable or built-in hard drives for data storage. This makes data vulnerable to loss (Michener & Jones 2012), creates data silos (data are not easily accessible) and is a barrier to collaboration and data exchange (Scotson et al. 2017). Third, most data users are unable to easily analyse camera trap data or extract insights for conservation. Camera trap data are inherently difficult to analyse, and simple questions such as, 'Is this species population increasing or decreasing at my site?' can only be answered by a minority of users with training in advanced statistics. Furthermore, most of these users reside in high-income countries, leaving low-income, biodiversity-rich countries vulnerable to analytical gaps. Finally, most of the existing camera trap hardware was not originally designed for science and conservation, but was shaped to satisfy the needs of wildlife hunters. Most current camera trap models lack basic environmental sensors (e.g., light, temperature, humidity), distance and speed measurement sensors (important for estimating movement and density of animals), global position system capabilities, Bluetooth connectivity and other smart features (e.g., identifying blanks or misfires) (Table 1). A solution to overcome these barriers: Wildlife Insights We propose to address these barriers through a technology and open knowledge platform, WI, where global wildlife data can be aggregated, analysed and shared. We propose a specific solution to each of the barriers listed above and then explain how these solutions will be integrated into the platform (Table 1). First, the data processing pipeline needs to be sped up by at least tenfold. New software that uses artificial intelligence (AI) to automatically identify the species in the image, learns as new species are added to the system and is integrated in a user-friendly interface is needed. This should be coupled with an intuitive and clean interface to manage images, sequences of images and projects all in one place. Second, data silos should be eliminated and data exposure increased, where appropriate, by developing a solution whereby data will be securely archived and available to the wider community of conservationists, scientists and managers. Flexible data licensing models (e.g., Creative Commons) should be available to easily allow data collectors to share data and potential data users to build upon data that are being shared. Third, analytical tools and interfaces that enable the easy visualization and analysis of camera trap data at different levels of aggregation, such as various temporal, spatial and taxonomic scales, are required. A key goal is to improve the ability of wildlife managers and others to make local decisions on the ground based on wildlife data and the socio-environmental context surrounding them. 
For example, WI could provide wildlife biodiversity outcome indicators that can be integrated into emerging frameworks in order to assess protected area management effectiveness such as IUCN Green List Standards (IUCN & World Commission on Protected Areas 2017). The development and adoption of a new generation of hardware that fits the needs of the wildlife conservation science community should be accelerated. We envision partnering with NGOs already developing next-generation sensor technologies and technology companies in order to achieve this.

Wildlife Insights platform components

There are three basic components to ensure the WI platform fulfils its intended functionality: data input; AI and analysis; and data sharing (Fig. 1). Some of the functionality of the platform is illustrated in Fig. 2, with an emphasis on image display, review and management (Fig. 2(a-c)), as well as some simple analytics (Fig. 2(d)) at the project level.

Data input component

This is composed of a data ingestion module and a sensor management module. The data ingestion module allows data to come into the system in four different ways: new raw/unprocessed images; existing catalogued camera trap data uploaded in a batch; catalogued data coming into the system by using application programming interfaces (APIs); and data coming from a desktop client or smartphone app. The sensor data management module organizes all of this information in a database including images, the metadata associated with those images (e.g., time, date, etc.) and ancillary environmental/social covariates associated with a camera trap location or project (e.g., temperature, elevation, etc.). When uploading existing catalogued data either in batch mode or via an API, the data ingestion module implements quality assurance and quality control measures in order to identify typos in species identifications, location errors and inconsistencies in time, date and other variables, and then presents these to the user for correction in order to ensure the ingested data are of the highest quality.

Artificial intelligence and analysis component

This layer consists of two main modules: a wildlife identification service (WIS) and an analytics engine module. In its initial version, the WIS is a multiclass classification deep convolutional neural network model using pre-trained image embedding from Inception, a model that is widely used in image processing for identifying real-world objects (Szegedy et al. 2016). We fine-tune the model using the extensive labelled dataset from the WI partners, taking advantage of temporally and geospatially correlated images to increase accuracy and develop models in order to identify images without wildlife. The WIS will be trained with c. 18 million labelled images from the core partners of WI, and it will be continually retrained as new data come in and better identifications are made. Users will run inferences in the cloud as images are uploaded to the platform and locally in edge environments on desktop and smartphone devices. An additional part of the data processing and analysis component is an analytics engine and reporting services module that provides key indicators and reports at the species, project, initiative (collection of projects) and global levels.
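As an illustration of the kind of transfer-learning setup described for the WIS (a multiclass classifier built on a pre-trained Inception image embedding), the sketch below uses Keras. The directory path, class count and training settings are hypothetical placeholders, and this is not the actual Wildlife Insights implementation.

```python
# Minimal sketch (not the actual WIS): a species classifier on top of a
# pre-trained Inception embedding. Paths and class counts are hypothetical.
import tensorflow as tf

NUM_CLASSES = 100          # hypothetical number of species labels
IMG_SIZE = (299, 299)      # InceptionV3 input size

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False     # use Inception as a fixed embedding; fine-tune later

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # Inception expects [-1, 1]
    base,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical folder of labelled camera-trap images, one subfolder per species.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "camera_trap_images/", image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, epochs=5)
```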
A first level of analytics provides operational statistics for each camera trap project (or initiative), including administrative information, project goals, number of sampling locations, number of deployments (a set of camera trap data coming from a particular spatial location collected over a finite time interval), sampling effort, number of images and observation events and number of species detected. A second level of analytics and visualization tools includes simple statistics and models that can be derived using existing approaches. These include analyses at the species/individual level (e.g., activity budgets), population level (e.g., temporal trends from different metrics such as occupancy, point abundance and density) and community level (species diversity indices). A third level of analytics provides users with more complex spatial and/or temporal products that use environmental/social covariates and allow spatial and/or temporal inferences for various population and community metrics in unsampled locations within the vicinity of a camera trap project (e.g., an entire protected area). These covariates can come from existing standardized global layers (e.g., climate, elevation, protection status) or be provided by data providers at the project level (e.g., habitat structure, management level). In a fourth level, the camera trap data are envisioned to be integrated with other biodiversity data types (i.e., incidental occurrence records, animal movement data, expert range maps, checklists and modelled distributions) from other platforms (i.e., Map of Life, Half Earth) to support global biodiversity conservation (Jetz et al. 2012). The integration of camera trap data will offer an unprecedented view of global conservation patterns, which will support the calculation of essential biodiversity variables (EBVs) (Kissling et al. 2018, Pereira et al. 2013), specifically the species abundance and species distribution EBVs (Jetz et al. 2019), and indicators for the Intergovernmental Science-Policy Platform for Biodiversity and Ecosystem Services (IPBES). A final level of analytics will aggregate all of the camera trap data in WI to produce global products, including population and community indices that can be disaggregated at lower spatial scales (e.g., a global Wildlife Picture Index, disaggregated by country or region, similar to the Living Planet Index) (O'Brien et al. 2010). All of these products are being designed in a consultative way, with end users in mind, to ensure that they inform understanding of wildlife as well as provide insights for conservation and management at the local scale.

Data sharing component

How the information is shared and presented is key to ensuring achievement of the desired goals of WI. A key module of this component is a front-end multilingual website with basic information about WI, engaging visualization tools and statistics, camera trap resources and standards and a customizable content management system, which allows users/clients to personalize the way they organize, analyse and work with their data. A platform preview website is at www.wildlifeinsights.org. People and/or organizations that upload their data to WI (data providers) have the option of sharing their data under a flexible license model outlined by the Creative Commons, including public domain CC0, CC BY 4.0 (attribution required) and CC BY-NC 4.0 (attribution non-commercial) (https://creativecommons.org/licenses).
Data providers can embargo their data on the platform for an initial period of 1-24 months with the possibility of renewal up to a maximum of 48 months. Embargoed data are not publicly displayed on the platform, but are available to a Wildlife Insights Analytics Taskforce for the purpose of developing platform-wide derived products that are publicly displayed on the WI website (e.g., Wildlife Picture Index). Camera trap images containing people will not be publicly displayed on the platform (see 'Potential risks' section). People and/or organizations looking for camera trap data on WI (data users) can browse and download publicly available data, including images and all associated metadata (time, date, camera trap location, species name, etc.), after registering on the platform. When downloading WI data, data users need to record an intended purpose (e.g., use for class exercise or research). Whenever a dataset is downloaded by a data user, the corresponding data provider will receive a notification with information about the data user and intended purpose of use (if shared under CC BY license). This will foster collaboration and communication between data users and data providers while ensuring appropriate attribution if required by the specific data sharing license.

Wildlife Insights organizational structure

Creating a solution with these characteristics cannot be done by one single organization or organization type. This challenge requires a partnership between institutions that collect camera trap data as part of their core mission, organizations with core competencies in data analysis and modelling and technology companies that are at the forefront of emerging software and hardware development in the era of big data. It also requires technologists and conservationists that can bridge domain-specific semantics to design intuitive interfaces for managing and visualizing these data in a way that makes sense to a wide variety of audiences. To date, WI is a partnership between eight organizations (the core partners), including NGOs (Conservation International, Wildlife Conservation Society, World Wildlife Fund, Zoological Society of London), academic organizations (The Smithsonian Institution, North Carolina Museum of Natural Sciences, Yale University) and a technology company (Google). The governance of WI follows the model of many open source software collaborations, with three different bodies sharing responsibility and guiding the decisions of WI: a steering committee (SC); four thematic standing committees; and a secretariat. The SC is the highest governing body of the platform, and it is composed of individuals representing each of the core partner organizations, chairs of each of the four standing committees (see below) and an executive director from the secretariat. The SC reviews and approves the high-level financial and programmatic agenda of WI. The sharing of members between the standing committees and the SC provides for a more open and transparent decision-making process. There are four programmatic standing committees: Technology; Science and Analytics; Communications; and Sustainability. Each of these committees, composed of individuals from the core partners, advisors and members of the camera trapping community at large, provides programmatic and strategic guidance to the SC and the secretariat on these thematic areas.
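A minimal sketch of how the sharing rules described above (a Creative Commons license choice plus a renewable embargo capped at 48 months) could be represented is given below. The class and function names are illustrative only and do not reflect the platform's actual data model or API.

```python
# Illustrative sketch only (not the WI codebase): data-sharing rules with a
# Creative Commons license choice and a renewable embargo of 1-24 months,
# capped at 48 months in total.
from dataclasses import dataclass
from datetime import date, timedelta

ALLOWED_LICENSES = {"CC0", "CC BY 4.0", "CC BY-NC 4.0"}
MAX_TOTAL_EMBARGO_MONTHS = 48

@dataclass
class Dataset:
    provider: str
    license: str
    embargo_start: date
    embargo_months: int = 0          # 0 means no embargo

    def extend_embargo(self, extra_months: int) -> None:
        """Renew the embargo, never exceeding the 48-month cap."""
        self.embargo_months = min(self.embargo_months + extra_months,
                                  MAX_TOTAL_EMBARGO_MONTHS)

    def is_public(self, today: date) -> bool:
        """Embargoed data stay private; afterwards the chosen license governs reuse."""
        if self.license not in ALLOWED_LICENSES:
            return False
        embargo_end = self.embargo_start + timedelta(days=30 * self.embargo_months)
        return today >= embargo_end

ds = Dataset("Example NGO", "CC BY 4.0", date(2019, 1, 1), embargo_months=24)
ds.extend_embargo(24)
print(ds.is_public(date(2022, 6, 1)))   # False: still within the 48-month embargo
```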
Finally, a small secretariat office (currently at Conservation International) ensures the implementation of major programmatic and operational activities and guides collaborative fundraising activities among the partners.

Potential risks

Sharing location-based biodiversity data incurs potential risks, particularly for threatened species. The main risk is the perverse use of spatial information on threatened species by wildlife traffickers and poachers. WI aims to mitigate this risk by adopting a recent framework that provides a decision tree on whether or not biodiversity information should be published based on a cost-benefit assessment (Tulloch et al. 2018). WI will maintain and periodically review a list of camera-trapped species that have been assessed as Critically Endangered (CR), Endangered (EN) or Vulnerable (VU) according to the latest version of the IUCN Red List and for which hunting and/or collecting have been identified as threats. WI will not make public the exact locations of camera traps that have recorded these species; instead, summary information at a coarse spatial scale will be shown. Individual data providers will have control over releasing this information to particular data users for legitimate uses if requested. Given the ubiquity of national and local red lists that extend IUCN's global Red List, data providers will have the option of adding up to five additional species that are considered threatened or sensitive by local and national red lists. The locations of these species will also be obscured following the guidelines outlined for threatened species. Posting images with humans also poses a privacy risk. Human images will be kept private to each data provider, will not be public and will include the ability to face-blur and delete the images altogether. Finally, WI will use industry security standards to ensure data are safe and backed up regularly.

The future of Wildlife Insights and other shared data platforms

We envision WI as a long-lasting solution to a problem currently faced by camera trap and other passive sensor data, not as a regular science or conservation project. Given that the amount of wildlife and environmental data is growing at an unprecedented rate (Farley et al. 2018, Stephenson et al. 2017), WI's success, and the success of existing and new shared data platforms, is strongly tied to its permanence, growth and continuity. Pathways to sustainability need to be found that go beyond external philanthropic funding. WI requires a business plan to address this problem explicitly from the beginning. This plan needs to identify ways to generate revenue such as software as a service for certain types of users (e.g., companies and/or large companies), core partner contributions, multilateral and bilateral funding and interest-generating financial instruments. A sustainability strategy would allow WI to maintain core operating costs, grow and innovate while at the same time maintaining the platform as free of charge for most of its users. To be successful, WI needs to demonstrate that timely and reliable data can be mainstreamed in the design and implementation of wildlife conservation programmes in a proactive, evidence-based approach. More importantly, it needs to show that this approach has impact on wildlife populations, helping to ensure their future conservation.
Recent advances in Staphylococcus aureus infection: focus on vaccine development Abstract Staphylococcus aureus normally colonizes the nasal cavity and pharynx. After breaching the normal habitat, the organism is able to cause a number of infections at any site of the body. The development of antibiotic resistance has created a global challenge for treating infections. Therefore, protection by vaccines may provide valuable measures. Currently, several vaccine candidates have been prepared which are either in preclinical phase or in early clinical phase, whereas several candidates have failed to show a protective efficacy in human subjects. Approaches have also been made in the development of monoclonal or polyclonal antibodies for passive immunization to protect from S. aureus infections. Therefore, in this review we have summarized the findings of recently published scientific literature to make a concise report. A scenario of Staphylococcus aureus infection Staphylococcus aureus is a common human pathogen which can colonize the skin, nose, and pharynx with anterior nares as the main reservoir. 1,2 S. aureus is one of the major disease-causing organisms due to its unique ability to escape the innate immune response such as phagocytic, complement or antimicrobial peptide (AMP)-mediated killing, which assists survival in blood and other tissue during persistent infections. 3 S. aureus has been found to be associated with a high rate of health care-associated infections (HAIs) in hospitalized and immuno-compromised patients as well as community-acquired infections (CAIs). 4 A report found the nasal colonization of S. aureus in 37.8% of adults which rose up to 54.7% when throat samplings were added for detection. 5 In fact, the challenges of HAIs and CAIs have increased in the last two decades. This organism has acquired an ability to cause a wide range of infections, from minor infections such as skin and eye infections to major infections such as bloodstream infections (BSIs) and pneumonia. [6][7][8] Multi-drug-resistant S. aureus has been found to be one of the major organisms causing BSIs which are associated with high morbidity and mortality worldwide. 9 Among BSIs, neonatal septicemia has been reported to be most commonly caused by this organism. 10 Epidemiological studies found that BSIs-causing pathogen differs significantly between developed and developing countries. 11 A recent Europen report from a Finnish Hospital Infection Program which was conducted during 1999-2001 and 2005-2010, found that S. aureus ranked among the top three organisms causing BSIs. 12 Moreover, in another nationwide observational study conducted recently in Switzerland on all intravascular catheter (IVC) tip culture cases, S. aureus was reported as one of the most prevalent organisms causing subsequent BSIs in nonintensive care (non-ICU) and ICU patients. The findings also highlighted that particular attention should be paid if Candida albicans, S. aureus, Serratia marcescens, and Pseudomonas aeruginosa are isolated from IVC tips, as these organisms are associated with a higher frequency of subsequent BSIs than other pathogens. 13 It has also been found that S. aureus was the leading organism causing native and prosthetic valve infection in high-income countries. 14 Besides, S. aureus has also been isolated from lower respiratory tract infections such as pneumonia. 
Several clinical studies have highlighted its role as the predominant organism causing ventilator-associated pneumonia (VAP), [15][16][17] which is the single most common HAI in ICUs around the world. 18,19 A surveillance study conducted in European Union (EU) and European-Economic Area countries on health care-associated pneumonia (HAP) reported that 12% of cases were caused by S. aureus, which was the second most prevalent bacteria causing HAP, with 47% isolates resistant to methicillin. 20 Despite causing infections in seriously ill patients, S. aureus has also been reported as the most predominant bacterial causative agent of communityacquired pneumonia. 21 Cystic fibrosis, a predominantly P. aeruginosa-associated disease, has also been found to be caused by S. aureus. 22 S. aureus and antimicrobial resistance The emergence of infections caused by drug-resistant bacteria is a serious and growing global health concern. Therefore, significant efforts are being made in the development of new antimicrobial compounds with improved efficacy. 23,24 However, despite these efforts, an increasing number of multidrug-resistant bacteria including methicillin-resistant S. aureus (MRSA), extended-spectrum beta-lactamase (ESBL) producing Enterobacteriaceae, and carbapenem-resistant Gramnegative bacteria are being reported continuously. [25][26][27] Once, beta-lactams, aminoglycosides, fluoroquinolones, macrolides, and trimethoprim-sulfamethoxazole were considered effective antibiotics to treat infections caused by S. aureus. However, its abuse and misuse have caused resistance and up to 85% of isolates have been reported to be non-susceptible to most of these antibiotics in current clinical use. [28][29][30] In recent years, antimicrobial resistance has become a major public health issue and MRSA strains which have developed resistance to all beta-lactam antibiotics including penicillins, cephalosporins (except ceftaroline and ceftobiprole), and carbapenems have been reported to represent around 25% and even in some regions greater than 50%. The Centers for Disease Control and Prevention has reported 80,000 severe MRSA infections in the United States alone in 2011, with a rate of 11,000 deaths every year. 31,32 More than half of hospital-acquired infections are caused by S. aureus in most Asian countries. 33,34 Similarly, in 2012, MRSA was estimated to have caused infections in over 75,000 patients leading to the death of more than 9,600 in the United States. 35 In the EU, the proportion of fatal cases is about 50,000 caused by multidrug-resistant staphylococci out of approximately 3 million nosocomial infection cases, as reported by the European Centre for Disease Prevention and Control. 36 A Chinese surveillance study reported S. aureus as one of the major pathogens causing BSIs, with more than half of the strains isolated being resistant to penicillin, erythromycin, cefazolin, and cefuroxime, whereas proportions of MRSA ranged from 30%-40%. 37 In another study, conducted in 26 public hospitals in Hong Kong between January 2010 and December 2012, an increasing rate of MRSA was reported. 38 In a recent meta-analysis report from Asia Pacific regions, the proportion of MRSA among all tested samples was reported to be up to 39% and the proportion of MRSA among all S. aureus isolates was reported to be up to 89%. 39 Multi-drug-resistant S. 
aureus, including MRSA, can easily spread from the hospital setting to the community and within the community and poses additional problems for infection control strategies. 40 However, infection control programs have been implemented recently in several countries. In the United States, Europe, and many other countries, multiple infection control "bundles" such as allotting single rooms for MRSA-colonized or infected patients, targeted admission screening for high-risk patients and health care workers at high risk for infection with multi-drug-resistant pathogens, molecular typing of all MRSA strains, and decolonization of MRSA carriers, have been initiated and tested to control the spread. As a result of these strategies, a decreasing rate of MRSA has been reported. However, the pattern of drug resistance still remains a great challenge. Empirical treatment of presumptive S. aureus diseases with an alternative to the anti-staphylococcal beta-lactams such as clindamycin and trimethoprimsulfamethoxazole, became widespread during the 1990s when community-associated MRSA was on the rise until 2000s. 41,42 However, due to the overuse of these antibiotics, an increasing resistance continued to be reported and currently the resistance to these antibiotics pose a great threat to the treatment of infections. 43,44 However, in a recent observational study on pediatric clinical cultures performed between 2005 and 2017 in the United States, a declining trend of MRSA from 41% to 27% over the study period, yet an increasing trend of clindamycin (from 21%-38% in MRSA and 5%-40% in MSSA) and trimethoprim-sulfamethoxazole (from 2%-13% in MRSA and relatively stable in MSSA) resistance were reported. 45 Moreover, other studies have reported an increased incidence of MRSA as well as antibiotic resistance. [46][47][48][49] Multiple factors have been implicated in the development of antibiotic resistance, such as over-and misuse of antibiotics mostly in developing countries; however, biofilmmediated drug resistance in bacteria is another major mechanism and it has been predicted that if the current treatment practice continues unchanged, the infections caused by antibiotic-resistant bacteria would be a major cause of death in 2050 where the expected number of deaths will be around 10 million every year. 50 To cope with these multi-drug resistance problems, several anti-staphylococcal drugs such as vancomycin, teicoplanin, linezolid, tedizolid, daptomycin, tigecycline, ceftaroline, ceftobiprole, oritavancin, and dalbavancin have been approved for treating the life-threatening infections caused by multi-drug-resistant S. aureus. Moreover, currently, in some countries vancomycin and teicoplanin are the most commonly used drugs to treat MRSA infections. 51 However, increased MICs and reduced susceptibility to these antibiotics, poor tissue penetration, and adverse reactions due to the use of these antibiotics, have been reported to cause a limitation of its use in clinical practice. 43,44,[52][53][54][55][56][57][58] Because of the emerging problem of resistance, the World Health Organization (WHO) has listed MRSA and recently emerged vancomycin-intermediate and resistant S. aureus (VRSA) as "high-priority" deadly bacterial pathogens. 59 To overcome the challenging situations in the management of multi-drug-resistant S. aureus infections, alternative therapeutic strategies are of utmost importance. 
Recent advances in therapeutic strategies The increasing resistance to conventional antibiotics is the most common health issue worldwide. To overcome this problem, many natural antimicrobial compounds have been attracting many researchers' attention in the development of novel therapies for infections caused by the multi-drugresistant organisms. Several such compounds with antimicrobial properties have been reported recently in many studies. Peptides (amino acids) and their drug-conjugated derivatives AMPs are small peptides of less than 50 amino acids with a net positive charge, possessing broad-spectrum antibacterial activity, and have attracted considerable attention. 60 These AMPs exert antimicrobial activity by pore formation in the cell membrane and disrupting the membrane integrity. Although they do not need a specific ligand to bind, they exhibit capability to inhibit the activity of certain enzymes and prevent the protein and nucleic acid synthesis in bacteria. 61,62 The antimicrobial activity of AMPs such as dicentracin-like peptide and moronecidin, against Gram-negative bacteria (such as Escherichia coli, Acinetobacter baumannii, P. aeruginosa), Gram-positive bacteria (such as S. aureus, Staphylococcus epidermidis), and Candida spp. (such as C. glabrata, C. tropicalis, C. albicans) was evaluated and high activity against S. aureus, S. epidermidis, and E. coli and a lower activity were found against other Gram-negative bacteria such as P. aeruginosa and A. baumannii clinical isolates. Moronecidin was found to exert more potency than dicentracin-like peptide against S. aureus including MRSA. 63 Another such peptide, Hecate conjugated with vancomycin (Van/Hec) was tested in vancomycin-resistant and susceptible strains of S. aureus, and the microscopic findings revealed the disruption of bacterial cell integrity leading to the killing of all tested strains including wild-type, MRSA, and VRSA which was not observed when vancomycin or Hecate was used alone. 64,65 Human cathelicidin (LL-37) and thrombocidin-1 (TC-1) have been found to synergize the activity of amoxicillin/clavulanic acid and teicoplanin against S. aureus. 66,67 Xanthones are a class of heterocyclic compounds possessing the oxygen moiety which is widely distributed in nature, including two major plant families, Guttiferae and Gentianaceae, and also in fungi and lichens. [68][69][70] The pharmacological activities of naturally occurring and synthetic xanthone derivatives have been described in several recent pieces of literature. [71][72][73] Antibacterial activities of synthesized xanthone conjugated amino acids were recently evaluated against Gram-positive organisms (S. aureus and Bacillus subtilis) and Gram-negative organisms (E. coli and Klebsiella pneumoniea) as well as against several fungi (Aspergillus niger, C. albicans, and Fusarium oxysporum). 74 Anti-staphylococcal phenolic compounds Anti-staphylococcal phenolic compounds such as polyphenols (flavonols and phenolic acids) have been found to exert antimicrobial activity against several bacterial pathogens by inhibiting the activity of bacterial virulence factors, possessing a capability to interact with cytoplasmic membrane, suppressing the formation of biofilms, and can enhance the antimicrobial activity of antibiotics. The antibacterial activity of polyphenolic compounds against staphylococcal strains has been evaluated and found to exert a promising activity either alone or in combination with antibiotics. 
75 Anti-biofilm compounds Biofilm is a thick extracellular polysaccharide material produced by many organisms and its synthesis prevents many antibiotics from penetrating the bacterial cell and renders them resistant. It has been elucidated that more than 25% of infections are associated with the biofilm producing ability of the bacteria. Biofilm producing S. aureus develops the ability to grow within the biofilm and survive phagocytosis and antibiotic action. 76 Nanoscale materials such as silver nanoparticles have emerged as novel antimicrobial agents in combination with existing antibiotics and have shown the most effective antimicrobial activity in vitro. [77][78][79] Several recent studies have tested the efficacy of these silver nanoparticles in combination with antibiotics and they have been found to be a novel therapeutic strategy to treat infections caused by multi-drug-resistant organisms. [80][81][82] A synergistic effect increasing the antibiotic activity of penicillin combined with silver nanoparticles has been found against S. aureus including MRSA. [83][84][85] In a recent study, Manukumar et al described the efficacy of thymol-loaded chitosan silver nanoparticles (T-C@AgNPs) against biofilm producing MRSA using disc diffusion method. Using different concentrations of T-C@AgNPs from 10, 25, 50, 100, 200, and 250 μg/mL and comparing the concentration that produced 10.08±0.06 mm of zone of inhibition (ZOI) with the standard antibiotic ciprofloxacin (10 μg) that had 10.95±0.08 mm ZOI, a dose-dependent biocidal and antibiofilm activity was found. 86 Another recent study also described the antibacterial activity of benzodioxane midst piperazine decorated chitosan silver nanoparticles (BP*C@AgNPs). In the study, using well diffusion test by loading different concentrations of synthetic BP*C@AgNPs against biofilm producing MRSA, depicted the dose-dependent membrane damage leading to bacterial killing. The study also depicted the role of BP*C@AgNPs in the inhibition of biofilm synthesis leading to the decreased adherence of bacterial cells to each other. 87 Recent developments in active immunization Because antibiotic resistance has been found as the major issue in the treatment of infections caused by multi-drugresistant bacteria, vaccination could provide protection against the infections caused by antibiotic resistance as well as susceptible organisms. Primarily, the vaccine development focuses on the driving of antibody response which is able to block the toxins involved in the killing of immune cells as well as helping in the opsonization of bacterial cells. Therefore, several attempts have been made in the development of safe and effective vaccines (Table 1). However, some vaccine candidates failed to show significant protection and this may be because of overreliance on the antibodymediated protective response. 88 Capsular polysaccharides (CPs) as vaccine candidates Bacterial capsule is an extra-cellular material, which can be microscopically visualized using special techniques, covering the bacterial cells. Several bacteria have been found to possess the capsules such as E. coli, Neisseria meningitidis, Streptococcus pneumoniae, Haemophilus influenzae as well as S. aureus. Bacterial capsules are composed of long polysaccharide chains known as CPs. Capsules are the bacterial structure first recognized by the immune system, therefore, encapsulated bacteria have developed an immune evasion property which is exploited in the development of vaccines. 
89 The CPs have been targeted as an effective vaccine candidate for the protection from many bacterial infections such as S. pneumoniae, H. influenzae, and N. meningitidis. 90 As many as eight different serotypes of capsules such as CP 1-8 (CP1 to CP8) have been found in S. aureus; however, the majority of the isolates causing diseases possess CP5 and CP8 which are the major effective vaccine targets. [91][92][93][94] The expression of these CPs can be dynamic during infection, therefore, additional protein antigens are required for adequate protection. 95 In 2002, the first S. aureus vaccine StaphVAX, developed by Nabi Biopharmaceuticals, consisting of CP5 and CP8 conjugated to recombinant P. aeruginosa exoprotein A, was used as a vaccine candidate in patients receiving hemodialysis in its initial phase III clinical trials. However, the study failed to show a significant protective effect compared with placebo in a follow-up period of 3-54 weeks postvaccination. It was suggested that it may be due to many reasons such as the population targeted, production of the suboptimal conjugate, or varying conjugate manufacture between trials; however, partial protection with a significant reduction in the S. aureus bacteremia number in the follow-up period of 3-40 weeks post-vaccination was found in a subsequent trial. 96 Based on this partial protection, Fattom et al conducted a similar study using StaphVAX in the same patient population receiving hemodialysis. The assessment of the protective efficacy in vaccine recipients vs placebo up to 35 weeks after receiving a single dose or up to 60 weeks after receiving one or two vaccine doses suggested no protection against S. aureus bacteremia. 97 The failure of this vaccine containing two single-antigens suggested that a multi-antigen vaccine containing several antigens might be successful. As a result, the first generation of multi-antigen vaccine containing threeantigens (S. aureus three-antigen [SA3Ag]) such as CP5, CP8 conjugated to the CRM 197 and ClfA was designed. 98 Recently, two types of vaccines namely, SA3Ag vaccine possessing CP5, CP8, and ClfA and S. aureus fourantigen (SA4Ag) vaccine possessing CP5, CP8, ClfA, and recombinant P305A developed from a lipoprotein manganese transporter C (MntC) have been successfully developed by the researchers, which have exhibited superior immunogenicity compared to previous vaccines. 96,[98][99][100][101][102][103][104] The studies have revealed that the previous vaccines generated anti-staphylococcal antibodies capable of binding with S. aureus leading to the uptake by phagocytic cells while the multi-antigen vaccines (SA3Ag and SA4Ag) are capable of inducing high level of antistaphylococcal antibodies that lead to the killing of S. aureus by increasing the phagocytosis of bacteria and were concluded to be safe with no significant increase in systemic adverse effects or local adverse effects in healthy adults. 96,104,105 The partial success of the first phase trial encouraged the researchers to design a novel multi-antigen vaccine (SA4Ag) containing CP5 and CP8 conjugated with CRM 197 (CP5-CRM 197 and CP8-CRM 197 ) together with MntC and ClfA antigens. 106 A multicenter phase I/II trial study conducted in the United States evaluated the immunogenicity, safety, and tolerability of SA4Ag vaccine in healthy adult volunteers of 18-64 years of age when injected as a single intramuscular dose. 
106 The findings of a recent animal model study demonstrated that this vaccine could elicit cytokine production by naive peripheral blood mononuclear cells, leading to the induction of anti-staphylococcal antibodies and a memory B-cell response. 107 A phase II/III study to evaluate the efficacy of the SA4Ag vaccine for the prevention of invasive S. aureus disease in patients between 18 and 85 years of age who have had elective spinal surgery is under way. 108,109 This vaccine was shown to be safe and well tolerated in the early stages of clinical trials, inducing high levels of bacteria-killing antibodies. 101

Table 1 Vaccine candidates discussed in this review (candidate antigen(s); development phase; reported outcome; references):
• CP5-CRM197, CP8-CRM197, MntC, and ClfA (SA4Ag); Phase I; robust immune response, safe and well tolerated, phase 2b ongoing; 91,97
• Alpha-toxin and Panton-Valentine Leukocidin; Phase I; good toxin-neutralizing sero-positive response; 103
• EsxA and EsxB; Preclinical; protection with improved survival in a murine model; 106
• Surface protein A (SpA); Preclinical; protection in a mouse model; 110,111
• D-alanine auxotrophic Staphylococcus aureus; Preclinical; protection from abscess formation and improved survival in immunized mice; 113
• AdsA; Preclinical; protection in the immunized mouse model; 114
• Coa (Hc-CoaR6); Preclinical; strong T-cell response and protection in mice against a lethal dose of S. aureus; 118
• Staphylococcal enterotoxin B; Preclinical; efficient protection in BALB/c mice; 122

Alpha-toxin and Panton-Valentine Leukocidin (PVL) S. aureus alpha-toxin is a highly conserved toxin that disrupts the tissue and endothelial barrier and enhances bacterial penetration. 110 PVL is a pore-forming, cytotoxic protein which destroys leukocytes and causes tissue necrosis. 111 A reduced risk of sepsis in adult patients with invasive S. aureus infection has been found with a higher level of IgG antibody against alpha-toxin. 112 A recent phase I study was conducted by Landrum et al in healthy adults aged 18-55 years to evaluate the safety and immunogenicity of recombinant alpha-toxoid (rAT) and recombinant PVL (rLukS-PV) in either monovalent or bivalent form. The subjects injected with the monovalent form were followed up on days 7, 14, 28, and 84, and those injected with the bivalent form received a second dose on day 84 and were followed up on days 98 and 112. Sero-positivity for toxin-neutralizing antibody was found in a high proportion of subjects against rAT and rLukS-PV. As a result, both the rAT and rLukS-PV vaccine formulations were found to possess a favorable safety profile, were well tolerated, and had high immunogenicity with neutralizing antibody when administered either alone or in combination in healthy adults. 113 Secretory proteins EsxA and EsxB as a vaccine model The bacterial secretion system helps the bacteria to transport virulence factors into host cells. The type VII secretion system is the best-characterized system in S. aureus. The early secreted antigenic target-6 kDa (ESAT-6) secretion system (ESS), a specialized secretion system similar to the Esx-1 secretion system described in Mycobacterium tuberculosis, has also been identified in S. aureus. The ESS in S. aureus consists of 12 proteins, including the highly conserved EsxA and EsxB, closely related to ESAT-6 and CFP-10 of M. tuberculosis, respectively. 114 In 2005, these proteins were identified and verified to be secreted and implicated in the development and persistence of staphylococcal abscess formation in the murine model.
115 In a recent study, the attenuated Salmonella typhimurium SPI-1 T3SS was utilized to translocate the secretory proteins EsxA and EsxB fused with N-terminal domain of SipA (1-169 amino acids) into the host cells of BALB/c mice. The mice were immunized orally with three doses of S. typhimurium strains N19, N20, and vector control strain N106 on Day 1, Day 8, and Day 22 and 5×10 10 CFU of freshly cultured and PBSwashed bacterial cells, and the vaccinated mice were intravenously challenged with 5×10 7 CFU of S. aureus USA300 strain or Newman strains after 10 days of secondary booster dose. The immunogenicity study showed that the mice immunized with N19 strain generated a high level of EsxAspecific IgG1 and IgG2a antibody, indicating Th1/Th2-type immune response and a Th2-biased response against the EsxB antigen protecting the N20 vaccinated mice while improving the survival rate in N19 vaccinated mice. 116 Surface protein A as a vaccine candidate SpA is an abundant surface protein and a virulence factor which is released during normal cell division. SpA is able to interact with the Fc portion of IgG and suppresses the adaptive immune response by limiting the antibody production by B-cells whereas it enhances the immune response if it binds with B-cell receptor allowing the activation of B-cells. [117][118][119] Therefore, suppression of the IgG binding effects of SpA could be able to mount the immune response. In a study, Kim et al, when immunizing a mouse model with non-toxigenic protein A withg substitutions Gly 9 Lys, Gln 10 Lys, Asp 36 Als, and Asp 37 Ala in the D-domain of the Ig binding region (SpA-D KKAA ), found rising antibody titers and protective efficacy against MRSA and MSSA infection. 120 Another recent study depicted the efficacy of the combined vaccine containing recombinant S. aureus surface protein A (SasA) and the internal heavy chain translocation domain C-fragment of tetanus neurotoxin (TenT-Hc). The combined vaccine conferred complete protection to the mouse against lethal intra-peritoneal challenges with 3×10 9 CFU of MRSA USA300 strains. 121 D-alanine auxotrophic strain of S. aureus as a vaccine model D-alanine is an essential component of the bacterial cell wall polysaccharide. Lacking a gene involved in the D-alanine biosynthesis makes the strains attenuated. 122 In a recent study an attempt was made to assess the impact of D-alanine auxotrophy on protection from the parental strains. The S. aureus 132 strain lacking the gene involved in D-alanine biosynthesis was allowed to grow on media supplemented with exogenous D-alanine. The infection with D-alanine auxotrophic strain elicited a protective immune response and generated cross-reactive antibodies which provided protection following administration of different doses of its parental strain in immunized BALB/c mice. The D-alanine auxotroph vaccine exhibited a reduction in the measured bacterial load in vital organs such as kidney, spleen, heart, liver, and lung. The vaccine protected against the formation of abscesses and survival of the immunized mice was enhanced following infection with the parental strain. 123 AdsA AdsA is a cell wall anchored enzyme which plays an important role in immune evasion. 3 AdsA deficient strains have been found to be labile after engulfment by polymorphonuclear leukocytes, while wild-type strains remain stable. In a study, active immunization of 6-week-old female BALB/c mice with 25 μg of rAdsA protein by intramuscular injection and subsequent infection with S. 
aureus Newman or USA300 strain was performed. As a result, a high level of anti-AdsA IgG and a reduced abscess size with little or no dermonecrosis was seen in the mice vaccinated with rAdsA when compared with the control mice. The anti-AdsA antibody was found to promote the killing of S. aureus by immune cells and reduced the intracellular as well as the extracellular number of S. aureus in macrophages of mice. 124 Therefore, AdsA is an important antigen candidate for vaccine or therapeutic approach against the S. aureus infection. Coa as a vaccine model S. aureus Coa is a protein with enzymatic action which activates prothrombin to convert fibrinogen into fibrin threads via its N-terminal D1-D2 domain. The fibrin threads generate a protective shield on the surface of S. aureus through its C-terminal R domain. The monoclonal antibody against the R domain was found to promote the phagocytosis of S. aureus by immune cells, suggesting its role in the enhancement of bacterial killing and protection of the host. [125][126][127] Regarding these findings, a recent study evaluated the protective efficacy of the R domain of Coa (CoaR6) fused with the carrier protein (Hc), a 66 C-terminal fragment of the heavy chain of tetanus neurotoxin (TT) in a peritonitis mouse model challenged intraperitoneally with 2×10 9 CFU of MRSA252 or 1×10 9 CFU of USA300 4 weeks after the third immunization with Hc-CoaR6 combined with alum and CpG. The TT was used to increase the immunogenicity of the so-called Hc-CoaR6 vaccine. The results suggested that the Hc-CoaR6 vaccine could improve immunogenicity when compared with the immunogenicity elicited by the CoaR6 alone. The findings also suggested that a strong T-cell response and protection of mice against the lethal dose of S. aureus could be elicited by the Hc-CoaR6 vaccine model. 128 Staphylococcal enterotoxin B (SEB) SEB is a stable toxin which exerts powerful effects in humans at a very low dose. When inhaled, SEB can induce several symptoms ranging from headache, myalgia, increased heartbeat, coughing, enteric dysfunction (nausea, vomiting, and diarrhea) to life-threatening toxic shock syndrome. 129,130 A previous study used the formalin treated SEB toxoid vaccine, and although it demonstrated some degree of protection of the animal models, it was not approved for use in humans. 131 Owing to this protective efficacy, a recent study also evaluated the protection in a mouse model immunized with mutant SEB vaccine candidate produced by site-specific mutagenesis. A substantial level of toxin neutralizing antibody response was elicited, which provided efficient protection to the BALB/c mice against a lethal dose of SEB challenge. 132 Recent developments in passive immunization Anti-staphylococcal monoclonal antibodies as prophylactic agents for patients with a high risk of developing severe S. aureus infections are considered a novel antistaphylococcal approach. A potential advantage of increasing the effectiveness of the conventional antibiotic treatment has been suggested of the anti-staphylococcal antibody. As alphatoxin is expressed by the majority of S. aureus strains, the monoclonal antibody against the alpha-toxin may be effective in protecting against infections caused by S. aureus, including MRSA. Several studies have claimed the protective role of anti-alpha-toxin antibody from the S. aureus infections. 
[133][134][135] A phase II trial of the monoclonal antibody has evaluated the efficacy and safety of a single dose of the human antistaphylococcal monoclonal antibody against the S. aureus αtoxin under the project entitled "human monoclonal antibody against S. aureus α-toxin in mechanically ventilated adult subjects". However, the results of this study and whether this approach can have a positive impact on treatment of staphylococcal diseases remain to be evaluated. 136 In another recent study, an attempt was made to evaluate the efficacy of antistaphylococcal antibodies by injecting 200 μL of rAdsA immunized rabbit antisera into the tail vein of 8-week old BALB/c mice 24 hours prior to challenge with S. aureus. As a result, passive immunization with the AdsA-specific antisera reduced the S. aureus Newman or USA300 infection in the mouse model. The AdsA-specific antiserum was found to promote the killing of S. aureus by immune cells while decreasing the infection severity in a different mouse model. 120 In a study conducted by Varshney et al, the natural antibody against Staphylococcus protein A (514G3) was found to promote the opsonophagocytic killing of S. aureus by human blood cells, and protected the bacteremia mouse model from the lethal intravenous challenge of 3×10 7 CFU of MRSA. 137 The protective role of passive immunotherapy with polyclonal antibodies against recombinant autolysin (r-autolysin) was recently evaluated by Kalali et al. As a result, the addition of anti-r-autolysin was found to promote the phagocytosis of S. aureus and the number of viable bacterial cells was decreased over 66.5% after 90 minutes compared with the control group; and in the mouse model of sepsis, the addition of anti-r-autolysin IgG fraction significantly enhanced the survival of the animals. 138 The role of hemolysinalpha (Hla)-specific and Hla-leukocidin cross-neutralizing monoclonal antibodies was evaluated for their efficacy in protection from pneumonia. In the study, 6-8 week old female BALB/cJRj mice were intra-nasally challenged with a lethal dose of 8×10 8 CFU CA-MRSA clones USA300-0114 at 24 hours post-immunization with the monoclonal antibodies and survival was monitored daily for 10 days after post-challenge. The result exhibited a protective efficacy in the induced murine pneumonia model. 139 A similar study conducted by Stulik et al, also depicted the prophylactic efficacy of anti-Hla monoclonal antibody in a lethal rabbit pneumonia model challenged with MRSA and MSSA. 140 MRSA exhibits methicillin resistance which is conferred by the acquisition of a mobile genetic element, mecA, which encodes an altered protein involved in the cell wall synthesis (PBP2a). Active immunization of mice with recombinant PBP2a (rPBP2a) significantly induces specific antibodies. 141 It was assumed that the antibodies against rPBP2a might exhibit a protective activity if used for passive immunization. Naghshbandi et al conducted a study to elucidate the efficacy of passive immunization with anti-rPBP2a IgG fraction in MRSA challenged mice. In the study, the mice were passively immunized with 500 μL of IgG fraction 2 hours before and 24 hours after infection with a lethal dose of 5×10 5 CFU of MRSA, and were monitored for survival until 30 days after inoculation. As a result, passive immunization was found to play a considerable role in the protection which enhanced the survival of the experimental mice. 142 However, despite several vaccine candidate developments, there is a possibility of immune evasion. 
Recently it was described that the presence of the bacteriophage DNA encoding a TarP protein in MRSA can modify the bacterial cell wall polymers, inhibiting the recognition by the host adaptive immune response, which could make the bacteria resistant to being recognized by the antibodies. Thus, the evasion of bacteria of the immune system might be able to cause severe infections. 143 Conclusion The wide-spread infections caused by multi-drugresistant S. aureus have demanded priority in the development of an effective therapeutic approach. Although some vaccine candidates have shown protective efficacy in preclinical phase or early clinical phase studies, so far, no vaccine has been approved for human use. In addition to active immunization, the use of novel antibody-based passive immunization strategies might offer hope, as they have shown promising efficacy in the preclinical phase of evaluation.
The Pristine survey -- XX: GTC follow-up observations of extremely metal-poor stars identified from Pristine and LAMOST Ultra metal-poor stars ([Fe/H]<-4.0) are very rare, and finding them is a challenging task. Both narrow-band photometry and low-resolution spectroscopy have been useful tools for identifying candidates, and in this work we combine both approaches. We cross-matched metallicity-sensitive photometry from the Pristine survey with the low-resolution spectroscopic LAMOST database, and re-analysed all LAMOST spectra with [Fe/H]_Pristine<-2.5. We find that ~1/3rd of this sample (selected without [Fe/H]_Pristine quality cuts) also have spectroscopic [Fe/H]<-2.5. From this sample, containing many low signal-to-noise (S/N) spectra, we selected eleven stars potentially having [Fe/H]<-4.0 or [Fe/H]<-3.0 with very high carbon abundances, and we performed higher S/N medium-resolution spectroscopic follow-up with OSIRIS on the 10.4m Gran Telescopio Canarias (GTC). We confirm their extremely low metallicities, with a mean of [Fe/H] = -3.4 and the most metal-poor star having [Fe/H]= -3.8. Three of these are clearly carbon-enhanced metal-poor (CEMP) stars with +1.65<[C/Fe]<+2.45. The two most carbon-rich stars are either among the most metal-poor CEMP-s stars or the most carbon-rich CEMP-no stars known, the third is likely a CEMP-no star. We derived orbital properties for the OSIRIS sample and find that only one of our targets can be confidently associated with known substructures/accretion events, and that three out of four inner halo stars have prograde orbits. Large spectroscopic surveys may contain many hidden extremely and ultra metal-poor stars, and adding additional information from e.g. photometry as in this work can uncover them more efficiently and confidently. INTRODUCTION The most metal-poor stars still present in the Milky Way today are valuable portals to the early Universe and the pristine environments these stars were born in. They are thought to have formed from material enriched by the first generation(s) of stars, and their chemical abundances can be used to constrain the properties of the stars that came before them. Additionally, the dynamical properties of the most metal-poor stars teach us about the early formation of the Milky Way. Much can be, and has been, learned from very/extremely/ultra metalpoor stars with [Fe/H] < −2.0 (VMP)/−3.0 (EMP)/−4.0 (UMP) (e.g. Beers & Christlieb 2005;Frebel & Norris 2015), although they are exceedingly rare. ★ Email: anke.arentsen@ast.cam.ac.uk The metal-poor halo has been found to be a melting pot of many accreted structures. It is populated by the remnants of the larger mergers that the Galaxy experienced across its history, such as Gaia-Sausage/Enceladus (GSE, e.g., Belokurov et al. 2018;Helmi et al. 2018), Sequoia (e.g., Barbá et al. 2019;Myeong et al. 2019), Thamnos (e.g., Koppelman et al. 2019), and Sagittarius (e.g., Ibata et al. 1994). The plethora of recently discovered stellar streams are indicative of part of the later accretion events from dwarf/ultra faint galaxies and globular clusters (e.g., Ibata et al. 2021;Li et al. 2022;Martin et al. 2022a,b). Additionally, as much as half of the stars in the halo appears to be born in-situ, likely consisting of both an -rich splashed disk component (e.g., Bonaca et al. 2017;Haywood et al. 2018;Di Matteo et al. 2019;Gallart et al. 2019;Belokurov et al. 2020) and stars that formed in a hot and disordered pre-disk state (e.g., Belokurov & Kravtsov 2022;Conroy et al. 2022). 
The common picture from various cosmological simulations suggests that the VMP stars that inhabit the spatial inner region of the Milky Way, i.e., the bulge and the disk, are amongst the oldest stars (e.g., Starkenburg et al. 2017a;El-Badry et al. 2018;Sestito et al. 2021). These stars are therefore great tracers of the early Galactic assembly. On the observational point of view, many VMP stars have been observed with such kinematics, focusing on the bulge (e.g., Howes et al. 2014Howes et al. , 2015Howes et al. , 2016Arentsen et al. 2020;Lucey et al. 2022;Sestito et al. 2023) and the disk (e.g., Sestito et al. 2019Sestito et al. , 2020Di Matteo et al. 2020;Carter et al. 2021;Cordoni et al. 2021). The chemical properties of these populations indicate that the building blocks of the inner Galaxy consisted of a variety of objects -some stars appear to have formed in systems very similar to ultra faint dwarf galaxies, while others are consistent with being born in globular cluster-like systems (e.g., Schiavon et al. 2017;Sestito et al. 2023, and references therein), and finally there may also be a significant contribution of in-situ VMP stars in the inner Galaxy (Belokurov & Kravtsov 2022;Rix et al. 2022). Many low-metallicity stars have been found to be carbon-enhanced metal-poor (CEMP) stars, with frequencies of the order of 30 − 50% among stars with [Fe/H] < −3.0 (Beers & Christlieb 2005;Yong et al. 2013;Placco et al. 2014). There are two main types of CEMP stars. CEMP-s stars are thought to have become carbon-rich later in their life due to mass-transfer from a (former) asymptotic giant branch (AGB) star companion -these are typically in binary systems (e.g. Hansen et al. 2016b), are enhanced in s-process elements as well as carbon (a signature of AGB star nucleosynthesis), and are more frequent for [Fe/H] > −3.0. The CEMP-no stars are hypothesised to have been born from carbon-enhanced gas in the early Universe -they do not have s-process over-abundances, are less frequently found to be in binary systems (e.g. Hansen et al. 2016a, although still more than expected, see Arentsen et al. 2019), and mostly occur at [Fe/H] < −3.0. The exact frequencies of CEMP-no and CEMP-s stars as function of metallicity is still under debate , and may also vary with Galactic environment (e.g. inner vs. outer halo, bulge, dwarf galaxies, globular clusters). To build large samples of extremely metal-poor stars, many dedicated searches have happened in the past 40 years. Several different techniques have been used to identify metal-poor stars, such as following up high-proper motion stars with ultraviolet excesses (Ryan & Norris 1991), identifying objects with small Ca II H & K lines in large objective-prism surveys (Beers et al. 1985;Christlieb et al. 2008), or using metallicity-sensitive (narrow-band) photometry (Schlaufman & Casey 2014;Starkenburg et al. 2017b;Da Costa et al. 2019;Galarza et al. 2022;Placco et al. 2022). Very and extremely metal-poor stars have also been identified in greater numbers in large scale spectroscopic surveys such as the Sloan Digital Sky Survey (SDSS, York et al. 2000), the Large sky Area Multi-Object fiber Spectroscopic Telescope (LAMOST 1 , Deng et al. 2012), RAdial Velocity Experiment (RAVE, Steinmetz et al. 2006) and the GALactic Archaeology with HERMES spectroscopic survey (GALAH, Buder et al. 2021), see e.g. Lee et al. (2013), Li et al. (2018), Matijevič et al. (2017) and Hughes et al. (2022). These are often paired with dedicated follow-up efforts Allende Prieto et al. 
2015;Bonifacio et al. 2015;Aguado et al. 2016;Placco et al. 2018;Li et al. 2022;Da Costa et al. 2022). In this work, we combine the strengths of metallicity-sensitive photometry and large spectroscopic surveys by cross-matching metalpoor candidates from the photometric Pristine survey (Starkenburg 1 http://www.lamost.org/public/?locale=en et al. 2017b) with the large database of spectra from LAMOST, with the goal of identifying new extremely or even ultra metal-poor stars. The Pristine survey uses metallicity-sensitive narrow-band CaHK photometry to derive photometric metallicities of millions of stars towards the Galactic halo, which is very efficient even for extremely metal-poor stars (Youakim et al. 2017;Aguado et al. 2019). However, the selection still suffers from some more metal-rich contamination. In this work we alleviate this by adding an extra step, namely by cross-matching candidates with [Fe/H] Pristine < −2.5 with the LAMOST spectroscopic database, and doing a dedicated analysis of all these (often low signal-to-noise) spectra. We select exciting candidates from this analysis, and follow them up using the OSIRIS spectrograph at the 10.4m Gran Telescopio Canarias (GTC) (Cepa et al. 2000) to obtain higher S/N observations, from which we can derive high-quality metallicities to confirm their extremely metal-poor nature. We describe our initial candidate selection from Pristine and LAMOST in Section 2, including some discussion about the success rates. The OSIRIS observations for 11 stars and the derivation of their radial velocities, stellar parameters, distances and orbits is described in Section 3. We present results for the OSIRIS sample in Section 4, discussing the presence of carbon-enhanced metal-poor (CEMP) stars, the orbital properties for the sample and a comparison with a new value-added LAMOST catalogue. We conclude in Section 5. SELECTION OF EMP CANDIDATES USING PRISTINE AND LAMOST The LAMOST archive contains low-resolution spectra (R∼1800) for millions of stars, but not all spectra have stellar parameters in the standard LAMOST catalogue tables. We discovered that many of the most metal-poor stars ([Fe/H] < −2.5) are missed by their standard pipeline (Wu et al. 2014), and also by the dedicated very metal-poor pipeline of Li et al. (2018). This is particularly severe for hotter stars and stars with lower signal-to-noise (S/N). Other dedicated analyses might be able to deal better with these spectra, and identify promising extremely metal-poor stars. At the time our selection was made (February 2021), the latest LAMOST release was DR6. To avoid having to analyse the full data release, which contains almost 10 million spectra, we made a pre-selection of promising extremely metal-poor candidates using photometric metallicities from the Pristine survey. We used the internal Pristine data release containing all CaHK observations until Semester 2020A, and adopted the CaHK + SDSS photometric metallicities (Starkenburg et al. 2017b). We queried the LAMOST archive for all stars in the Pristine survey with photometric metallicities [Fe/H] Pristine < −2.5 (from using either − or − ) and sdss < 18, and found ∼ 7500 cross-matches for ∼ 6000 unique targets. No other quality cuts were applied, which usually are included when we do dedicated target selection for Pristine follow-up immediately from the photometry (Youakim et al. 2017), to be as inclusive as possible. Preliminary ULySS analysis A first-pass analysis of these candidates was done with the ULySS 2 code (Koleva et al. 
2009). ULySS is a full-spectrum fitting package that employs empirical spectral libraries to determine stellar parameters ( eff , log , [Fe/H], radial velocities, spectral broadening), and can be applied to stars of a wide range of stellar parameters and metallicities. We employed this code because we were interested in the types of contamination in the Pristine selection, which one cannot study with the dedicated metal-poor analysis described in the next sub-section. For the models, we adopted the empirical MILES library (Sánchez-Blázquez et al. 2006;Falcón-Barroso et al. 2011) and used the ULySS MILES polynomial interpolator originally built by Prugniel et al. (2011) and updated for cool stars by Sharma et al. (2016). The library has a resolving power of ∼ 2200, and the interpolator extends down to [Fe/H] = −2.8 (with the possibility to extrapolate, at ones own risk). The LAMOST spectra were fitted between 3750 and 5500 Å using a multiplicative Legendre polynomial of degree 15 for the normalisation. This degree is large enough to absorb some of the large mismatches between models and observations for carbon-enhanced metal-poor stars in regions of carbon-related molecular bands, which is necessary since the ULySS models do not include [C/Fe] as a free parameter, and large carbon features could mess up the normalisation. There is also an automatic masking routine in ULySS, which excludes outlier pixels iteratively and typically masks the wavelength regions of the largest carbon features in CEMP stars. The resulting Kiel-diagram and metallicity histogram from our ULySS analysis are shown in Figure 1, for all exposures of the ∼ 4900 unique stars that remain after removing fits with signal-to-residual ratios < 8, broadening > 400 km s −1 (which usually indicates a very bad fit), and duplicate LAMOST spectra for the same star. The metalpoor stars show a clear red giant branch (RGB) and main-sequence turn-off sequence, except for a small cloud of stars to the left of the RGB, which mostly consists of stars in the low S/N-tail of the sample without good fits. Most of the stars in our selection are indeed very metal-poor (keeping only the fit with the highest signal-to-residual ratio per star): 71% have [Fe/H] ULySS < −2.0 and 25% have [Fe/H] ULySS < −2.5. The latter goes up to 34% when using the FERRE metallicities described later in this section, which perform better in this regime than the ULySS metallicities, because the MILES library does not have many stars in this [Fe/H] range (especially for the turn-off region). There is a contamination of metal-rich stars with [Fe/H] ULySS > −1.0 of 16%. ULySS is also the main software used by the LAMOST team for the parameters in their public data releases (Wu et al. 2014). They use the interpolator based on the ELODIE library (Prugniel & Soubiran 2001;Wu et al. 2011), which has a more limited coverage of the parameter space compared to MILES, and extends only down to [Fe/H] = −2.5. Of the stars that have [Fe/H] ULySS < −2.0/−2.5 in our analysis, only 30%/17% have stellar parameters in the public LAMOST DR7 catalogue. This is likely partly due to the ELODIE library being less good at low metallicities, and partly due to more stringent quality cuts being applied for stars to make it into the LAMOST data releases. Success rates In our original selection we did not make any additional photometric quality cuts. 
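Before turning to the quality cuts, the full-spectrum fitting idea behind the preliminary ULySS analysis described above can be illustrated with a minimal sketch. This is not the actual ULySS/MILES implementation: the template grid, the single toy absorption line, and the noise level below are placeholders, and only the overall structure (a chi-square comparison against each template with a multiplicative Legendre-polynomial continuum solved by linear least squares) mirrors the approach used here.

```python
# Minimal illustration of full-spectrum chi-square fitting with a multiplicative
# Legendre-polynomial continuum. The "template grid" is a toy placeholder,
# NOT the MILES interpolator used in the actual analysis.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(42)
wave = np.linspace(3750.0, 5500.0, 2000)          # wavelength grid [Angstrom]

def toy_template(feh):
    """Placeholder 'model' spectrum: flat continuum plus a metallicity-scaled Ca K line."""
    depth = 0.9 * min(1.0, 10 ** (0.4 * (feh + 1.0)))
    return 1.0 - depth * np.exp(-0.5 * ((wave - 3933.7) / 2.0) ** 2)

def fit_one_template(obs, err, model, degree=15):
    """Fit obs ~ model * P(wave), with P a Legendre polynomial found by weighted
    linear least squares, and return the chi-square of the best fit."""
    x = 2.0 * (wave - wave.min()) / (wave.max() - wave.min()) - 1.0   # map to [-1, 1]
    basis = legendre.legvander(x, degree) * model[:, None]            # design matrix
    coeffs, *_ = np.linalg.lstsq(basis / err[:, None], obs / err, rcond=None)
    return np.sum(((obs - basis @ coeffs) / err) ** 2)

# Fake low-S/N "observation" of a [Fe/H] = -2.7 star seen through a sloping response
truth = toy_template(-2.7) * (1.0 + 2e-4 * (wave - wave.mean()))
err = np.full_like(wave, 0.05)
obs = truth + rng.normal(0.0, err)

grid = np.arange(-4.0, -0.5, 0.1)                 # toy [Fe/H] grid
chi2 = np.array([fit_one_template(obs, err, toy_template(f)) for f in grid])
print("best-fit [Fe/H] on the toy grid:", grid[np.argmin(chi2)])
```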
The Pristine team developed several quality cuts to remove metal-rich outliers and improve the success rates of the spectroscopic follow-up of extremely metal-poor candidates. The cuts applied for the main Pristine follow-up campaign are discussed in Section 4.1 of Youakim et al. (2017). We apply very similar cuts to the Pristine+LAMOST sample to see how that changes the metallicity distribution, keeping only the stars that have:
• CASU flag = −1 or 1
• young stars flag = 0
• ( 0 − 0 ) > 0.6
• 0.25 < ( 0 − 0 ) < 1.5 and 0.15 < ( 0 − 0 ) < 1.2
• [Fe/H] Pristine < −2.5 (from using either SDSS − or − ) and ≠ −99 (−99 is assigned if the star falls outside of the parameter space for which the photometric metallicity assignment has a valid calibration)
• instead of the PanSTARRS variability catalogue as in Youakim et al. (2017), we use the Gaia photometric variability to remove variable stars, as in Fernández-Alvar et al. (2021)
The uncertainties on [Fe/H] Pristine are not taken into account here, whereas they were in Youakim et al. (2017). After applying the above cuts, the sample goes from 4900 stars to 4100 stars (again keeping the highest signal-to-residual spectrum per star). Of these, 78% have [Fe/H] ULySS < −2.0, and 28% have [Fe/H] ULySS < −2.5 (the latter goes up to 38% for the FERRE metallicities), compared to the previous 71% and 25% (and 34% for FERRE), respectively. The metal-rich contamination goes down to 12%. Doing the same only for stars with signal-to-residual ratios > 20 instead of our initial cut at > 8, the results are very similar. We conclude that, for the Pristine+LAMOST sample, the photometric quality cuts slightly improve the selection efficiency, but not by a lot. The success rate of previous Pristine follow-up for [Fe/H] Pristine < −2.5 was found to be 56% (Aguado et al. 2019). The lower fraction in this work (38% when applying the photometric quality cuts and adopting the FERRE metallicities) could be due to various reasons, for example differences with the dedicated Pristine follow-up of Youakim et al. (2017) and Aguado et al. (2019). Our results confirm that the success rates are high, and highlight some of the subtleties in deriving such success rates. Overall we conclude that our methodology to find hidden very and extremely metal-poor stars in the large LAMOST database is extremely efficient.

Figure 1 (caption): No quality cuts were applied to the photometric metallicities in the selection. The results for the eleven stars that were followed up with OSIRIS (see Section 3) are highlighted with larger symbols (the two high and low log g outliers are CEMP stars). Bottom: ULySS metallicity histogram of the same sample in black, and FERRE metallicity histogram for the VMP sub-sample in red.

Dedicated VMP FERRE analysis
A dedicated very metal-poor analysis was performed for the subsample of LAMOST spectra with [Fe/H] ULySS < −2.0, with the aim of deriving better metallicities and carbon abundances for the most metal-poor stars and identifying potential ultra metal-poor candidates. We followed a similar methodology as in Aguado et al. (2017a,b), using the FERRE 3 code (Allende Prieto et al. 2006). The code interpolates between the nodes of a library of synthetic spectra and simultaneously derives the set of best stellar parameters (Teff, log g, [Fe/H], [C/Fe]). For this preliminary analysis, we used the default Nelder-Mead search algorithm and linear interpolation. The dedicated very metal-poor synthetic models were computed with the ASSET code (Koesterke et al. 2008) and published in Aguado et al.
(2017b) with the following parameter coverage: and a fixed [ /Fe] = +0.4 and [N/Fe] = 0. Both the data and the models were continuum normalised with a running mean filter with a 30 pixel window. We limited the fit to the wavelength range 3700 − 5500 Å, where most of the features for extremely metal-poor stars are present. The spectra were shifted to rest-wavelength using the ULySS radial velocities. The resulting metallicity distribution is shown in red in the bottom panel of Figure 1, without any additional quality cuts applied. The hard limit at [Fe/H] FERRE = −2.0 is due to the limit of the grid. 3 FERRE is available from http://github.com/callendeprieto/ferre The ULySS and FERRE distributions peak at roughly the same metallicities, but the FERRE distribution has a larger tail towards lower metallicities -as expected. We inspected the > 500 fits in the resulting FERRE-analysed sample with [Fe/H] FERRE < −3.0 by eye, and identified a number of (previously unknown) stars of interest that could potentially have [Fe/H] < −4.0 or that looked very carbon-rich and extremely metalpoor ([Fe/H] < −3.0). Practically none of our candidates had parameters in the public DR6 catalogue. Most of our candidates had relatively low S/N, so follow-up spectroscopy was necessary to confirm their extremely or even ultra metal-poor nature. The full list of EMP candidates that we inspected is given in Table 1, with figures for all the spectral fits provided in the online supplementary materials. 4 This candidate list with its derived parameters should not be used blindly since no quality cuts have been applied (on e.g. S/N, log( 2 ) or parameter uncertainties), but it could be used in combination with the figures to select other EMP stars for follow-up. Stars may occur multiple times in this list if they have more than one LAMOST spectrum. OSIRIS FOLLOW-UP OF EMP CANDIDATES We obtained GTC/OSIRIS observations for 11 of our most promising candidates (16.9 < < 17.9) in Semester 2021A. We used OSIRIS in longslit mode with the 2500U grating, a 1 arcsec slit and 2x2 binning, resulting in spectra covering 3440 − 4610 Å at a resolving power ∼ 2400 (providing an instrument profile with a FWHM of ∼ 125 km s −1 ). We aimed for a S/N of 40 at 4000 Å, corresponding to exposure times of 3000s for stars of magnitude ∼ 17.5. A summary of the observations is presented in Table 2. Individual exposures of 1400, 1600 and 1800s were executed for different targets. Radial velocities Radial velocities (RVs) are derived using the cross-correlation technique. We have a high-quality GTC/OSIRIS spectrum of a bright extremely metal-poor star G64-12 ( eff = 6463K, log (Aguado et al. 2017a(Aguado et al. , 2018, that we use as a cross-correlation template. The OSIRIS spectra of both our targets and the template star are normalized with the same method, using a running mean filter with a width of 30 pixels. We built the cross-correlation function (CCF) with our own IDL-based automated code in the spectral range 3755-4455 Å with a window of 3000 km s −1 . The main features in the template are the CaII H&K lines, the HI lines of the Balmer series, and the G-band in carbon-enhanced stars (see Fig. 2). The normalization method produces a shape of the CCF profile that mimics the shape of all balmer lines in the warm template EMP star, which does not resemble a gaussian shape. We thus fit the CCF profile with a parabolic fit using the closest 6 points to the CCF peak. 
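The cross-correlation procedure described above can be sketched as follows. This is a simplified stand-in for our own code: the spectra below are synthetic toy spectra (not the actual OSIRIS data or the G64-12 template), the line list contains only Ca H & K, and the velocity grid is illustrative; only the structure (normalised cross-correlation followed by a parabolic refinement of the CCF peak) follows the method used in this work.

```python
# Sketch of the RV measurement: cross-correlate a normalised spectrum against a
# template over a velocity grid, then refine the CCF peak with a parabola fitted
# through the points nearest the maximum.
import numpy as np

C_KMS = 299792.458
lam = np.exp(np.linspace(np.log(3755.0), np.log(4455.0), 4000))   # wavelength [A]

def fake_spectrum(rv_kms):
    """Toy normalised spectrum: Ca H & K absorption lines shifted by rv_kms."""
    shift = 1.0 + rv_kms / C_KMS
    spec = np.ones_like(lam)
    for line in (3933.7, 3968.5):
        spec -= 0.6 * np.exp(-0.5 * ((lam - line * shift) / 1.5) ** 2)
    return spec

template = fake_spectrum(0.0)
target = fake_spectrum(-153.0) + np.random.default_rng(1).normal(0, 0.02, lam.size)

velocities = np.arange(-1500.0, 1500.0, 10.0)                     # trial RVs [km/s]
ccf = np.array([
    np.sum((np.interp(lam / (1.0 + v / C_KMS), lam, template) - 1.0) * (target - 1.0))
    for v in velocities
])

ipk = int(np.argmax(ccf))
sel = slice(max(ipk - 3, 0), ipk + 3)            # ~6 points closest to the CCF peak
a, b, c = np.polyfit(velocities[sel], ccf[sel], 2)
print(f"recovered RV ~ {-b / (2.0 * a):.1f} km/s")
```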
The statistical uncertainty of the centroid of the parabolic fit is typically below 1 km s −1 , significantly below the pixel size of ∼ 0.57 Å/pixel (∼ 42 km s −1 /pixel). The results of the OSIRIS spectra show intranight RV variations with standard deviations below ∼ 7 km s −1 , but RV variations from different nights with standard deviations in the range 3 − 20 km s −1 . We also derive the RV for the same stars from their LAMOST spectra (which typically have much lower S/N than the OSIRIS spectra), using the same technique to check the consistency with our OSIRIS RVs. The LAMOST spectrum of G64-12 is used as cross-correlation template and all spectra are normalized using a running mean filter with a width of 15 pixels of ∼ 1.38 Å/pixel (∼ 81 km s −1 /pixel). The CCF is built from the spectral range 3755-6755 Å, which includes H and H , providing more stability to the CCF profile given the lower quality LAMOST spectra. We find a reasonable consistency when comparing to the OSIRIS results, with a mean difference of −4.4 km s −1 and a standard deviation of 15.9 km s −1 . For each target we adopt the weighted mean of the OSIRIS RVs derived from each individual spectrum and the corresponding error of the mean as the final RV. We apply an quadratically added uncertainty floor of 15 km s −1 to the RV uncertainties, which seems more realistic than the CCF uncertainties given the RV variations within and between different nights and the differences with the LAMOST RVs. This floor reflects the systematic RV uncertainty due to possible instrument flexures, pointing, guiding RV drifts, etc. Distances It has been widely demonstrated that simply inverting the parallax to infer the distance can lead to wrong results, and including additional priors and/or data improves distance estimates (e.g., Bailer-Jones et al. 2018Anders et al. 2022). This is especially the case when the parallax has poor measurements, i.e., < 0 and/or / < 20. We therefore use a Bayesian approach to infer the distances for the stars in our sample. The probability distribution function (PDF), or posterior, is inferred following the method fully described in Sestito et al. (2019). Briefly, the likelihood is the product of the Gaussian distributions for the parallax and photometry. The prior takes into (Dotter 2016;Choi et al. 2016), the knowledge that VMP stars are old (11 − 13.8 Gyr), low-mass (< 1 M ), and distributed with a given IMF-based luminosity function in the CMD diagram. The zero-point offset has been applied to the Gaia EDR3 parallaxes (Lindegren et al. 2021) using the python 3_ 6 package. This method, widely used for chemo-dynamical investigations of VMP stars (e.g., Sestito et al. 2019Sestito et al. , 2020Venn et al. 2020), produces low uncertainties on the distances even in case of large parallax uncertainties. This is because the isochrones limit the possible distances for a star with a given colour to two different solutions, a dwarf and a giant solution, and nothing in between. The parallax would then typically prefer one of the two, or, in case of a very poor parallax measurement, the two peaks would be given a different probability. We calculate the probabilities following Sestito et al. (2019). For seven of the OSIRIS stars the probability of the main peak is larger than 92 per cent. For two stars it is 86 per cent (LP1, although for this star we adopt the less probable solution, see Section 4.1) and 87 per cent (LP5), while for the remaining two stars it is 54 per cent (LP8) and 66 per cent (LP2). 
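The bimodal distance inference described above can be illustrated with a deliberately simplified sketch. The two log-normal bumps below are crude stand-ins for the dwarf and giant solutions allowed by the isochrone-based prior of Sestito et al. (2019), and all numbers (parallax, peak positions, widths) are made up rather than taken from the OSIRIS targets.

```python
# Toy bimodal distance posterior: Gaussian parallax likelihood times a prior that
# only allows a nearby "dwarf" and a distant "giant" solution.
import numpy as np

d_grid = np.linspace(0.1, 30.0, 3000)                  # distance grid [kpc]

# Gaussian parallax likelihood (a poor measurement: 0.05 +/- 0.06 mas, zero-point corrected)
plx, plx_err = 0.05, 0.06
like = np.exp(-0.5 * ((1.0 / d_grid - plx) / plx_err) ** 2)

def lognorm(d, mu, sigma):
    """Un-normalised log-normal bump, standing in for one isochrone solution."""
    return np.exp(-0.5 * ((np.log(d) - np.log(mu)) / sigma) ** 2) / d

prior = 0.5 * lognorm(d_grid, 1.5, 0.15) + 0.5 * lognorm(d_grid, 9.0, 0.15)

post = like * prior
post /= np.trapz(post, d_grid)

# Probability of each peak: split the posterior at its minimum between the solutions
mid = (d_grid > 2.0) & (d_grid < 8.0)
split = d_grid[mid][np.argmin(post[mid])]
p_dwarf = np.trapz(post[d_grid <= split], d_grid[d_grid <= split])
print(f"P(dwarf) = {p_dwarf:.2f}, P(giant) = {1 - p_dwarf:.2f}")
print("most probable distance [kpc]:", d_grid[np.argmax(post)])
```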
Orbital parameters The orbital parameters are inferred using G 7 (Bovy 2015). The code requires as input the inferred distances, the RVs, and the proper motions and coordinates from Gaia (E)DR3. The total fixed gravitational potential that we adopt is the sum of a Navarro-Frenk-White dark matter halo (Navarro et al. 1997, NFWP -), a Miyamoto-Nagai potential disc (Miyamoto & Nagai 1975, M N P ) and an exponentially cut-off bulge (P S P C ). All of the aforementioned potentials are usually invoked by the MWP 14 package. However, we adopt a more massive and up-to-date halo (Bland-Hawthorn & Gerhard 2016), with a mass of 1.2 × 10 12 M (vs. 0.8 × 10 12 M for MWP 14). For each star, we perform a Monte Carlo simulation with 1000 5 https://waps.cfa.harvard.edu/MIST/ 6 https://gitlab.com/icc-ub/public/gaiadr3_zeropoint 7 http://github.com/jobovy/galpy random draws on the input parameters to infer the orbital parameters and their uncertainties. In case of the proper motion components, we consider their correlation given the coefficients from Gaia EDR3, drawing randomly with a multivariate Gaussian function. The RVs (from the OSIRIS spectra) and coordinates are treated as a Gaussian. In order to account for possible systematics on the distances (e.g. due to the adopted isochrones and other assumptions), we assume a 15 per cent uncertainty on the distances. The integration time is set to 1 Gyr. The orbital parameters are inferred for both of the peaks in the distance PDFs. The output orbital parameters are the Galactocentric Cartesian coordinates (X, Y, Z), the maximum distance from the Milky Way plane Z max , the apocentric and pericentric distances (R apo , R peri ), the eccentricity , the energy E, and the spherical actions coordinates (J , J r , J Z ). Table 3 reports the main orbital parameters from the most probable distance, except for star LP1 where we adopt the less probable distance (see Section 4.1). The orbital parameters are discussed in Section 4.1. Stellar parameters The OSIRIS data were analysed with FERRE in a similar manner as the LAMOST spectra. For this analysis we use the more sophisticated Boender-Timmer-Rinnoy Kan (BTRK, Boender et al. 1982) global search algorithm and Bézier cubic interpolation. We use the same grid, except for the coolest star in the sample, for which we employ a similar grid that has been extended down to 4500 K (as used e.g. in Arentsen et al. 2021). Again we used a 30 pixel window for the running mean normalisation, suitable for OSIRIS resolution ( = / ∼ 2400). To avoid problems in the noisy blue region we only analyse the spectra in the range (3750 − 4500 Å). We found that for the warm stars in the sample (with eff > 5500 K, which is all stars except for LP6), the log values that FERRE finds are typically at the edges of the FERRE grid, e.g. at log = 5.0 or log < 2.0, see the black points in Figure 3. This is likely the result of not much log information being present in these extremely metal-poor stars in the available wavelength range. Previous work on metal-poor stars with FERRE has shown that systematically offset log values strongly impact the derived [C/Fe] (Aguado et al. 2019;Arentsen et al. 2021). Therefore we decided to adopt photometric log values for the warm stars, shown by the magenta points in Figure 3. These were inferred from the Stefan-Boltzmann equa- tion, which needs as input the dereddened absolute G magnitude (derived using the Gaia G-band magnitude, the 3D extinction map from Green et al. 
(2019), and the distances from Table 3), an estimate of the effective temperature, and the bolometric corrections on the flux (Andrae et al. 2018). We adopt the FERRE effective temperature and its inflated uncertainty (see the last paragraph of this subsection) in the calculation. We perform a Monte Carlo iteration with 1000 random draws on the input parameters. Each of them is described by a Gaussian distribution. We run FERRE again for the warm stars, fixing T_eff to the previously derived FERRE value and log g to the photometric values, while leaving [Fe/H] and [C/Fe] free. The final spectral fits are shown in Figure 2 and a summary of the results is provided in Table 4. The differences between the original FERRE run and the run with fixed T_eff and log g are small for the metallicities, with the adopted [Fe/H] being higher by 0.07 dex with a standard deviation of 0.06 dex. The differences for [C/Fe] are also small for the stars with original log g > 4 and measured [C/Fe] (see next section): they are 0.05 dex on average, with a standard deviation of 0.09 dex. However, for the one star with measured carbon and FERRE log g < 3 (LP11), the new [C/Fe] is 0.7 dex lower. There are three stars (LP4, LP7 and LP9) that have very high FERRE internal [Fe/H] uncertainties of 0.5 − 1.0 dex when calculated by inverting the covariance matrix (our original approach). This could be attributed to some negative/zero fluxes in the blue end of the OSIRIS data. To avoid this issue we recalculated the internal FERRE uncertainties using a Monte Carlo simulation. We performed 50 experiments and use the dispersion on the derived [Fe/H] and [C/Fe] as the uncertainty, following Aguado et al. (2017a). As a result, the issue with the large uncertainties was fixed for the three problematic stars, and the uncertainties for the other stars remain the same within 0.01−0.02 dex. We adopt the Monte Carlo internal uncertainties. To provide the final uncertainties for the stellar parameters, we add estimates of the external uncertainties from a previous analysis of EMP stars with FERRE (Aguado et al. 2017a) to our internal FERRE uncertainties. These are 100 K, 0.1 dex and 0.2 dex for T_eff, [Fe/H] and [C/Fe], respectively. For [Fe/H] and [C/Fe] we adopt the internal uncertainties from the first FERRE run, because the second run does not properly reflect the real uncertainties since it fits only two of the four parameters. For log g, we adopted the uncertainties from the photometric determination for the warm stars, and for the coolest star we quadratically added 0.2 dex of external uncertainty (Aguado et al. 2017a) to the internal FERRE uncertainty. The results are shown in Table 4.

Carbon determination

Deriving carbon abundances from low-resolution data of EMP stars is non-trivial. Our employed grid is suitable for the analysis of CEMP stars, since carbon-enhancement was not only considered in the spectral synthesis step but also in the ATLAS stellar models (Sbordone et al. 2007). This is crucial because high carbon abundances can significantly impact the stellar atmospheres. The grid of models has been used successfully to derive carbon abundances in several works (e.g. Aguado et al. 2017a,b, 2019; Arentsen et al. 2021, 2022), although there are some differences with other synthetic grids that can lead to systematic differences in derived carbon abundances. This is likely related to the use of different codes, line lists and assumptions (e.g. different [N/Fe] abundances).
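As an aside, the photometric log g determination described in the previous subsection can be illustrated with a minimal sketch based on the Stefan-Boltzmann relation. The solar reference values, the assumed stellar mass of 0.8 M⊙ and the illustrative input numbers are assumptions for this example, not the authors' exact implementation.

```python
import numpy as np

# Solar reference values (assumed for this sketch)
LOGG_SUN = 4.44        # cgs
TEFF_SUN = 5772.0      # K
MBOL_SUN = 4.74        # mag

def photometric_logg(abs_g, bc_g, teff, mass=0.8):
    """log g from the dereddened absolute G magnitude, a bolometric correction
    BC_G, an effective temperature and an assumed stellar mass (in M_sun)."""
    mbol = abs_g + bc_g                       # bolometric magnitude
    log_l = -0.4 * (mbol - MBOL_SUN)          # log10(L/L_sun)
    return LOGG_SUN + np.log10(mass) + 4.0 * np.log10(teff / TEFF_SUN) - log_l

# Monte Carlo propagation of the input uncertainties (illustrative values)
rng = np.random.default_rng(0)
teff = rng.normal(6100.0, 100.0, 1000)
abs_g = rng.normal(3.2, 0.1, 1000)
bc_g = rng.normal(0.03, 0.02, 1000)
samples = photometric_logg(abs_g, bc_g, teff)
print(np.median(samples), np.std(samples))
```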
The ability of the FERRE code to detect, and successfully fit, carbon absorption features from low-resolution data strongly depends on a) T_eff (and log g to a lesser extent), b) the carbon abundance, and c) the SNR of the spectra. In our sample there are three stars (LP1, LP6, and LP11) that fulfil the sensitivity criteria derived by Aguado et al. (2019) based on these parameters; all of them have T_eff < 6000 K and show strong CH absorption features. For these objects we derived [C/Fe] = +1.65/+2.21/+2.17, respectively, with reasonable uncertainties (∼ 0.2 dex). For the other stars we can only provide upper limits on the carbon abundances. The carbon results are summarised in Table 4. The object with the lowest T_eff in our sample, LP6, shows clear CN features at ∼ 3885 Å that our best fit is not able to reproduce, although the CH G-band fit is good (see Fig. 2, red spectrum). The reason for this is that our FERRE synthetic spectral library assumes [N/Fe] = 0.0 for all stellar models. Querying the high-resolution spectroscopy compilation in the JINAbase (Abohalima & Frebel 2018) for stars with −3.5 < [Fe/H] < −3.0, we find that all of those with measured nitrogen abundances have [N/Fe] > 0, and stars with [C/Fe] > +2.0 typically have 1.5 < [N/Fe] < 3.0. This is very different from the assumed [N/Fe] in the FERRE grid, and can explain why the CN band for LP6 is much stronger in the data than in the model fit. However, the fit reproduces quite well the Ca ii K line at 3933 Å and several other Fe, Ti, and Sr lines in the 4040−4080 Å region. Additionally, the majority of the carbon information is significantly concentrated around the G band (4200-4330 Å) and our fit is good in that area. Therefore, we conclude that the CN absorption features in the blue are not significantly affecting the best fit for this object. The carbon abundance of evolved giants decreases with decreasing log g due to mixing processes, especially in metal-poor stars (Gratton et al. 2000; Placco et al. 2014). We estimate the evolutionary carbon correction for the most evolved star in our sample (LP6, the only star that should be affected by this effect) using the web calculator by V.M. Placco, and find it to be +0.24 dex.

Figure 4. Orbital parameters. Three left panels: pericenter, eccentricity, and maximum distance from the Milky Way plane as a function of the apocentric distance. The grey-shaded areas denote the forbidden region in which Z_max > R_apo or R_peri > R_apo. Upper right panel: energy vs. rotational component of the action, J_φ. Bottom right panel: action space; the y-axis is the difference between the vertical and radial component of the action, while the x-axis is the rotational component; axes are normalised by J_tot = |J_φ| + J_r + J_Z. The inner (R_apo < 11 kpc) and the outer (R_apo > 15 kpc) groups are squares and circles, respectively. Green and magenta solid lines in the bottom right panel denote the regions of Gaia-Sausage/Enceladus (Belokurov et al. 2018; Helmi et al. 2018) and Sequoia (Barbá et al. 2019; Myeong et al. 2019), respectively. Grey small dots in the background of all panels are VMP stars studied in Sestito et al. (2020), in which the orbital parameters have been inferred with the same potential as this work.

OSIRIS SAMPLE RESULTS

The derived properties for our 11 OSIRIS stars are summarised in Tables 3 and 4.
In this section, we will use these parameters to study the Galactic orbital properties of our sample, to study the carbon-enhanced metal-poor stars in our sample, and to make a comparison with a recent LAMOST catalogue that includes VMP stars.

Orbital properties

Here we discuss the orbital parameters for our EMP OSIRIS sample. We adopted the results for the most probable distance solution (see Section 3.2), except for LP1, for which the most probable solution leads to an unbound orbit - we therefore prefer the less probable distance solution for this star. The five panels in Figure 4 display the main orbital parameters typically used to classify the kinematic properties of stars. The three panels on the left-hand side show the pericentric distance, the eccentricity, and the maximum height from the plane as a function of the apocentric distance. The right-hand two panels display the energy vs. the rotational component of the action (top) and the action space (bottom). The sample appears to split into two broad populations in the Z_max vs. apocenter and the E vs. J_φ panels - one that inhabits the inner region of the Milky Way (R_apo ≲ 10 kpc) and one that reaches the outer part of the Milky Way halo (R_apo ≳ 15 kpc). We mark these with black squares and circles, respectively. The first group is composed of four stars with apocentric distances of ∼ 7 − 10 kpc. Three of them (LP3, LP4, LP5) have pericentres that bring them into the spatial region of the Milky Way bulge (R_peri < 3 kpc). The remaining one, LP8, has a higher pericenter (R_peri ∼ 4.5 kpc) and is among the lowest eccentricity stars in the sample (e ∼ 0.3) - its Z_max < 3.0 kpc and positive angular momentum indicate the star is moving in a prograde orbit relatively close to the plane of the Milky Way. All stars in this group are prograde, with the exception of LP4, which has a highly eccentric orbit (e ∼ 0.7) and almost no rotation (J_φ/J_tot ∼ 0). These extremely metal-poor inner halo stars may be connected to the very first Milky Way halo building blocks, the ancient Galactic disk and/or the chaotic (but slightly rotating) pre-disk Milky Way. The second group is composed of the remaining seven stars with orbits compatible with outer halo stars. Three of them, LP1, LP9 and LP10, have pericentric distances in the range 2.0 < R_peri < 5.5 kpc; the other four, LP2, LP6, LP7 and LP11, have larger pericentric distances. From the action space of Figure 4, it is evident that none of our targets is clearly kinematically associated with GSE (green box) or Sequoia (magenta box). One of the stars, LP1 (sitting near the centre of the action diamond), could still have belonged to the GSE progenitor since it has high eccentricity (e ∼ 0.75) and is not far out of the GSE box. Previous works have associated some stars in this region with GSE (e.g. Yuan et al. 2020) or shown that in simulations there are GSE stars on a variety of orbits larger than the typical selection boxes (e.g. Naidu et al. 2021; Amarante et al. 2022). A possible association of LP11 (the most prograde star in the outer halo group) can be made with the Helmi stream (Helmi et al. 1999), as it sits in a similar region of the action diamond and the energy space (see e.g. Yuan et al. 2020) and has strong vertical motion (J_Z = 1084 kpc km s⁻¹), consistent with the very polar orbit of the Helmi stream. Association with other halo substructures (such as the dynamically tagged groups of VMP stars by Yuan et al.
2020 and others) is difficult due to the relatively large uncertainties on the orbital parameters for most stars. The majority of our stars were likely brought into the Milky Way in smaller accretion events. High-resolution spectroscopic observations would be needed to determine the detailed chemo-dynamical properties of the stars in this work. They would provide better RVs to derive more precise orbital parameters and, more importantly, detailed chemical abundances from different nucleosynthetic production channels, which are needed to better characterise the formation sites and origins of the stars in our sample.

CEMP stars

Following the Aoki et al. (2007) definition of CEMP stars ([C/Fe] > +0.7), three of our stars can be classified as carbon-enhanced: LP1, LP6 and LP11. For two other objects (LP4 and LP9, with T_eff ∼ 6000 K but no clear features within the G band), we were able to provide an informative upper limit of [C/Fe] < +0.7, making these carbon-normal stars. The other six targets (LP2, LP3, LP5, LP7, LP8, and LP10) are relatively warm (T_eff > 6100 K) and the absence of CH absorption features only allows us to provide upper limits that are larger than [C/Fe] = +0.7, according to the sensitivity criteria from Aguado et al. (2019). We do not derive the fraction of CEMP stars in our sample, since the preselection was strongly biased. Since we do not have estimates of any s-process element abundances for our sample (there are two relatively strong lines of Sr and Ba in our wavelength coverage, but the combination of resolution, S/N and the extremely low metallicities of the stars does not permit their detection), we cannot constrain the types of CEMP stars in our sample using that method. However, CEMP-s and CEMP-no stars also have different distributions in their metallicities and carbon abundances (e.g. Spite et al. 2013; Bonifacio et al. 2015; Yoon et al. 2016). We can use this to make a preliminary classification of CEMP stars. Figure 5 presents the [Fe/H] − A(C) diagram of the stars in our sample, together with a compilation of CEMP stars from Yoon et al. (2016). The two most carbon-rich CEMP stars in our sample (LP6 and LP11) are on the border between the CEMP-no and CEMP-s regions. The third (LP1) lies in the CEMP-no region of the diagram, as do the other stars with [C/Fe] upper limits. All three CEMP stars have large apocentres (> 20 kpc), and the two most carbon-rich CEMP stars also have the highest pericentres in our sample (> 8 kpc). As discussed above, these are indications that they likely came into the Milky Way in a relatively small dwarf galaxy. Previous work has suggested that the fraction of CEMP-no compared to CEMP-s stars is larger in the outer halo than in the inner halo (Yoon et al. 2018; Lee et al. 2019), as well as in smaller halo building blocks (Yoon et al. 2019; Zepeda et al. 2022). This is additional indirect evidence that the two most carbon-rich stars in our sample are more likely to be CEMP-no. If LP6 and LP11 are CEMP-s stars, they are among the lowest metallicity CEMP-s stars known. If they are CEMP-no stars, they are among the highest-A(C) CEMP-no stars known. There are not that many literature stars in this region, so it would be interesting to do further higher resolution follow-up of these two stars to investigate their nature.

LAMOST DR8 VaC comparison

A new analysis of the LAMOST DR8 spectra was published in a value-added-catalogue (VaC) by Wang et al. (2022), employing neural networks to derive stellar parameters (T_eff, log g and [Fe/H]).
Ten out of our eleven OSIRIS stars have stellar parameters in the DR8 VaC (the only star absent is our most metal-rich star, LP8, with [Fe/H]_FERRE = −2.9). We present the comparison between the DR8 VaC metallicities and the metallicities derived in this work in Figure 6. The very carbon-enhanced cool star LP6 has extreme metallicities in both the PASTEL and VMP catalogues, which is not unexpected since the spectrum is dominated by carbon features and this is not taken into account in the Wang et al. (2022) analysis. Focusing on the [Fe/H]_VMP estimates, the other stars are all found to have systematically higher metallicities compared to our analysis, mostly between −3.0 < [Fe/H]_W22 < −2.3. Since we are using spectra of much higher SNR and we are employing a dedicated analysis method for extremely metal-poor (and/or carbon-enhanced) stars, we conclude that some caution should be taken with the Wang et al. (2022) VMP catalogues for [Fe/H]_W22 < −2.5. We further note that more EMP stars may be hidden in large catalogues, especially among stars with low S/N spectra.

SUMMARY

In this work, we employed the combination of metallicity-sensitive photometry from the Pristine survey (Starkenburg et al. 2017b) and the large low-resolution spectroscopic LAMOST database to identify promising ultra metal-poor and/or carbon-enhanced extremely metal-poor candidates. We analysed ∼ 7500 LAMOST spectra for targets with [Fe/H]_Pristine < −2.5 and magnitudes brighter than 18, finding success rates of stars with [Fe/H]_spec < −2.5 between 34% and 50%, depending on the applied quality cuts. We inspected all the fits with [Fe/H]_spec < −3.0 to identify candidates for follow-up, and we release this full list together with figures of the best fits (see Section 2.3). We observed eleven of the most exciting candidates (mostly with low LAMOST S/N) using OSIRIS at the GTC. We analysed the higher S/N medium-resolution OSIRIS spectra (R ∼ 2400) using the FERRE code to derive T_eff, [Fe/H] and [C/Fe], adopting log g from photometry. The metallicities for the eleven stars range from [Fe/H] = −2.9 ± 0.1 to −3.8 ± 0.2, with a mean [Fe/H] = −3.4. We set out to identify UMP stars, but none of the targets had [Fe/H] < −4.0 - such stars are indeed incredibly rare. Our selection of (carbon-enhanced) extremely metal-poor stars, however, was still very efficient. For three out of the eleven stars we were able to derive carbon abundances, for the others we derived upper limits - two of which are constraining and classify the stars as carbon-normal. Given their [Fe/H], A(C) and orbital properties, all three CEMP stars are likely part of the CEMP-no category, although the two most carbon-rich objects lie in an underpopulated region, where there are both CEMP-no and CEMP-s stars in the literature. Further follow-up is necessary to understand the physical processes causing the carbon-enhancement in these stars. We derive orbital properties using the OSIRIS radial velocities, Gaia proper motions and distances based on photometry and parallaxes from Gaia combined with MIST isochrones, integrating orbits in MWPotential2014 with a more massive halo. We find that four of the stars have inner halo kinematics, with three of them on prograde orbits. The other seven stars have orbits more consistent with the outer halo. None of the stars in our sample are confidently associated with previously known substructures/accretion events, partly due to uncertainties on the orbital parameters.
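To illustrate the kind of orbit integration summarised above, here is a minimal galpy sketch. For simplicity it uses the stock MWPotential2014 rather than the more massive halo adopted in this work, and the input coordinates are placeholder values, not measurements of any star in the sample.

```python
import numpy as np
from astropy import units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

# Placeholder observables: ra, dec, distance, pm_ra*cos(dec), pm_dec, radial velocity
o = Orbit([210.0 * u.deg, 12.0 * u.deg, 8.0 * u.kpc,
           -2.5 * u.mas / u.yr, -3.0 * u.mas / u.yr, -150.0 * u.km / u.s],
          radec=True)

ts = np.linspace(0.0, 1.0, 2001) * u.Gyr   # 1 Gyr integration time, as in the text
o.integrate(ts, MWPotential2014)

# Main orbital parameters; wrapping this in a Monte Carlo over the observables
# (and their covariances) would give the quoted uncertainties
print(o.rap(), o.rperi(), o.zmax(), o.e())
```

A more massive halo, as adopted in the text, would require substituting a rescaled NFW component for the default one, but the stock model is sufficient to show the workflow.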
Ongoing and upcoming spectroscopic surveys are so large that it is crucial to have general automatic analyses of the spectra, but doing this well for extremely metal-poor stars is a challenge. They are only a small subset, hence pipelines are often not optimised for them, and their spectra are challenging to analyse due to weak spectral features and/or peculiar chemical abundances. It will remain important to do dedicated metal-poor analyses in the future. Adding additional information like metallicity-sensitive photometry as in this work could uncover hidden promising candidates at the lowest metallicities.

edges funding from CNRS/INSU through the Programme National Galaxies et Cosmologie and through the CNRS grant PICS07708. ES acknowledges funding through VIDI grant "Pushing Galactic Archaeology to its limits" (with project number VI.Vidi.193.093) which is funded by the Dutch Research Council (NWO). PJ acknowledges support from the Swiss National Foundation. Based on observations made with the Gran Telescopio Canarias (GTC), installed at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, on the island of La Palma. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council (NRC) of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. This work has made use of data from the European Space Agency (ESA) mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.

DATA AVAILABILITY

The LAMOST spectra used in this work are public. Our EMP candidate list is available in Table 1, and all relevant data for the OSIRIS stars are available in Tables 2 − 4. These tables will also be available at the CDS. The OSIRIS spectra will be shared on reasonable request to the authors.
2023-01-09T06:42:51.875Z
2023-01-05T00:00:00.000
{ "year": 2023, "sha1": "49591bbcd86bef6eafa3b7611b2fd185295fc642", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "49591bbcd86bef6eafa3b7611b2fd185295fc642", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
22876418
pes2o/s2orc
v3-fos-license
Importance of accurate employment histories of patients admitted to units of internal medicine. Scand J Work Environ Health 1991;17:386-91.

A study was undertaken to assess the importance of systematically recording occupational histories of patients admitted to an internal medicine unit of a university hospital. Detailed information on current and past employment was obtained with questionnaires and in personal interviews from 200 inpatients over a 12-month period. Twenty-one patients (10.5 %) were considered to have a "primary illness" (condition causing hospital admission) probably (4.5 %) or possibly (6 %) related to their current or previous occupation. From the 786 primary and secondary illnesses and medical antecedents diagnosed for the 200 patients examined, 70 illnesses of 55 patients were considered probably or possibly related to current or previous occupation. This pilot study emphasizes the need for accurate occupational records for patients in an internal medicine ward. This task is best performed by an appropriately trained occupational physician.

Despite all the sophisticated techniques available to detect functional and morphological organ disturbances, accurate records of a patient's medical history remain of paramount importance. An accurate history can yield essential information with which to formulate possible diagnoses, suggest complementary investigations, adapt the treatment, or assess the prognosis of the illness. The medical history is also the most valuable tool in identifying those environmental factors which cause or aggravate certain pathological conditions. However, the general practitioner and the internist usually limit their questioning to dietary, smoking, and drinking habits and drug consumption. The occupational history is not frequently assessed in detail because it is difficult to perform correctly, is time-consuming, and is considered rarely productive. An evaluation of medical charts by Sokas et al (1) revealed that only half of the charts recorded employment status. It is not surprising that over 80 % of the occupational diseases identified in two occupational medicine clinics in the United States were not correctly diagnosed by the primary physicians (2). But even when a job history is provided by the patient, it is often difficult to ascertain occupational risks for the following reasons: (i) occupational environments are complex and rapidly changing and physicians who are unfamiliar with modern industrial activities cannot always identify the occupational risks in a patient's job; (ii) patients cannot always provide the names of the chemical compounds present at their workplace; (iii) industrial confidentiality may prevent the disclosure of names or formulas of industrial chemical preparations; (iv) the long latency period between the onset of exposure and the symptoms of some diseases (eg, cancer) leads to forgetfulness about past exposure; and (v) the search for an occupational factor is often not considered if another etiologic agent has been identified (eg, tobacco consumption for a patient with lung cancer). The present pilot study was undertaken to assess the importance of systematically recording occupational histories of patients admitted to an internal medicine unit of a Belgian university hospital.
The objective was not to determine the pattern of occupational diseases which may be diagnosed in Belgian hospitals, but to assess whether a detailed occupational record is justified in the medical work-up of hospitalized patients. Although the design of such a study cannot totally exclude the possible influence of observer biases, effort was made to reduce it as much as possible by having the assessment performed independently by four occupational physicians. The results of this preliminary study provide a clear indication of the importance of occupational risk factors in the pathogenesis of diseases diagnosed in general and internal medicine.

Population and hospital characteristics

The study population was a random sample of all the patients who were admitted to a general internal medicine unit in a 12-month period and who fulfilled the following criteria: (i) held paid employment for at least one year in a lifetime and (ii) able to answer a questionnaire and be interviewed for 1 h. Fifty percent of the patients lived in the city of Brussels (1 million inhabitants). The remaining 50 % resided in the French-speaking area of Belgium (3.5 million inhabitants). The hospital had no selection criteria at admission. It acted as a general hospital for the urban population of Brussels and as a referral hospital for the French-speaking area of the country. No Belgian hospital has an inpatient occupational medicine unit. A sample of 224 patients was initially selected, representing 10 % of the patients hospitalized in the internal medicine unit during the study period. Nineteen people were considered uncooperative and five patients who had filled out the forms were discharged before the interview could take place. Two hundred patients (157 men and 43 women) were included in the final study. Their ages ranged from 20 to 82 (mean 51.8, SD 14.6) years. One hundred and thirty-three were still occupationally active; 59 were retired, and eight were unemployed at the time of the study. The ratio of blue-collar to white-collar workers was 85:115. The characteristics of the study population (age and cause and duration of hospitalization) were not markedly different from those of the total population admitted to the internal medicine unit during the same period. The proportion of women was lower in the study population (22 versus 45 %) since 56 % of the hospitalized women had never held paid employment and were not selected for the study.

Occupational history

At admission, each patient in the study population was given a detailed questionnaire adapted from that of Rosenstock et al (3). The form was designed to collect information on (i) all occupational activities, (ii) exposure to chemical, physical, or infectious agents at work and otherwise, (iii) use of personal protective equipment, (iv) personal hygiene practices, (v) second job, and (vi) hobbies. Two or three days later, personal interviews allowed the patient's answers to be explored in depth. The interviews were conducted by three occupational physicians. In several cases, additional information was requested from the employer or the plant physician. After the patients' discharge from the hospital, their medical files were examined and the following data were extracted: (i) diagnosis of the illness which motivated the admission (primary illness), (ii) other diseases identified during the medical work-up (secondary illnesses), and (iii) medical antecedents (ie, diseases which occurred in the past).
Congenital and perinatal diseases, pregnancy, and accidents were not considered in the study. Occupations and diseases were coded according to the classifications of the International Labour Office (4) and the World Health Organization (5). The assessment of a relationship between diseases and occupations was based on data from the literature (ie, reports of an increased relative risk of specific illnesses in some occupations or following exposure to some chemicals), but it also took into account the intensity and the duration of exposure, the latency period from onset of exposure to evidence of disease, and the possible role of other exogenous factors (smoking, alcohol, hobbies, etc). Each current or past illness was assessed as probably related, possibly related, or not related to occupational factors. The criteria for the first category (probably related) were (i) the existence of a well-established association between job and disease and (ii) knowledge of an exposure of sufficient intensity to cause the disease (eg, pneumoconiosis in a coal worker). In the second category (possibly related), the criteria were (i) the existence of epidemiologic studies or case reports suggesting a possible association (eg, liver cirrhosis following long-term exposure to solvent mixtures) and (ii) the existence of a definite association between exposure and disease, the exposure intensity however being considered low (eg, asbestosis in a worker sawing asbestos-cement products intermittently). The final assessment was performed independently by four occupational physicians. In divergent cases the decision was made jointly by the team.

Results

Twenty-one patients (10.5 %) were considered to have a primary illness which was probably related (N = 9) or possibly related (N = 12) to their current or previous occupations (table 1). From the 786 primary and secondary illnesses and medical antecedents diagnosed for the 200 patients examined, 70 illnesses were considered probably related or possibly related to current or previous occupations. Therefore 55 of the 200 inpatients had illnesses resulting from employment. Table 2 presents the nosological distribution of these work-related diseases. Respiratory tract impairment represented the greatest proportion (25.7 %) of the 70 diseases considered probably related or possibly related to occupation. Musculoskeletal diseases were the second most frequent work-related impairments. Seventy-three infectious diseases were identified (7 still active and 66 recorded in the medical history). Fifteen percent of these diseases were related to the patient's occupational activities. There were eight cases of cardiovascular disease caused or aggravated by occupational activities (one diagnosis of right heart failure secondary to silicosis, four cases of angina pectoris occurring in patients exposed to carbon monoxide, and three cases of lower-limb varices in patients whose jobs required long periods of standing). Six (12 %) of the 50 neoplastic diseases were considered possibly related to previous occupational activities. Table 3 presents the distribution of patients with work-related illnesses (primary + secondary illnesses and medical antecedents) in the various occupational groups. It is interesting to note that all of the agricultural workers (N = 6) had work-related illnesses. Table 4 lists the physical, chemical, and biological agents which have been estimated to be probably or possibly responsible for the diagnosed occupational diseases.
Discussion

Before it can be concluded that a patient's illness has been caused or aggravated or accelerated by his or her current or past occupational activities, it is necessary that previous studies have reported the existence of an association between the illness and occupational risk factors and also that the circumstances of the patient's occupational exposure (intensity, duration) are compatible with a cause-effect relationship. In practice, for the various reasons already discussed (see the Introduction), the assessment of the fulfillment of both criteria is difficult. Furthermore, this assessment may also be influenced by the medical context in which it is performed (general practice, work compensation) and the relationship between the patient and the physician (family physician, insurance company physician, etc). These possible biases did not influence the results of this study, which was performed independently of any socioeconomic constraint. Despite these shortcomings, several tentative conclusions can be drawn from the study, whose validity should be assessed by a broader epidemiologic study involving several internal medicine clinics located in various industrial, urban, and rural areas. One must, however, recognize that, in this study, most of the associations suggested between occupational exposures and diseases involved some subjectivity since they were based on the interpretation of a questionnaire and an interview.

Table 2. Primary and secondary illnesses and medical antecedents probably or possibly related to occupation.

They could not always be strengthened by objective data (eg, quantitative assessment of past exposure by environmental and/or biological data). Furthermore, some reported associations are still the subject of controversy, such as the relationship between the development of coronary artery disease and long-term exposure to carbon monoxide (48). Respiratory tract impairments were the most frequent work-related diseases. This finding confirms that the lung represents the main target organ of many inhaled industrial pollutants. The finding is in agreement both with the observations of Cullen & Cherniack in the United States (2) and statistics of the Belgian Work Compensation Fund (49). In 1987, 30 % of the claims received by the Belgian Fund were related to occupational lung diseases. In the present study, the proportion of work-related infectious diseases (10 of the total 70) was higher than that (4 %) found among the compensation claims introduced to the Belgian Work Compensation Fund in 1987 (49), even though compensation claims for most of the diagnosed infectious diseases can be directly submitted for work compensation by the patient or his or her physician. One expects that most work-related infections rapidly heal and therefore do not always lead to a compensation claim, unlike respiratory and osteomuscular diseases. Doll & Peto (50) attributed 2 to 8 % of the cancer deaths among the general population (active + nonactive persons) in the United States to occupations. Our estimation (6 of 50 neoplastic diseases) is higher since it is based on occupationally active persons. In Belgium, the majority of occupational cancers is not reported to the health and safety inspectorate or the compensation board. Over the eight-year period 1979-1986, the average annual number of cancer cases compensated by the Belgian Work Compensation Fund was only 48, compared with an annual number of approximately 4000 compensated occupational diseases (51).
In 1983, of 12 869 cancer cases registered for men by the Belgian Cancer Registry, 68 (0.5 %) were submitted to the Belgian Work Compensation Fund (51). Although hearing loss and skin lesions are frequent occupational health problems, very few patients with these complaints were found in the study. These patients would not normally be hospitalized. In summary, over 25 % of hospital inpatients in general internal medicine who have held employment have or have had work-related pathologies. This finding justifies a detailed occupational record for all employed patients admitted to any other medical ward (eg, pneumology, oncology, hematology, nephrology, gastroenterology, neurology). Although respiratory and orthopedic conditions are more frequently related to occupation than other diseases, any organ can be the target of an occupational hazard. Medical criteria alone are not sufficient to identify the patients whose illness may be work-related and for whom a detailed work history should be obtained by an expert physician. Accurate and systematic work histories must be obtained for each patient. Several authors (52-57) have stressed the importance of training internists (and general practitioners) to consider work-related causes in their diagnoses. However, in view of the complexity and rapidly changing pattern of occupational health risks, it is unlikely that internists will ever be able to assess in detail the present and past occupational history of their patients. Industrial position titles (as listed in table 1) do not frequently permit the assessment of potential occupational hazards. Furthermore, current employment is not always representative of usual occupational exposure conditions (58). A detailed inquiry about each patient's past and present activities is required to pinpoint exposure to any occupational health risks. This task can only be performed by an occupational physician because it not only involves the description of current and previous occupational exposure conditions, but also the assessment of a possible link between the diagnosed pathology and the occupational activities. It should also be stressed that correct, careful, and accurate work records are not only central for deciding whether a patient presents an occupational disease, they may also contribute to the identification of new occupational health risks. Indeed, this information constitutes an essential tool for the proper design of epidemiologic studies.
2018-04-03T06:12:58.534Z
1991-12-01T00:00:00.000
{ "year": 1991, "sha1": "6edc7e771b4e4afe3246b4c689a391a1407d969a", "oa_license": "CCBY", "oa_url": "https://www.sjweh.fi/download.php?abstract_id=1689&file_nro=1", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "022d5cc4c85cdfca0d08d963667b384a81d8dd6e", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
19327521
pes2o/s2orc
v3-fos-license
Quality indicators for hip fracture patients: a scoping review protocol

Introduction

Hip fractures are a significant cause of morbidity and mortality and care of hip fracture patients places a heavy burden on healthcare systems due to prolonged recovery time. Measuring quality of care delivered to hip fracture patients is important to help target efforts to improve care for patients and efficiency of the health system. The purpose of this study is to synthesise the evidence surrounding quality of care indicators for patients who have sustained a hip fracture. Using a scoping review methodology, the research question that will be addressed is: "What patient, institutional, and system-level indicators are currently in use or proposed for measuring quality of care across the continuum for individuals following a hip fracture?".

Methods and analysis

We will employ the methodological frameworks used by Arksey and O'Malley and Levac et al. The synthesis will be limited to quality of care indicators for individuals who suffered low trauma hip fracture. All English peer-reviewed studies published from the year 2000 to the most recent will be included. Literature search strategies will be developed using medical subject headings and text words related to hip fracture quality indicators and the search will be peer-reviewed. Numerous electronic databases will be searched. Two reviewers will independently screen titles and abstracts for inclusion, followed by screening of the full text of potentially relevant articles to determine final inclusion. Abstracted data will include study characteristics and indicator definitions.

Dissemination

To improve quality of care for patients and create a more efficient healthcare system, mechanisms for the measurement of quality of care are required. The implementation of quality of care indicators enables stakeholders to target areas for improvement in service delivery. Knowledge translation activities will occur throughout the review with dissemination of the project goals and findings to local, national, and international stakeholders.

GENERAL COMMENTS

The limitations are mentioned in the paper but not actually discussed. The proposed study deals with a very important issue of hip fractures among elderly patients and is therefore most relevant to healthcare practitioners, researchers and decision makers; however, we think that several additional points should be addressed by the authors in order to further improve their intended publication. 1. We would have liked to see a more elaborate discussion over the choice of the research method for the study. Even though a scoping review format has clear benefits in terms of the volume of reviewed material, more targeted approaches, such as Delphi questionnaires or a Consensus Conference among experts, may potentially provide a more meaningful result. 2. We also find the presentation of the central issue of the article - finding quality of care indicators for hip fracture treatment - somewhat limited, as there are no referenced examples presented where utilization of such indicators for solving a healthcare-related problem was beneficial. 3. There are also several issues related to the universality of considered parameters and the generalizability of the studies' results: • Some numerical parameters in healthcare research could be defined very differently by different experts or in different countries, especially when a continuous parameter is divided into groups.
This could be a serious obstacle for comparison; therefore, it is important to relate to this issue in the study protocol. • The study will survey publications on the subject in the last 14 years; however, exactly in those years new trends in the treatment of hip fractures have appeared. What is the strategy of the authors in regard to comparison between older and newer studies? • Our strongest concern is the idea that universal standards of clinical care could exist without minding the initial differences between the various healthcare systems. There is a vast difference between preferred methods of treatment, the reimbursement of hospitals for this treatment and the multiplicity of other factors, which create very different contexts for the quality indicators described in the published papers. Basically, the indicators for quality of care are inseparable from the standards of care established in different localities and therefore tend to be different between those localities. We think that this publication would greatly benefit from addressing the issues listed above and are thankful for the opportunity given to us to take part in this most important endeavor by writing this review.

VERSION 1 - AUTHOR RESPONSE

1) "The limitations are mentioned in the paper but not actually discussed." Given the format required for BMJ Open, there is not a section entitled "discussion". Therefore, we have addressed this concern in our Methods section. English-language limitations have been addressed in the 3rd paragraph of page 4 of the manuscript. The scoping-review limitation (i.e., quality of evidence not evaluated) has been addressed on page 6, first paragraph. These changes are also stated below: Page 4, 3rd paragraph: and feasibility. Limiting the search to English-language only may result in bias in results towards English-language speaking countries. Page 6, 1st paragraph: This means that results from poor quality studies may be inaccurate and therefore have the potential to bias study findings. 2) "We would have liked to see a more elaborate discussion over the choice of the research method for the study…" We very much appreciated this thoughtful remark, as it guided us to recent work by Stelfox and Straus (2014). You will find that we have addressed this issue in two paragraphs on page 3 of the manuscript. These changes are also stated below: The development of quality of care indicators may occur from a deductive approach (i.e., indicators are derived from scientific evidence, followed by expert opinion if required) or an inductive approach (i.e., existing quality of care data is used to develop indicators) (29). Although there is no gold standard to guide quality of care indicator development, Stelfox and Straus (2014) suggest the approach depends on the strength of evidence for a given indicator as well as its potential impact on patient health (30,31). A national pre-consensus meeting was held in June 2013 to garner experts' opinions on possible (i.e., feasible) quality of care indicators for hip fracture patients (i.e., inductive approach). However, experts felt their suggested indicators were insufficient to appropriately measure the quality of care delivery, particularly across the entire continuum of care. More information with respect to the strength and breadth of scientific evidence, particularly for potential quality of care indicators, was requested. 3) "…there are no referenced examples presented where utilization of such indicators for solving a healthcare-related problem was beneficial".
We have addressed the more explicit purpose and use of quality indicators in the first full paragraph on page 3 of the manuscript. These changes are also stated below: Quality of care indicators are a widely accepted performance measure used to determine the deviation in actual performance from ideal performance (i.e., actual care delivery versus best practice care delivery) (25,26). The implementation of quality of care indicators enables stakeholders to target areas for improvement in service delivery to improve patient outcomes and ultimately save costs (27,28). Examples of positive change resulting from the implementation of quality of care indicator(s) include hip fracture quality of care indicators in the United Kingdom and the World Health Organization's surgical safety checklist (29,30). 4-6) "Some numerical parameters in healthcare research could be defined very differently by different experts or in different countries, especially when a continuous parameter is divided into groups"; "What is the strategy of the authors in regard to comparison between older and newer studies?"; "Our strongest concern is …. Basically, the indicators for quality of care are inseparable from the standards of care established in different localities and therefore tend to be different between those localities". These are valid concerns that we anticipate having with the results of this scoping review. We have acknowledged these concerns on page 7 of the manuscript, within the "synthesis" subsection. These changes are also stated below: Due to the anticipated breadth of evidence that will arise from this scoping review, there is a likelihood that a given quality of care indicator, or potential quality of care indicator, is measured in a number of different ways, is context-dependent, and its applicability may change over the study time period (i.e., within the past 14 years) due to changes in best practice. The synthesis of results will ensure these differences in measurement are highlighted in order to determine potential areas of discussion amongst international experts (e.g., discussion of why certain measures are used, and the pros and cons of each measure). Although different healthcare contexts likely require different quality of care indicators (due to, for example, different funding policies), this synthesis enables discussion of the role of context, as well as any potential areas for international synergy, or at the very least international learnings (i.e., informs a consensus meeting). Trends in quality of care delivery for hip fracture patients have changed over the course of the study inclusion years (i.e., within the past 14 years). These changes will be discussed in brief within our synthesis; however, priority will be given to results that are most recent as they are more consistent with the current healthcare context. This review will identify gaps in the literature as well as future areas for study either via primary research, consensus meeting, or systematic review.
2017-06-20T07:33:06.231Z
2014-10-01T00:00:00.000
{ "year": 2014, "sha1": "e3804d59a4af71e24f45347323c6be630781f212", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/4/10/e006543.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd343777cee79808e33b75eef288da90d9dc87f8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54982619
pes2o/s2orc
v3-fos-license
Combined optical porosimetry and gas absorption spectroscopy in gas-filled porous media using diode-laser-based frequency domain photon migration

A combination method of frequency domain photon migration (FDPM) and gas in scattering media absorption spectroscopy (GASMAS) is used for assessment of the mean optical path length (MOPL) and the gas absorption in gas-filled porous media, respectively. Polystyrene (PS) foams, with extremely high physical porosity, are utilized as sample materials for proof-of-principle demonstration. The optical porosity, defined as the ratio between the path length through the pores and the path length through the medium, is evaluated in PS foam and found consistent with the measured physical porosity. The method was also utilized for the study of balsa and spruce wood samples.

Introduction

Tunable diode laser absorption spectroscopy (TDLAS) has been widely used for selective and sensitive gas detection in a multitude of contexts, e.g., combustion diagnostics [1,2], atmospheric trace gas monitoring [3,4], and biomedical applications such as human breath monitoring [5,6]. A review paper regarding the application of the TDLAS technique can be found in [7]. About ten years ago, the TDLAS technique was applied for the first time to assess the gas content in porous media. The technique, referred to as gas in scattering media absorption spectroscopy (GASMAS), is based on the much sharper absorption features of gas compared with solid materials [8]. Some of the present applications of GASMAS include practical aspects - such as gas assessment in food packaging [9], wood drying process monitoring [10] and human sinus diagnosis [11] - as well as fundamental aspects of physics - such as wall collision broadening in nanoporous ceramics [12,13]. All media involved in GASMAS applications are porous and highly scattering. As known from the Beer-Lambert law, the absorption signal is dependent upon the product between the gas concentration (C_0) and the path length through the gas/pore (L_gas). In a conventional TDLAS application, the path length is always known, e.g., equal to the length of a well-defined gas cell. However, in a porous medium, the path length through the gas is unknown due to scattering, and it is highly dependent upon the optical properties of the medium.
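As a minimal numerical illustration of this concentration-path-length ambiguity, the sketch below shows the weak-absorption Beer-Lambert scaling that GASMAS relies on: the measured signal is proportional to the concentration-path-length product, so with a known gas concentration (e.g., atmospheric oxygen) an equivalent path length through the pores can be obtained from the ratio to a reference signal recorded over a known air path. The function name and the numbers are illustrative assumptions, not values from the paper.

```python
def equivalent_path_length(signal_sample, signal_reference, path_reference):
    """Equivalent gas path length from the ratio between the absorption signal
    measured through the sample and a reference signal recorded over a known
    path in ambient air, assuming the same gas concentration in both."""
    return signal_sample / signal_reference * path_reference

# Example: a sample 2f signal that is 5% of a reference recorded over 0.80 m of air
L_gas = equivalent_path_length(0.05, 1.0, 0.80)
print(L_gas)   # 0.04 m of equivalent path through the gas-filled pores
```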
Extensive efforts have been devoted to the evaluation of the path length in a scattering medium. One approach is the so-called time-of-flight spectroscopy (TOFS) technique [14], which uses a picosecond pulsed laser to illuminate the porous medium and measure the time dispersion curve, i.e., the time-of-flight (TOF) curve [15]. A mean optical path length (MOPL), L_m, through the whole medium can be obtained using transport theory [16]. However, the MOPL cannot specifically assess the path length through the gas-filled pores, since the scattered light also passes through the matrix material of the porous medium. If we define the path length through the matrix material as L_solid and the refractive index of the matrix material as n_solid, and use the MOPL as the gas absorption path length in the pores of the medium, only a relative gas concentration can be obtained, but not the absolute gas concentration in the pores. However, this relative gas concentration can still be interpreted as an average gas concentration distributed in the porous medium. On the other hand, the path length through the gas-filled pores can be retrieved if the gas concentration is known. Thus, a so-called optical porosity (ρ_o) can be derived from the ratio between the path length through the pores and the physical path length through the whole medium. The optical porosity has previously been studied in, e.g., pharmaceutical tablets [17,18] and ceramics [19], which shows that the optical porosity gives significant information about the material properties of the porous media. However, there is a critical technical limitation for the combined method of TOFS and GASMAS: TOFS utilizes a picosecond pulsed light source which is inherently incoherent, while GASMAS uses a continuous and coherent light source (typically of 10-MHz linewidth), making an efficient integration of the devices difficult. The use of two parallel setups makes assessment of MOPL and gas absorption cumbersome and prone to measurement errors. Additionally, the TOFS system has a high degree of complexity and cost compared to the GASMAS system. Thus, finding a method to obtain the MOPL through the scattering medium and the gas absorption in the pores using a single and robust setup becomes desirable. Recently, a new attempt was reported in [20], where the frequency modulated continuous-wave (FMCW) technique - which is based on the beat signal in a Mach-Zehnder interferometer employing a frequency-ramped tunable diode laser - was used to assess the optical path length, and the GASMAS technique was used to measure the water vapor absorption signal around 937 nm. All measurements were performed on expanded polystyrene (EPS/PS) foams in a single setup using the same diode laser. In the present work, we demonstrate a new integrated method which utilizes the frequency domain photon migration (FDPM) technique to evaluate the MOPL through the medium, and GASMAS to evaluate the oxygen molecular absorption around 763 nm in the gas-filled pores. The FDPM method, which is based on the same principles as the well-known phase shift method in atomic physics [21,22], has been widely used, e.g., in biomedical applications to evaluate the optical properties of human tissue [23,24]. Here the detected light signal from an intensity-modulated continuous-wave light source transmitted through a porous medium is phase shifted and its modulation depth is decreased due to internal multiple scattering [25].
By measuring the phase shift (Δφ) and the modulation depth variations between the incident light signal and the transmitted light signal, the optical properties and MOPL can be retrieved according to transport theory [26]. However, for technical reasons, in the present work only the phase shift is used to retrieve the MOPL. The basic requirement of the FDPM method is that the light source should be intensity modulated at high frequencies (typically around 100 MHz), which can be readily achieved using tunable single-mode diode lasers. Thereby a combination of the FDPM and GASMAS techniques into a single compact setup for MOPL and gas absorption evaluation becomes possible. As a proof-of-principle demonstration, five PS foam samples with extremely high physical porosity (97% - 99% open pores) [27] were measured and the optical porosity was evaluated. Additionally, samples of balsa and spruce wood - which have been investigated previously in [28] using the GASMAS method - are here further studied.

Instrumentation

The experimental setup, including the FDPM and the GASMAS subsystems, is depicted in Fig. 1. In the present embodiment, the two subsystems can be switched manually. The FDPM subsystem utilizes a homodyne demodulation scheme, where the phase shift is measured directly at high frequency (e.g., 140 MHz) [29,30]. In this mode the signal generator for the GASMAS subsystem is turned off. A constant current from the diode laser driver (06DLD103, Melles Griot) and an RF modulation signal generated by an RF source (SML01, 9 kHz - 1.1 GHz, Rohde&Schwarz) are coupled via a bias tee circuit into the diode laser mount (TCLDM9, Thorlabs) to operate the diode laser (#LD-0763-0050-DFB-1, Toptica). The intensity modulated light is then collimated and guided to illuminate the sample. The transmitted light signal is detected by a photomultiplier tube (PMT, R5070, Hamamatsu), which is placed 80 cm away from the diode laser. The light source area and the detection area are separated by black boxes to avoid accidental interferences. Since the quantum efficiency of the PMT surface is very nonuniform - which could induce different rise times and thus phase errors - a 5.4-mm diameter pinhole is inserted after the sample in direct proximity to the PMT. It should be noted, however, that for the GASMAS measurement, a large pinhole size is preferred since it would conversely yield a better signal-to-noise ratio (SNR) for the gas signal. The output current of the PMT is amplified by a wide band transimpedance amplifier (C6438, Hamamatsu), and sampled by a digital oscilloscope (TDS 540C, Tektronix) with 25-GHz sampling frequency, which simultaneously samples the reference RF signal from the RF source. Both the reference signal and the detected light signal are averaged 50 times in the oscilloscope, which significantly improves the SNR of the detected light signal. The digitized signal is then transferred to a computer via a general purpose interface bus (GPIB) and analyzed by a digital phase detector - an in-phase quadrature (IQ) demodulator. A detailed description of the working principles of an IQ demodulator can be found in [30]. The advantage of the digital IQ demodulator is that it does not suffer from any amplitude and phase imbalance, thereby introducing less phase error. However, a drawback is that it requires extremely high frequency sampling, e.g., via a broad-band oscilloscope. The frequency and power of the RF source are also computer controlled using GPIB.
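The digital IQ phase detection mentioned above can be illustrated with the following minimal numerical sketch (not the authors' actual implementation): the sampled signal is multiplied by in-phase and quadrature references at the modulation frequency, averaged (which acts as a low-pass filter), and the phase is recovered with atan2. The sampling rate, modulation frequency and synthetic waveforms are assumed values for the example.

```python
import numpy as np

def iq_phase(signal, fs, f_mod):
    """Digital IQ demodulation: phase (rad) of `signal` at the modulation
    frequency f_mod, given the sampling frequency fs."""
    t = np.arange(len(signal)) / fs
    i_comp = np.mean(signal * np.cos(2 * np.pi * f_mod * t))
    q_comp = np.mean(signal * np.sin(2 * np.pi * f_mod * t))
    return np.arctan2(q_comp, i_comp)

# Synthetic example: 140 MHz modulation sampled at 25 GS/s over 112 full periods,
# with the detected signal lagging the reference by 30 degrees
fs, f_mod, n = 25e9, 140e6, 20000
t = np.arange(n) / fs
reference = np.cos(2 * np.pi * f_mod * t)
detected = 0.3 * np.cos(2 * np.pi * f_mod * t - np.deg2rad(30.0))
phase_shift = iq_phase(detected, fs, f_mod) - iq_phase(reference, fs, f_mod)
print(np.rad2deg(phase_shift))   # ~30 degrees
```

In the real instrument the reference channel would be the simultaneously sampled RF signal, so applying the same demodulation to both channels and taking the difference removes any common phase offset.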
Fig. 1. Setup scheme: the dotted rectangular components are used for the FDPM measurements and the rounded rectangular components for the GASMAS measurements. The two subsystems can be switched manually.

In the GASMAS subsystem, wavelength modulation spectroscopy (WMS) [31] is employed to pick up the weak oxygen absorption signal around 763 nm from the gas in the porous medium, while the RF source is turned off. The current of the diode laser is modulated by a 4-Hz triangle signal together with a 9.025-kHz sine signal. The transmitted light is detected by the PMT, operated at a voltage chosen to avoid saturation, and then amplified by a high current-to-voltage-ratio amplifier (DHPCA-100, Femto). An analog-to-digital (AD) converter card (NI6132) with 400-kHz sampling frequency samples the voltage signal and transfers it to the computer for analysis. The 2f absorption signal is extracted by a Fourier-transform-based digital lock-in amplifier [32,33].

Mean optical path length (MOPL) evaluation

Light propagation in scattering media can be described by the radiative transport equation (RTE). Using the diffusion approximation with extrapolated boundary conditions, where a series of mirror sources is placed along the light incidence direction to eliminate the boundary effect between the sample and the surrounding medium (e.g., air), an analytical solution for the light transmitted through a scattering medium with slab geometry can be obtained. The parameters of the extrapolated boundary condition and the measurement geometry are illustrated in Fig. 2. For incident light intensity modulated at a frequency f_0 and transmitted through a scattering medium with reduced scattering coefficient μ_s' and absorption coefficient μ_a, the transmitted light wave for this geometry is given by Eq. (1) [26,34]. Here ξ is the source-detector separation indicated in Fig. 2, c' is the light speed in the scattering medium, and the diffusion coefficient entering Eq. (1) is 1/[3(μ_s' + μ_a)]. The coefficient A can be fitted according to Eq. (A3) in [35]. The phase shift due to multiple scattering (Δφ) can be derived from the ratio between the real and imaginary parts of the transmitted light wave described by Eq. (1), and μ_s' and μ_a can be evaluated by fitting Δφ at several different frequencies. The transmitted light intensity in the time domain, i.e., the response to a pulsed light source, referred to as the TOF curve, can then be calculated from Eq. (3) [35], where n_sm is the refractive index of the bulk material.

A simulation is given in Fig. 3 to clarify the relationship between phase shift and modulation frequency. The optical properties used in the simulation are typical values for PS foam, i.e., μ_s' = 3410 and μ_a = 0.17, with a refractive index of the PS foam of n_sm = 1.01 [36]. The sample thickness in this simulation is 30 mm, and the source-detector separation ξ is 0. A linear approximation of the phase shift derived from the MOPL, as given in Eq. (5), is also illustrated in Fig. 3. The figure suggests that the MOPL determined from the measured phase shift using Eq. (5) is underestimated, and that the difference becomes larger as the modulation frequency increases. The difference also depends upon the optical properties and the sample thickness of the scattering medium. However, a good approximation of the MOPL can still be obtained with the phase-shift method at one low modulation frequency, e.g., below 20 MHz, although at the cost of decreased phase resolution. In the present work, the MOPL is calculated in two ways: one approach is to use Eq. (4) together with the optical properties derived from the fitting of Eq. (3); the other is to use the linear relationship between the MOPL and the phase shift at a low modulation frequency, as described in Eq. (5).
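The single-frequency evaluation relies on the phase shift being, at low frequency, proportional to the mean time of flight. A minimal sketch of this conversion, assuming Eq. (5) has the standard linear form Δφ ≈ 2πf⟨t⟩ (this form is an assumption; it is, however, consistent with the rule of thumb quoted later that 0.1° at 10 MHz corresponds to roughly 1 cm of path length):

```python
import numpy as np

C0 = 299_792_458.0  # vacuum speed of light (m/s)

def mopl_from_phase(delta_phi_deg, f_mod_hz, n_sm=1.01):
    """Estimate the MOPL from the phase shift at one low modulation frequency.
    Assumes delta_phi = 2*pi*f*<t>, with <t> the mean time of flight, and
    converts <t> to a path length using the in-medium speed c0/n_sm; for
    n_sm close to 1 the distinction from c0 is negligible."""
    mean_tof = np.deg2rad(delta_phi_deg) / (2.0 * np.pi * f_mod_hz)
    return mean_tof * C0 / n_sm

# 0.1 degree at 10 MHz with n_sm = 1.01 gives roughly 0.8 cm:
print(f"{mopl_from_phase(0.1, 10e6) * 100:.2f} cm")
```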
Measurement procedure and post-measurement analysis

Before the phase shift introduced by the sample can be extracted, the instrument-response phase shift (φ_0) must be calibrated. The measured phase shift, however, depends heavily upon the characteristics of the PMT. The rise time of the PMT is affected by the incident light intensity, so the PMT has a different phase response for different incident light intensities, an effect often referred to as amplitude-phase crosstalk [37-39]. Different PMT voltages also influence the rise time, i.e., yield a different phase response. In our measurements, in order to reduce the phase error, the PMT voltage and the detected light intensity are kept constant during the sample measurement as well as during the instrument-response calibration. To further reduce the phase-measurement error, a piece of scattering material, here white paper of thickness approximately 0.1 mm, is placed in front of the pinhole when calibrating the instrument response, thereby ensuring a flat intensity distribution on the PMT surface. The path length through the thin white paper is negligible [40]. The measurement procedure is as follows:

a. Measure the integrated absorption signal through the sample and the air using the GASMAS mode.

b. Switch to the FDPM mode and measure the phase shift φ_m, due to the combined effect of multiple scattering in the sample (Δφ) and the instrument response (φ_0), at, e.g., 5, 10, 20, 30, 40 and 50 MHz modulation frequency.

c. Insert a piece of white paper before the PMT and use a variable neutral-density (ND) filter to adjust the detected light intensity so as to keep the same direct output voltage as in the sample measurement. Thereby the instrument response (φ_0), due to the instruments and the passage through the air, is measured.

d. Switch to the GASMAS mode and measure the oxygen absorption signal through the air (80 cm path).

We note, however, that the air path length is different in measurement steps (b) and (c), and that the difference corresponds to the sample thickness d. The phase shift due to the sample must therefore be compensated for this missing air path (the compensation follows directly from the modulation frequency and the speed of light in air). The MOPL (L_m) is then calculated using Eq. (4), as well as from the phase shift at, e.g., 10 MHz according to Eq. (5).

The 2f absorption signal through the air, i.e., measurement step (d), is used as a reference to fit the 2f absorption signal through the sample and the air, i.e., measurement step (a). The fitting model, which includes a second-order baseline correction, is given by Eq. (6). According to the Beer-Lambert law and WMS theory, the 2f absorption signal is proportional to the product of gas concentration and path length when the absorption is weak. Since the oxygen concentration in the open pores of the samples can be considered the same as in the ambient air, the path length through the gas-filled pores in the sample can then be obtained from the fitted signal ratio and the known air reference path.
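The fit against the air reference (Eq. (6)) can be sketched as an ordinary linear least-squares problem. The exact form of Eq. (6) is not reproduced here; the sketch assumes the model "scaled reference plus second-order polynomial baseline" described in the text, with illustrative variable names and a synthetic line shape.

```python
import numpy as np

def fit_2f_signal(sample_2f, reference_2f):
    """Fit sample_2f(x) ~ k*reference_2f(x) + a0 + a1*x + a2*x**2 and return
    the scale factor k (proportional to the gas absorption path length)
    together with the baseline coefficients."""
    x = np.linspace(-1.0, 1.0, len(sample_2f))            # normalized scan axis
    design = np.column_stack([reference_2f, np.ones_like(x), x, x**2])
    coeffs, *_ = np.linalg.lstsq(design, sample_2f, rcond=None)
    return coeffs[0], coeffs[1:]

# Synthetic demonstration with a toy 2f-like line shape:
x = np.linspace(-1.0, 1.0, 500)
ref = (1 - 2 * (x / 0.2) ** 2) * np.exp(-(x / 0.2) ** 2)
sample = 3.0 * ref + 0.05 + 0.02 * x                      # k = 3 plus a weak baseline
k, baseline = fit_2f_signal(sample, ref)
print(k)  # ~3.0; scaled by the 80-cm reference path this gives the
          # equivalent air path of the sample measurement
```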
With a known refractive index of the matrix material (n_solid) of the porous medium, the physical path length through the bulk medium is given by Eq. (7); the optical porosity can thus be calculated from the ratio L_gas/L_physical.

In the present work, five PS foam samples of thickness 19 mm, 24 mm, 29 mm, 35 mm and 39 mm, respectively, are used for proof-of-principle measurements to validate the method. Additionally, a 10.3-mm-thick balsa sample and an 8.6-mm-thick spruce sample are measured to demonstrate the potential of this technique for the characterization of wood. PS/EPS foam is usually made of pre-expanded polystyrene beads, i.e., its matrix material is PS. The refractive index of PS at 763 nm, fitted according to Cauchy's approximation [41], is 1.58. Dry wood consists primarily of cellulose, hemicellulose and lignin [42]. Cellulose and hemicellulose are carbohydrates and constitute 65%-75% of dry wood [43]. The refractive index of cellulose is therefore used here as the characteristic value for the matrix material of wood; fitted according to Cauchy's approximation, it is 1.47 [41].

Physical porosity assessment

In order to evaluate the performance of our method, the physical porosity of the PS foam is also measured. The physical porosity is defined as the fraction of void in the total volume. In the present work it is determined simply by measuring the total volume (V_tot) and weight (w_tot) of the samples. The weight is measured with an electronic balance (Libror EB-280, Shimadzu Corporation). The volume of the matrix material of a sample containing no pores is given by w_tot/ρ_m, where ρ_m is the density of the pore-free matrix material. The physical porosity is then given by (V_tot - w_tot/ρ_m)/V_tot. The density of PS is 1.05 g/cm³ [44], and the density of the matrix material of wood is approximately 1.5 g/cm³ [42,45].
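The gravimetric estimate just described reduces to one line of arithmetic; a small sketch, with the sample dimensions and weight below chosen only as placeholders (the matrix densities are those quoted in the text):

```python
def physical_porosity(total_volume_cm3, total_weight_g, matrix_density_g_cm3):
    """Void fraction of a porous sample from its outer volume, its weight and
    the density of the pore-free matrix material."""
    matrix_volume = total_weight_g / matrix_density_g_cm3
    return (total_volume_cm3 - matrix_volume) / total_volume_cm3

# Hypothetical 10 cm x 10 cm x 3.9 cm PS foam block weighing 12 g
# (matrix density of polystyrene: 1.05 g/cm^3):
print(physical_porosity(390.0, 12.0, 1.05))  # ~0.97, i.e. ~97% porosity
```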
PS foam measurement results

The recorded raw signals for the 39-mm sample at 50 MHz are given in Fig. 4, with the direct-voltage components filtered out. The relationships between φ_m, φ_0 and Δφ are also indicated. It is clear from the figure that, compared with the instrument-response signal, the amplitude of the scattered light signal is decreased and a phase shift is introduced as a result of the internal multiple scattering in the PS foam. The recorded phase shifts at 5, 10, 20, 30, 40 and 50 MHz modulation frequency for the PS foams are shown in Fig. 5. As expected from the simulation results, the phase shift tilts down at higher modulation frequencies, and the tilt increases as the sample becomes thicker. The optical properties, as well as the MOPL calculated both from the fitting according to Eq. (4) and from the 10-MHz modulation frequency, are given in Table 1, where the refractive index (n_sm) used in the fitting is 1.01. The effect of the large detection area of the PMT is not considered in the simulation, since it does not have a significant effect on the MOPL. As can be seen from Table 1, the MOPL values calculated at 10 MHz are close to the values calculated from Eq. (4) for the thinner samples. The difference between the two approaches becomes larger as the sample thickness increases, but it remains relatively small (<2%). It should be noted that in the present measurements the RF signals were averaged 50 times prior to the analysis. This reduces the variation of the recorded phase shifts to less than 0.02°, which is negligible. The fitting errors are also very small; however, since repeated measurements were not performed on the same sample, this variation cannot be given. Another source of uncertainty is the system drift, which has not been investigated thoroughly in the present work but is typically less than 0.1° during the measurement of each sample. An exact uncertainty value for each measurement can therefore not be asserted, but the uncertainty of the MOPL measurements is expected to be less than 1 cm (a phase shift of 0.1° at 10 MHz corresponds to about 1 cm of path length). The path lengths through the pores of the PS foams evaluated by GASMAS are shown in Table 2. The uncertainty of the gas absorption path lengths depends upon the accuracy of the calibration procedures and is estimated to be around 1 cm. The MOPL calculated from Eq. (4) is used for the physical path length evaluation according to Eq. (7). The path lengths through the gas, the physical path lengths, and the optical and physical porosities are presented in Table 2. L_gas and L_m are very similar to each other, in line with the experimental results measured by the TOFS and FMCW techniques reported in [16] and [20], respectively. The optical porosities show good consistency with their physical counterparts. The 100% optical porosity obtained for the 19-mm PS foam is mainly due to the uncertainty of the path length measurement (≤1 cm). Even so, the difference between the optical and the physical porosity is quite small (≤3%).

Wood measurement results

The recorded phase shifts of the balsa and spruce samples are given in Fig. 6. The refractive index of the wood samples used in the fitting procedure is assumed to be 1.40. The optical properties, the MOPLs, the path lengths through the gas, the physical path lengths, and the optical and physical porosities are given in Table 3.
As can be seen from Fig. 6, the deviation between the fitted and the experimental values is much larger than for the polystyrene foam, but still below 0.1°. This is mainly due to the lower SNR resulting from the much larger reduced scattering coefficients of the wood samples. The uncertainties of the measurements for the wood samples are nevertheless similar to those for the PS foams. The optical porosities of balsa and spruce are found to be only 63% and 54% of the corresponding physical porosities, respectively. This large discrepancy between the optical and physical porosities is mainly due to the optical properties and the structure of the wood samples. The inhomogeneity of the material (i.e., its spatial variance) can also contribute, although to a lesser extent. It should be noted that the refractive index of cellulose (1.47) was used as the refractive index of the matrix material of wood in the calculation, i.e., n_solid = 1.47, which may not be very accurate. In order to investigate the effect of n_solid, the optical porosities of balsa and spruce were calculated for different values of n_solid (from 1.20 to 2.00); they are found to vary from 55% to 65% and from 32% to 44%, respectively. These values are still much smaller than the physical porosity.

Discussion and conclusion

The combination of the FDPM and GASMAS methods described in this work is clearly feasible for parallel assessment of the MOPL and the gas absorption in gas-filled porous media. The optical porosities of the PS foams are consistent with their physical porosities, while the optical porosities of the wood samples are much smaller than their physical porosities. This implies that the optical porosity may not directly correspond to the value of the physical porosity for wood samples. On the other hand, the difference between the optical and physical porosities also indicates a preference of the light for travelling in the solid material. The ratio between the optical and the physical porosity could be constant for the same type of sample, as discussed in [17]. Thus, the optical properties could still provide information about the physical porosity and be used in industrial applications, e.g., pharmaceutical manufacturing, where the density of the solid material can be difficult to determine. For practical applications, the system can be made more robust and compact if fiber optics are used to deliver and collect the light; in situ porosity measurement inside wood samples, for instance, could then be envisaged. However, it should be noted that the detected transmitted light intensity decreases quickly as the sample thickness increases. In the present work, the optical porosity is evaluated from the gas absorption path length and the MOPL, and both quantities therefore influence its accuracy. The accuracy of the MOPL could be improved by increasing the modulation frequency and the signal-to-noise ratio of the transmitted light, while the accuracy of the gas absorption path length is difficult to improve because of interference fringes and the weak absorption signal. The gas absorption path length should be much larger than 1 cm in order to obtain reliable and useful results. A comprehensive investigation of the measurement accuracy could be the topic of future work.
Interesting future work could be the further study of the relationship between the optical and physical porosities, as well as of the influence of such factors as pore size, refractive index and optical properties. It should be noted that the refractive index of the matrix material of the porous medium must be known when assessing the optical porosity. If the refractive index of the matrix material (n_solid) is unknown, we suggest using a relative optical porosity (ROP), given by L_gas/L_m, in analogy with L_gas/L_physical for the optical porosity. The ROP will then contain more information about the optical properties of the porous medium.

In the fitting procedure, the refractive indices of the PS foam and the wood samples are 1.01 and 1.40, respectively. We note that the refractive index only affects the optical properties and not the MOPL, since the same refractive index is used to calculate the MOPL in Eqs. (3) and (4). An extended application of the combined FDPM and GASMAS method is to provide information on the refractive index of the scattering medium (bulk material), i.e., n_sm, which is important for understanding its optical properties [46]. Since the light velocity in the medium is the physical path length divided by the travel time, n_sm, defined as the ratio of the light velocity in vacuum to the light velocity in the medium, is given by L_m/L_physical. If the refractive index of the matrix material (n_solid), and thereby the physical path length, is known, as in the case of ceramics, PS foam and porous silicon, the refractive index of the bulk material (n_sm) can thus be determined.

As can be seen in Table 1 and Table 3, the MOPL calculated from Eq. (5) and the MOPL obtained by fitting are quite close to each other. This shows the possibility of using a single frequency to evaluate the MOPL, as demonstrated in [47], which significantly simplifies the system. However, before a single frequency is used to retrieve the MOPL, one should make sure that the phase shift at this frequency has an approximately linear relationship with the MOPL, as described in Eq. (5). The disadvantage of using only one low frequency is the reduced resolution and accuracy. The FDPM subsystem of the current setup relies on the homodyne demodulation technique, which in our case requires a high-frequency sampling oscilloscope. Although the digital IQ demodulator introduces much less phase error, the dependence on an oscilloscope is less convenient and robust. The FDPM subsystem could also be implemented using the heterodyne technique, which would make the whole system more compact. The modulation frequency of the system could thereby be increased, e.g., up to 200 MHz, which would significantly increase the MOPL resolution.

Fig. 2. Measurement geometry and illustration of the parameters of the extrapolated boundary condition. The incident light intensity is modulated at a frequency f_0 and transmitted through a scattering medium with reduced scattering coefficient μ_s' and absorption coefficient μ_a; d is the sample thickness, ±z_m are the depths of the mirror sources, and ±ρ_m are the distances between the mirror sources and the position of light detection. ±z_m, ±ρ_m and the coefficient α are given by Eq. (2).
Fig. 4. Recorded raw signals: the reference signal is sampled directly from the RF source, the instrument response is detected without any sample, and the scattered light signal is detected after the sample.
2018-12-06T10:56:22.706Z
2012-07-16T00:00:00.000
{ "year": 2012, "sha1": "146562a0cecbec73dc477c8f0b92d82005653e3c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1364/oe.20.016942", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "1d4ad68628e403280b9d57131918d9bd6d57facf", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
103557706
pes2o/s2orc
v3-fos-license
Influence of neutron emission on the charge, mass and kinetic energy distribution of final fragments from 235U(nth, f) reaction Using a Monte Carlo method we simulate an experiment that measures the mass, charge and kinetic energy of final fragments (after neutron emission) from the 235U(nth, f) reaction. As input data for the simulation, for primary fragments (before neutron emission), we assume (i) a distribution of mass and kinetic energy; (ii) an average number of emitted neutron as a decreasing linear function of kinetic energy and (iii) for each mass, a constant yield of charges as a function of kinetic energy, equal to that obtained by W Lang et al for the highest measured kinetic energy window (108.5 MeV) which corresponds to the lowest measured excitation energy, that corresponds to the so called cold fission. The output of the simulation is the distribution of mass, charge and kinetic energy of final fragments. From this output we obtain that, for a given mass, the charge that has the highest yield in cold fission region has a yield obeying an increasing function of kinetic energy in all other region. Conversely, the yield of the less probable charge in cold fission is a decreasing function of kinetic energy in all other region. Our results of simulation suggest that, for two primary isobaric fragmentations with similar Q-value, the preference for more asymmetric charge splits (called Coulomb effect), observed in cold fission, is valid in all region, but neutron emission shadows this property in final fragments distribution. Introduction Coulomb interaction between the complementary primary fragments from nuclear fission of actinides is responsible for most of the total kinetic energy of those fragments [1]. The knowledge of the distribution of charge, mass and kinetic energy of primary fragments (before neutron emission) is necessary to study the fission process (from saddle to scission point) [2]. That distribution would express shell effects [3] or other properties on scission configuration. However, only final fragments (after neutron emission) are detected. Due to emission of neutrons, the distribution of mass, charge and kinetic energy of final mass is different from the corresponding distribution of the primary fragments and consequently may lead to erroneous theoretical conclusions about fission process. Nevertheless, for the thermal neutron induced fission of 235 U ( 235 U(n th , f) reaction), there is a region of high kinetic energy where no neutron is emitted. In this region, called cold fission because it corresponds to the low excitation energy, the primary fragments are detected [4]. W Lang et al measured the mass and charge distribution for several windows of kinetic energy of final fragments, the highest of which corresponds to 108.5 MeV [5]. In this cold fission region, it was observed that for two isobaric fragmentations with the similar Q-value, the more asymmetric charge split (lower light fragment charge) occurs with higher probability. This property was named Coulomb effect [6,7]. The Coulomb effect is due to the fact that between two isobaric scission configurations, with similar Q-values and similar shape configuration, the more asymmetric charge split has a lower Coulomb potential energy (C) then a lower total potential energy, P=C+D, where D is the total deformation energy of the complementary fragments. 
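A rough numerical illustration of the Coulomb-energy part of this argument (not taken from the paper; the total charge, the charge splits and the separation below are placeholders): for two isobaric splits of the same total charge at the same separation r, the point-charge energy Z1·Z2·e²/(4πε0·r) is lower for the more asymmetric split simply because the product Z1·Z2 is smaller.

```python
E2 = 1.44  # e^2/(4*pi*eps0) in MeV*fm

def coulomb_energy_mev(z1, z2, r_fm):
    """Point-charge Coulomb interaction energy (MeV) of two fragments
    whose charge centers are separated by r_fm femtometers."""
    return E2 * z1 * z2 / r_fm

# Hypothetical isobaric charge splits of total charge Z = 92 at r = 18 fm:
for z_light in (38, 40):                  # more asymmetric vs. less asymmetric split
    z_heavy = 92 - z_light
    print(z_light, z_heavy, round(coulomb_energy_mev(z_light, z_heavy, 18.0), 1))
# The 38/54 split lies a few MeV below the 40/52 split.
```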
A lower P makes it possible to reach a more compact scission configuration, which implies a higher Coulomb energy and therefore a higher yield of that charge split at the higher values of kinetic energy [6,7]. Moreover, a higher free energy (F = Q - P) of a given configuration implies a higher probability for that configuration to occur.

In the experimental data from the 235U(n_th, f) reaction [5] one can also observe that the yield of the most probable charge is an increasing function of the kinetic energy of the final fragment. In order to interpret this property, given that fragments with lower kinetic energy emit a higher number of neutrons, we use a Monte Carlo simulation to study the influence of neutron emission on the charge yield as a function of mass and kinetic energy of the final fragments.

The Monte Carlo method has been used by several authors to simulate the emission of particles in various fission and fusion processes. A Monte Carlo simulation has been used to calculate the probability of fission in binary light nuclear systems [8]. The cascade neutron-emission process from excited fragments was simulated to calculate the multiplicity of prompt neutrons and the neutron energy as a function of the mass and total kinetic energy of the fission fragments [9]. A Monte Carlo method was used to simulate the emission of neutrons, even before the complete acceleration of the fragments, in order to calculate the neutron multiplicity as a function of the total kinetic energy of the fragments [10]. The Monte Carlo Hauser-Feshbach approach was applied to calculate the characteristics of gamma rays and neutrons and to compare them with experimental data [11]. A Monte Carlo Hauser-Feshbach LILITA code was used to simulate the fusion-evaporation reaction related to neutron emission [12]. In this work, we use the Monte Carlo simulation only to study the difference, due to neutron emission, between the distributions of charge, mass and kinetic energy of the final and the primary fragments, respectively. The distribution of final fragments produced as the output of the simulation will be compared with the experimental data from the 235U(n_th, f) reaction obtained by Lang et al at the LOHENGRIN mass separator of the Institut Laue-Langevin (ILL) [5]. These input data were taken from [13], where they were used to reproduce the experimental distribution of mass and kinetic energy of final fragments obtained by D Belhafaf et al [14].

Monte Carlo simulation

As input data for the Monte Carlo simulation of the 235U(n_th, f) reaction, for the primary fragments we assume (i) the yield of mass Y(A), the neutron multiplicity as a function of mass number, ν(A), and the average kinetic energy as a function of mass number, Ē(A), presented in figures 1(a)-(c), respectively; and (ii) a standard deviation of the kinetic energy (σ_E) that is a linear function of mass, equal to 4 MeV for A = 80 and 6 MeV for A = 118. For each mass number A, we simulate the distribution of the number of emitted neutrons (n) with an average n̄ that is a decreasing linear function of the kinetic energy, and with a standard deviation whose parametrization involves ν(A), the neutron multiplicity as a function of primary fragment mass presented in figure 1(b), and a parameter β assumed to be 0.6 (a sketch of this sampling scheme is given below).
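A minimal sketch of the sampling loop described above follows. The slope of n̄ versus kinetic energy, the width of the neutron-number distribution, and the final-fragment relations m = A - n and e = E(A - n)/A (momentum conservation for neutrons evaporated from a fully accelerated fragment) are illustrative assumptions, not the exact parametrizations of [13].

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_final_fragment(A, E_mean, sigma_E, nu_A, beta=0.6, slope=0.05):
    """Draw one primary fragment of mass A and return its final (post neutron
    emission) mass and kinetic energy.  Illustrative parametrizations:
      - primary kinetic energy E ~ N(E_mean, sigma_E)
      - mean neutron number decreases linearly with E (slope, neutrons/MeV)
      - neutron-number width set here simply to beta
      - final mass m = A - n and energy e = E*(A - n)/A (assumed relations)
    """
    E = rng.normal(E_mean, sigma_E)
    n_mean = max(nu_A - slope * (E - E_mean), 0.0)
    n = max(int(round(rng.normal(n_mean, beta))), 0)
    m = A - n
    e = E * m / A
    return m, e

# 10^5 events for a hypothetical light fragment with A = 96:
final = np.array([sample_final_fragment(96, 100.0, 5.0, nu_A=1.2)
                  for _ in range(100_000)])
print(final[:, 0].mean(), final[:, 1].mean())  # mean final mass and kinetic energy
```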
The final fragment mass (m) and kinetic energy (e) are then calculated from the sampled number of emitted neutrons, using the same relations as in [13]. These input data on the primary quantities of the fragments, and this calculation of the final fragment mass and kinetic energy, were already assumed in [13] to reproduce the distribution of mass and kinetic energy of final fragments from the 235U(n_th, f) reaction obtained by D Belhafaf et al at the LOHENGRIN mass separator [14]. In this work, for the primary fragments we suppose that the yield of charges as a function of kinetic energy is constant and equal to that corresponding to cold fission (e = 108.5 MeV) as measured by W Lang et al for 235U(n_th, f) [5]. For the final fragments, because no emission of charged particles is assumed, the final fragment charge is equal to the primary one. Nevertheless, due to neutron emission, the mass and kinetic energy of a final fragment are different from the corresponding values of the primary fragment. As the output of the simulation, for the final fragments we obtain the distribution of mass (m), charge (z) and kinetic energy (e).

Results

The output of the simulation is compared with the experimental data from the 235U(n_th, f) reaction obtained by W Lang et al [5].

Conclusion

Using the Monte Carlo method we have simulated the emission of neutrons by the primary fission fragments from the 235U(n_th, f) reaction, and the measurement of the mass and kinetic energy of the final fragments as carried out by W Lang et al at the LOHENGRIN mass separator [5]. Assuming, for a given primary fragment mass, a constant yield of charge (equal to that measured by W Lang et al in cold fission) as a function of kinetic energy as the input of the simulation, the output shows that, for a given final mass, the charge that has the highest yield in the cold fission region has a yield that is an increasing function of kinetic energy in all other regions. Conversely, the yield of the least probable charge in cold fission is a decreasing function of kinetic energy in all other regions. This behaviour is observed in the experimental data obtained by W Lang et al [5], and the output of our simulation suggests that it is due to neutron emission. As a consequence, our simulation suggests that, for a given primary fragment mass, the charge yields are mostly independent of kinetic energy and close to the charge yield corresponding to the cold fission region, where the Coulomb effect (the preference for the more asymmetric charge split between two isobaric fragmentations with similar Q-values) was observed. In other words, the Coulomb effect would be valid in all other kinetic energy regions as well. Unfortunately, we are not aware of a comparable calculation of the yield of charge as a function of the kinetic energy of final fragments with which to compare our simulation results.
2019-04-09T13:10:40.118Z
2018-08-16T00:00:00.000
{ "year": 2018, "sha1": "ce90c9c3b2a75faff8f5b20daf9ab04a417192df", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2399-6528/aac07b", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "42c86ae9198e9c099895caf75c61e86cce1bfcb6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
136650487
pes2o/s2orc
v3-fos-license
Vacuum-evaporable spin-crossover complexes : physicochemical properties in the crystalline bulk and in thin films deposited from the gas phase † ‡ Four analogues of the spin-crossover complex [Fe(H2Bpz2)2(phen)] (H2Bpz2 = dihydrobis(pyrazolyl)borate; 2) containing functionalized 1,10-phenanthroline (phen) ligands have been prepared; i.e., [Fe(H2Bpz2)2(L)], L = 4-methyl-1,10-phenanthroline (3), 5-chloro-1,10-phenanthroline (4), 4,7-dichloro-1,10-phenanthroline (5), and 4,7-dimethyl-1,10-phenanthroline (6). The systems are investigated by magnetic susceptibility measurements and a range of spectroscopies in the solid state and in thin films obtained by physical vapour deposition (PVD). Thermal as well as light-induced SCO behaviour is observed for 3–6 in the films. By contrast, thermal SCO in the solid state occurs only for 3 and 4 but is absent for 5 and 6. These findings are discussed in the light of cooperative and intermolecular interactions. Introduction The development of molecular switches opens up new applications in spintronics and data storage. 1,2An important aspect of this research area refers to spin-state switching, which is based on electronic bistability in spin-crossover (SCO) compounds. 3][9][10] In these compounds the iron(II) center is surrounded by bidentate ligands with the positive charge being compensated by two H 2 Bpz 2 -ligands (Fig. 1).Upon cooling from room temperature to 160 K, spin crossover from 5 T 2g to 1 A 1g occurs, and below 50 K light-induced spin state switching (LIESST) into a metastable T 2g state is possible. Recently, we performed valence-band photoemission studies on ultrathin films (B6 monolayers) on Au(111) of [Fe(H 2 Bpz 2 ) 2 (phen)] (2) deposited from the gas phase. 11Vacuum-UV light induced excited spin state trapping (VUVIESST) was observed at temperatures below 50 K.By additionally irradiating the sample with green light the steady-state spin-transition temperature at which g HS = g LS = 0.5 could be shifted from 37 K to 99 K.Moreover, mono-and submonolayers of 2 were prepared by thermal evaporation and investigated by high-resolution STMtopography at 5 K. Electron-induced excited spin state trapping (ELIESST) was observed for single molecules of 2 in a double layer on Au(111).STS indicated a change in the HOMO-LUMOgap from B2 eV in the low-spin state to a much smaller value in the high-spin state, in agreement with DFT calculations. 1,11The composition of the first layer of 2 on Au(111) was further investigated by thermal and angle-dependent near-edge X-ray absorption fine structure (NEXAFS).Importantly, an isotropic Fe L 2,3 XA spectrum was obtained that reflected a high-spin state over the full temperature range, and angle dependent nitrogen K-edge XAS indicated an orientation of 1,10-phenanthroline of about 161 with respect to the surface.This information and Fig. 1 The complexes [Fe(H 2 Bpz 2 ) 2 (L)] investigated in this work, based on functionalized 1,10-phenanthroline ligands (3-6). 
One possible strategy to prevent decomposition of [Fe(H 2 Bpz 2 ) 2 (phen)] (2) on Au(111) is to reduce the interaction of this complex with the surface.This may be achieved by attaching substituents to the phen and bipy ligands.In the literature [Fe(H 2 Bpz 2 ) 2 (L)] complexes with annulated bipyridyl co-ligands or bipy/phen ligands functionalized by diarylethene or p-radical ligands have been reported; 12,13 however, the properties of these systems with respect to thermal deposition have not been described.Herein, we investigate the influence of a chemical modification of the 1,10-phenanthroline ligands on the physicochemical properties of [Fe(H 2 Bpz 2 ) 2 (phen)] (2).In particular, we want to study how methyl and chlorine substituents on the 1,10-phenanthroline ligand of 2 affect its SCO properties in the bulk and in thin films (Fig. 1).To this end we have prepared four analogues of 2 containing functionalized 1,10-phenanthroline ligands; i.e., [Fe(H 2 Bpz 2 ) 2 (L)], L = 4-methyl-1,10-phenanthroline (3), 5-chloro-1,10-phenanthroline (4), 4,7-dichloro-1,10phenanthroline (5), and 4,7-dimethyl-1,10-phenanthroline (6).Information on the spin crossover behaviour of 3-6 is derived from temperature-dependent susceptibility measurements, Mo ¨ssbauer spectroscopy and single crystal structure determination.Moreover, vacuum deposited films of 3-6 have been fabricated by thermal evaporation, and the thermal SCO, LIESST and reverse-LIESST characteristics 14 are studied by temperature-dependent optical transmission spectroscopy.Finally, infrared and resonance Raman spectroscopy as well as synchrotron-based XA-spectroscopy are performed on microcrystalline powders and vacuum-deposited films.The results are discussed in the light of cooperative and intermolecular interactions which are present in the crystalline bulk material but absent in vacuum-deposited films. X-ray crystallography Following the method reported for the synthesis of 1 and 2 microcrystalline powders were obtained for compounds 3-6 with methanol as the solvent. 4,15Attempts to crystallize these complexes from other solvents did not yield suitable single crystals either; only for 6 single crystals could be obtained using a toluene-n-hexane mixture.Compound 6Á0.5 C 7 H 8 crystallizes in the space group P-1 with Z = 2 molecules in the unit cell.The asymmetric unit consists of one complex in the general position and one half of a toluene molecule, which is located on the center of inversion and shows a random orientation.At room-temperature the disorder could not be refined; therefore, additional datasets were measured at 200 K and 110 K (ESI, ‡ Table S1). In the crystal structure of 6Á0.5 C 7 H 8 the Fe(II) cations are coordinated by two H 2 Bpz 2 anions and one 4,7-dimethyl-1,10phenanthroline ligand within a slightly distorted octahedral geometry.The FeN-bond distances are between 2.222-2.170Å at 293 K and thus are in a range expected for Fe(II) in a highspin configuration.Upon cooling only small changes in the Fe-N distances are observed (2.2210-2.1790Å at 200 K and 2.220-2.1693Å at 110 K), indicating that no SCO occurs in this temperature range.The discrete complexes are arranged into dimers by intermolecular face-to-face p-p-interactions between 4,7-dimethyl-1,10-phenanthroline ligands of neighbouring complexes (Fig. 2a, b).The interplanar distance between these ligands amounts to 3.507 Å.Similar dimers have also been observed by Real et al. 
in the crystal structure of 2, which exhibits SCO with a T 1/2 of 164 K.However, in 2 this distance is B10% longer (3.936Å; Fig. 2c). 4 Interestingly, Halcrow et al. reported an interplanar distance of 3.485 Å and 3.459 Å between the phenazine ligands of [Fe(H 2 Bpz 2 ) 2 (dipyrido[3,2-a:2 0 ,3 0 (6,7,8,9tetrahydro)phenazine)]], which remains in the high-spin state between 300 K and 70 K, suggesting that a short dimer-dimer distance might be correlated with a lack of SCO behavior. 12The short phen-phen distance in 6Á0.5 C 7 H 8 appears to be the result of perfect stacking (Fig. 2b).In particular the methyl groups of one phen ligand are exactly positioned above the center of one C 6 ring in a neighbouring phen ligand such that the two phen ligands (and the attached complex units) get closely interlocked.In 6Á0.5 C 7 H 8 the angle f(N 1 -Fe-N 4 ) of 97.76 is significantly enlarged with respect to 2 (f(N 1 -Fe-N 4 ) = 92.47),leading to a distortion of the FeN 6 core.This may also be a result of strong intermolecular face-to-face p-p-interactions, leading to steric repulsion between a 4,7-dimethyl-1,10-phenanthroline This journal is © The Royal Society of Chemistry 2015 ligand and a neighboring complex (cf.Fig. 2a, red).A similar repulsion appears to be absent in 2 (Fig. 2c red) where the interplanar phen-phen distance is longer. While no single crystal data could be obtained for 3-6, all of these compounds were characterized by X-ray powder diffractometry (cf.ESI, ‡ S2).Not surprisingly, the XRPD pattern of 6Á0.5 C 7 H 8 does not correspond to that of 6.However, the XRPD pattern of compound 6 is very similar to that of 5, indicating that both compounds are isotypic.This can be explained by the fact that in compound 6 the two chloro substituents in 5 are exchanged by methyl groups which exhibit similar van der Waals radii (the so-called chloro-methyl exchange rule). 16ermal spin-crossover in the solid state In order to analyse the spin state of iron(II) centres in compounds 3-6 magnetic susceptibility measurements were performed.Plots of the product w M T vs. the temperature (T) are given in Fig. 3; the thermal transition temperatures T 1/2 are summarized in Table 1.Compound 4 shows a fairly steep spin transition from 0.19 cm 3 K mol À1 at 10 K to 3.54 cm 3 K mol À1 at 300 K with a T 1/2 of 151 K.For 3, a less abrupt spin transition from 0.21 cm 3 K mol À1 at 10 K to 3.54 cm 3 K mol À1 at 300 K with a transition temperature of T 1/2 = 165 K is found.Both 3 and 4 have about 5% high-spin contribution at low temperatures.Compounds 5 and 6, in contrast, are predominantly high spin; i.e., the susceptibility sharply increases in the range below 20 K and remains at values around 3.5 cm 3 K mol À1 for 5 and 6 upon further increasing the temperature to 300 K.The hightemperature w M T values of 3-6 are considerably larger than expected for pure S = 2 systems (w M T = 3.02 cm 3 K mol À1 ), which is caused by spin orbit coupling. To obtain further information on the spin-state of the iron centers, Mo ¨ssbauer spectra were recorded at 300 K and 80 K (Fig. 
4 and Table 1).At 300 K the spectra of 3 and 4 with monosubstituted 1,10-phenanthroline ligands show a doublet with d = 1.00 mm s À1 , indicative of high-spin iron(II) centers.At 80 K the isomer shift decreases to d = 0.53 mm s À1 , typical for low spin iron centers.In both compounds an amount of B5% high spin species is observed at 80 K, in agreement with the magnetic data.Compounds 5 and 6 with difunctionalized ligands exhibit isomer shifts of d = 1.00 and 0.96 mm s À1 , resp., at 300 K and d = 1.15 and 1.09 mm s À1 , resp., at 80 K, indicative of HS configurations at both temperatures. The thermal spin crossover in the solid state thus can be summarized as follows: compounds 3 and 4 are typical spin crossover compounds like compound 2 (T 1/2 = 163 K). 4,7 In 4, the electron withdrawing effect of the chlorine group lowers the transition temperature to T 1/2 = 151 K while the transition temperature is increased to T 1/2 = 165 K in 3 due to the electron-donating effect of the methyl group.In the case of the difunctionalized compounds 5 and 6 the 5 T 2 state is stabilized down to 20 K; i.e., the spin transition of the parent complex 2 becomes largely suppressed in the crystalline bulk material. Spin-crossover in vacuum-deposited films All compounds can be evaporated in a vacuum to obtain films on quartz substrates.Infrared spectra of the films are found to be very similar to those recorded for the bulk material (ESI, ‡ S3 and S4); an example is given in Fig. 5 for compound 6.The symmetric and antisymmetric B-H vibrations, e.g., are detectable in the bulk as well as in the film at n asym (B-H) = 2416 cm À1 and n sym (B-H) = 2277 cm À1 , which clearly indicates a successful thermal deposition without decomposition.Similar observations apply to compounds 3-5.To monitor the spin crossover in the films temperature dependent UV/vis absorption spectra were measured.For comparison microcrystalline powders of 3-6 dispersed in KBr-pellets were investigated (Fig. 6). Table 1 Thermal transition temperatures & Mo ¨ssbauer fitting parameters The metal-to-ligand charge-transfer (MLCT) bands of 2 at 500-650 nm are more intense in the LS than in the HS state. 3,7or 3, the MLCT band centred at 550 nm at 300 K similarly evolves into a more intense three-band pattern with maxima at 526, 567 and 624 nm at 80 K, both in KBr and in the film.Similar observations are made for 4. Importantly, the films of 3 and 4 exhibit the LIESST effect; i.e., at 5 K the low-spin state can be converted back to the high-spin state by irradiation at 519 nm for 5 minutes.For 6, the MLCT band exhibits two maxima (522 and 568 nm) at 300 K in KBr, but the spectrum exhibits little change upon decreasing the temperature to 80 K.This is consistent with the lack of SCO determined by magnetic susceptibility measurements for this system (see above).Surprisingly, however, in the vacuum deposited film of 6 the MLCT band at 550 nm (300 K) evolves into a much more intense twoband pattern (maxima at 568 and 615 nm) at 5 K, indicating a transition from the high spin to the low spin state. By irradiation at 519 nm at 5 K, the low-spin state can be converted to the high spin state, and this spin-state switching can be reversed to a certain degree by irradiation at 810 nm (reverse-LIESST, see below).Compound 5 in KBr shows less intense bands at 596 nm and 685 nm, whereas in films the intensity at 600 nm and 685 nm increases to the same level as in KBr.In Fig. 7 the highspin fraction calculated by the method applied previously 7,17 is plotted vs. 
the temperature.In the bulk material of 5 and 6 in KBr the spin transition is largely suppressed.On the other hand, all films show thermal SCO behaviour and exhibit the LIESST-effect by irradiation at 519 nm for 5 min at 5 K.For all systems, g HS values of B82-96% can be achieved.The critical LIESST-temperatures are T C = 52-54 K, which are 8-10 K higher than that for 2. 5,7 The reverse-LIESST effect is demonstrated in 6 under irradiation at 810 nm for 30 min, leading to a decrease of g HS from 96 to 74%.The thermal spin transition of 4 is more gradual in the vacuumdeposited film than in the bulk material (cf.Fig. 3) which can be attributed to a decrease of cooperative interactions. 14,18This has already been noticed in our study of the parent compound [Fe(H 2 Bpz 2 )(phen)] (2). 7As a matter of fact, the thermal spin crossover of a film of 2 is very similar to that of 2 embedded in polystyrene (ESI, ‡ Fig. S5).The observation of a spin transition in the films of 5 and 6 also appears to be due to a reduction of cooperative or intermolecular interactions which apparently ''lock'' these systems in the high-spin state in the crystalline bulk material. 19n order to obtain more information on the electronic states of 6 XA spectra at the iron L 2,3 edges were recorded. 9,20Fig. 8 shows the temperature variation for a thin film (B4 ML) on HOPG (a) and a bulk material crimped in indium foil (b); roomtemperature spectra are plotted in red and 80 K spectra in blue.At 300 K, the iron L 3 edge exhibits a typical double-peak structure with maxima at 708.4 eV and 709.2 eV, indicative of HS-Fe(II). 9,20At 80 K, the intensity of the 708.4 eV peak decreases, and the peak at 709.2 eV shifts to 709.4 eV together with a satellite peak that shifts from 711.0 eV to 711.7 eV.There is still a significant contribution of the 708.4 eV peak, characteristic for the high-spin state.The thermal spin crossover thus is not complete.Nevertheless, the SCO is more pronounced in the thin film than in the bulk powder sample, in agreement with the results from optical absorption spectroscopy.High-spin fractions were determined by fitting measured spectra with theoretical spectra obtained by multiplet calculations (cf.ESI, ‡ S6).In the thin film of B4 ML, the HS fraction determined in this way decreases from B100% at 300 K to about 58% at 80 K, whereas for the bulk sample, the HS fraction is reduced from 94% at 300 K to 73% at 80 K. From difference spectra we also conclude that the change of g HS is B2 times larger in the film than in the bulk material of 6.It must be stressed that Mo ¨ssbauer and magnetic measurements (see above) showed no contribution of the LS state at 300 K.However, Moliner et al. reported a LS fraction of 15% in 2. 5 In Fig. 9 the N-K XA spectrum of the thin film of 6 on HOPG, measured at 300 K, is shown along with the spectrum of the bulk material finely scratched onto an indium foil.The two spectra closely resemble each other, again confirming the integrity of the evaporated compound. Raman spectroscopy of the bulk material and thin films To further investigate the spin transition, temperature dependent resonance Raman spectra with excitation wavelengths of l exc = 514 nm and 647 nm were measured.In Fig. 
10 the Raman spectra of 3-6 dispersed in KBr at 300 K and 25 K are shown.At 300 K, the high-spin spectra show numerous peaks for metalligand and interligand vibrations of the pyrazole and phenanthroline units.At 25 K, these peaks get more intense; moreover, a broad and intense band with maxima at 411 cm À1 , 425 cm À1 and 444 cm À1 appears for 3 (green).Similar bands are observed for 4 (maxima at 408 cm À1 , 433 cm À1 and 447 cm À1 ), 5 (maxima at 452 cm À1 and 495 cm À1 ) and 6 (maxima at 449 cm À1 and 470 cm À1 ). Raman spectra of a similar quality could not be obtained for the vacuum-deposited films of 3-6, in contrast to the parent compounds 1 and 2. Temperature-dependent Raman spectra of 1 and 2 recorded for the bulk material dispersed in KBr are shown in Fig. 11a and b (upper three traces) along with the corresponding thin-film spectra recorded at 25 K (bottom traces).Small differences between the 300 K spectra (high-spin; red) and the 100 K (low-spin; blue) are detectable; a full analysis of these data will be presented elsewhere.At 25 K (black traces) broad intense bands emerge at 460 cm À1 for 1 and at B420 cm À1 (maxima at 401, 426 and 441 cm À1 ) for 2 (green), in analogy to compounds 3-6 (Fig. 10).Importantly, these features are absent in vacuum-deposited films of 1 and 2 (bottom traces of Fig. 11).We attribute these bands to electronic Raman transitions.In the case of compound 2 the electronic transition obviously combines with vibrational modes of the high-spin state; i.e., peaks appearing at 417 and 434 cm À1 in the room temperature spectrum (Fig. 11b red) are present as dips in the 25 K spectrum (Fig. 11b, black).For further clarification Fig. 11c shows a Gaussian profile centred at 417 cm À1 representing the electronic Raman transition.It is seen that the 25 K spectrum exhibits minima at positions where the room-temperature spectrum (enlarged) has maxima.Such antiresonance phenomena occur when transitions to a continuum of states are superposed with transitions to discrete levels. 21This appears to be the case for compound 2 as the electronic Raman band is much broader than the vibrational peaks.Similar considerations apply to compounds 3-6, exhibiting electronic Raman bands with complex band shapes as well (Fig. 10).For compound 1, on the other hand, no vibrational peaks are present in the region of the electronic Raman transition; therefore this band exhibits a conventional band shape without antiresonance dips (Fig. 11a, green). To explain the observation of an electronic Raman effect in the crystalline bulk material we assume that excitation through the Raman laser populates the high-spin state and an electronic transition occurs within the 5 T 2g state split by low symmetry into a |xi, |Zi and |zi state. 22Real et al. reported a compressed octahedral geometry for 1 and 2 based on crystal structure determination. 4 The effect of low-symmetry ligand fields and spin-orbit coupling on the magnetic behaviour of transitionmetals with the 5 T 2g ground term has been reported. 23Electronic Raman transitions between low-lying electronic states have been detected for lanthanide and transition metal ions before, especially tetraphenylporphyrinatoferrate(III) complexes. 
24 Conclusions In the present study we investigated the influence of methyl and chloro substituents on the spin transition of [Fe(H 2 Bpz 2 ) 2 (phen)] (2) in the solid state and in films deposited from the gas phase.Importantly, thin film preparation is feasible for all compounds by thermal deposition, enlarging the library of iron complexes for physical vapour deposition. 1,2,7,20,25These systems thus offer the unique opportunity to investigate the thermal as well as light-induced SCO behaviour in the absence of solid-state effects such as cooperative or intermolecular interactions. A particularly spectacular example for this aspect is the behaviour of compound 6, which shows SCO in a vacuumdeposited film but remains in the high-spin state over almost the entire temperature range in the crystalline bulk material.Although we do not have direct information about the crystal structure of solvate-free 6, we assume that the persistence of the high-spin state in the solid state is due to the formation of dimers through p-p interactions between neighbouring 4,7-dimethyl-1,10-phenanthroline ligands which have been detected in the toluene solvate of 6 and in other [Fe(H 2 Bpz 2 ) 2 (L)] complexes. 12f sufficiently strong these interactions can ''lock'' the system in the high-spin state.In microcrystalline powders of 6, these interactions are reduced.This allows an incomplete spin-transition to occur, as evidenced by optical and X-ray absorption spectroscopic measurements.In a vacuum-deposited film of 6 of several 100 nm thickness a full spin transition is observed, suggesting that the intermolecular interactions between the SCO molecules now are absent.On the other hand, for a film of B4 ML of 6 on HOPG the spin transition again becomes incomplete.We believe that this is an effect of the surface on the spin transition of this complex, as observed for other systems. 10 second hallmark of the [Fe(H 2 Bpz) 2 (L)] systems is the emergence of an electronic Raman transition at low temperature, as observed for compounds 1-6.This phenomenon, however, appears to be restricted to the crystalline bulk material because for the parent compounds 1 and 2 the corresponding bands are absent in vacuum-deposited films.As decomposition of 1 and 2 can be ruled out in the films, 7 in analogy to compounds 3-6, we assume that the special packing of these complexes in the solid state and the emergence of electronic Raman transitions are connected with each other.Further investigation of this intriguing problem is underway. Experimental All reactions were carried out in dry solvents and under an inert atmosphere.Functionalized 1,10-phenanthroline, iron(II) perchlorate hydrate and solvents were purchased commercially and used as supplied.Potassium dihydrobis-pyrazolylborate K[H 2 Bpz 2 ] and the complexes were prepared according to literature methods.The method as reported for 3, using Single crystal structure analysis Data collections for 6Á0.5 C 7 H 8 were performed at three different temperatures using an imaging plate diffraction system (IPDS-2) from STOE & CIE with Mo-Ka-radiation (l = 0.71073 Å).The structure solution was prepared by direct methods using SHELXS-97 and structure refinements were performed against F 2 using SHELXL-97. 
26All non-hydrogen atoms were refined anisotropic.The C-H and B-H H atoms were positioned with idealized geometry and refined isotropic with U iso (H) = 1.2U eq (C, B) (1.5 for methyl H atoms) using a riding model.The crystal structure contains an additional toluene molecule, which is disordered on a center of inversion.At room-temperature the disorder cannot be resolved and therefore, the data were corrected for disordered solvent using Squeeze in Platon but the toluene molecule was considered in the calculation of the molecular formula.At 200 and 110 K the disorder can be resolved and the toluene molecule was refined with a split model using restraints.CCDC 1054498 (6Á0.5 C 7 H 8 at 293 K), 1054497 (6Á0.5 C 7 H 8 at 200 K) and 1054497 (6Á0.5 C 7 H 8 at 110 K). XA-spectroscopy The measurements were carried out in situ at a pressure of 8  10 À10 mbar at the beamline UE56/2-PGM-1 of BESSY II.The photon flux at the sample position was about 10 13 photons per s per cm 2 , with the energy resolution of the beamline set to 200 meV.XA spectra were recorded at the magic angle of 54.7 between the surface and the k vector of the linearly p-polarized X-rays.The absorption was measured in the total electron yield mode, where the sample drain current is recorded as a function of photon energy.The XA spectra were normalized with respect to a gold grid upstream to the experiment, and to the background signal from a clean HOPG substrate.The HOPG substrate (12 mm  12 mm) with a mosaic spread angle (0.4 AE 0.1) was purchased from Structure Probe.A clean HOPG surface was obtained by cleaving away layers of the surface in a vacuum (10 À6 mbar) using carbon tape.The bulk sample was prepared by finely scratching molecular powder onto indium foil.The thin film (4 ML) was prepared by evaporating the molecular powder from a tantalum Knudsen cell at about 460 K onto the substrate held at RT.The thickness of the film was monitored using a quartz microbalance during the evaporation. Other measurements Elemental analyses were performed using a Euro Vector CHNS-O-element analyser (Euro EA 3000).Samples were burned in sealed tin containers by a stream of oxygen.IR spectra were recorded on a Bruker Alpha-P ATR-IR Spectrometer.The magnetic measurements were performed using a physical property measurement system (PPMS, Quantum Design) with a magnetic field strength of 1 T. Diamagnetic corrections were applied with the use of the tabulated Pascal's constants.Mo ¨ssbauer measurements were recorded using a self-assembled spectrometer using standard transmission geometry.XRPD of the bulk material were recorded on a X'PERT PRO PANalytical instruments with a Go ¨bel mirror and a PIXcel detector using Cu radiation.For Raman spectroscopic measurements a Dilor XY-Raman spectrometer (Horiba) was used with an Ar + /Kr + mixed gas laser (Spectra Physics GmbH) operating at 647 and 514 nm.The compounds were crimped in KBr or evaporated as films on quartz discs or on Au/Ti/glass substrates (thickness several 100 nm) under the same conditions as reported in ref. 7. 
Au/Ti/glass substrates with a 50 Å titanium base layer and a 1000 Å evaporated gold film were purchased from EMF Corporation (Ithaca, NY). UV/Vis spectra were recorded using a Cary 5000 spectrometer in transmission geometry. For temperature-dependent measurements a CryoVac cryostat with liquid nitrogen or helium cooling was used. For the illumination experiments, three Luxeon LXML-PM01-0080 LEDs (519 nm) and one Roithner Laser Technik APG2C1-810 LED (810 nm), supplied by Sahlmann Photochemical Solutions, were used.

Fig. 5 Physical-vapour-deposited film of 6 on a quartz disc at 300 K (a) and 80 K (b). FT-IR spectra (c) of the bulk material (black dotted) and the vacuum-deposited material (red) of 6 at 300 K.

Fig. 6 Temperature-dependent UV/Vis absorption spectra of 3, 4, 5 and 6 in KBr pellets and as films on a quartz disc. LIESST and reverse-LIESST are demonstrated in the films under irradiation at 519 nm (green lines) or 810 nm (orange line).

Fig. 8 Temperature-dependent iron L_2,3 XA spectra of 4 ML of 6 on HOPG (a) and of a bulk powder sample crimped in indium foil (b) at 300 K (red lines) and 80 K (blue lines).

Fig. 9 N-K XA spectrum of 4 ML of 6 on HOPG (black line) compared with the bulk material finely scratched onto indium foil (grey dotted line) at 300 K.

Fig. 11 Temperature dependence of the resonance Raman (λ_exc = 647 nm) spectra of 1 (a) and 2 (b) at 300 K (red lines), 100 K (blue lines) and 25 K (black lines) in KBr pellets, and of vacuum-deposited films (λ_exc = 514 nm) on Au/Ti/glass at 25 K (black lines, bottom). Resonance Raman (λ_exc = 647 nm) spectra of 2 (c) in KBr at 300 K (red line, enlarged ×15) and 25 K (black line), and the Gaussian profile of the 25 K spectrum (grey area). The electronic Raman transition bands are coloured green.
Post operative pain management in shoulder surgery: Suprascapular and axillary nerve block by arthroscope assisted catheter placement. BACKGROUND Postoperative pain management is part of shoulder surgery to improve patient satisfaction, start the rehabilitation process rapidly and decrease hospital stay. Various treatment modalities have been used for pain management, but they have some limitations, side effects and risks. Throughout the intraoperative and postoperative period, nerve blocks have been used more than other modalities because of their efficacy. For a regional nerve block, local anesthetic should be infiltrated close to the nerve for maximum effect. Consequently, the aim of this study was to evaluate analgesic efficacy when catheters are placed with the assistance of an arthroscope to block the suprascapular and axillary nerves in patients undergoing arthroscopic repair of the rotator cuff under general anesthesia. MATERIALS AND METHODS 24 patients (5 males, 19 females; mean age: 54.3 years) who underwent arthroscopic repair of the rotator cuff between June 2014 and September 2014 and were catheterized to block the suprascapular and axillary nerves during shoulder arthroscopy were included in the study. Clinical outcomes were assessed using visual analog scale (VAS) scores preoperatively and at 0 h, 6 h, 12 h, 18 h, 24 h, and postoperative day 2. RESULTS Preoperative and postoperative 0 h, 6 h, 12 h, 18 h, 24 h, and day 2 mean VAS scores were 6.38 ± 0.77, 0.44 ± 0.42, 0.58 ± 0.42, 0.63 ± 0.40, 0.60 ± 0.44, 0.52 ± 0.42, and 1.55 ± 0.46, respectively. No statistical difference was found among the 0 h, 6 h, 12 h, 18 h, and 24 h time points; however, comparison of postoperative day 2 with postoperative 0 h, 6 h, 12 h, 18 h and 24 h VAS scores showed a statistically significant difference (P < 0.05). All patients were discharged at the end of 24 h with no complication. The mean times (in minutes) required for blocking the suprascapular nerve and the axillary nerve were 14.38 ± 3.21 and 3.75 ± 0.85, respectively. CONCLUSION These results demonstrated that blocking the two nerves with an arthroscopic approach was an excellent pain management method in the postoperative period. Accordingly, patients could recover rapidly and patients' satisfaction could be improved. on multimodal analgesic agents. 1 For pain management, numerous treatment modalities have been described to date; however, they have some limitations, side effects, and risks. Nonsteroidal anti-inflammatory drugs (NSAIDs) can cause reduced platelet function, prolonged bleeding time and gastric ulceration. 2 Opioids can lead to nausea, vomiting, sedation, constipation and intestinal ileus. Intraarticular (IA) local anesthetic injections alone might not be enough to reduce pain, and the efficiency of IA local anesthetic or morphine remains controversial. 2,3 Although interscalene block (ISB) has been used for intraoperative anesthesia and postoperative pain management, it has serious side effects such as inadvertent epidural and spinal anesthesia, spinal cord injury, brain damage, brachial plexus injury and paralysis of the vagus and laryngeal recurrent nerves as well as the cervical sympathetic nerve, and pneumothorax. The effectiveness of ISB is correlated with the anesthetist's skill level. 4 Recently, suprascapular nerve block (SSNB) and axillary nerve block (ANB) have been used for intraoperative anesthesia and postoperative pain management in shoulder arthroscopy, especially for rotator cuff repair; however, the procedure is technically challenging and the success rate varies widely.
5 Successful ambulatory surgery depends on analgesia and it is effective and has minimal adverse effects. 3 For blocking nerve effectively, local anesthetic should be infiltrated close to the nerve. Although various techniques have been described SSNB and ANB, none of them could achieve effective pain management. 3,4,6,7 Our technique has allowed blocking nerves and placing catheters as close to the nerve as possible. The aim of this study was to evaluate postoperative analgesic efficacy of suprascapular and ANBs in shoulder arthroscopy for patients undergoing arthroscopic repair of rotator cuff under general anesthesia. We hypothesized that suprascapular and ANB would alleviate postoperative pain and reduce requirement of analgesic drugs, thus decreasing side effects of medicaments and problems arising out of the technique. Hence, all these benefits would improve patient satisfaction and permit early postoperative shoulder rehabilitation. MAtEriAls And MEthods Twenty four consecutive patients who were diagnosed with medium or large cuff tear with retraction <2 cm were treated by shoulder arthroscopy with arthroscopy guided suprascapular and axillary nerve blocks between June 2014 and September 2014. The inclusion criteria were as follows: (1) Substantial pain (no posterior pain) and functional limitation, (2) retraction <2 cm, (3) history of more than 6 months and (4) failure of nonsurgical treatment modalities. Informed consent was obtained from all the patients [ Table 1]. Operative procedure Surgeries were carried out under general anesthesia in the beach chair position by the same senior surgeon. The arthroscopy was achieved using standard posterior "soft spot," lateral and anterolateral portals for evaluating glenohumeral joint and subacromial space. After joints were explored, an 18-gauge epidural needle (Smiths Medical ASD, Inc. Keene, USA) was advanced from the rotator interval to joint, with outside-in technique, and directed toward anterior part of inferior middle glenohumeral ligament and advanced 5 mm into the joint capsule for ANB and catheter was advanced through the needle. After location, the catheter was verified with arthroscopic approach (between 4:30 and 7 o'clock radius for right shoulder and between 5 and 7:30 o'clock for left shoulder), we gave half portion of solutions prepared, which were composed of 10cc of 0.5% bupivacaine hydrochloride (marcaine 0.5%, AstraZeneca Inc., London (UK), Turkey), for ANB. Figure 1a-d shows images obtained during arthroscopy for ANB. After glenohumeral joint was explored and axillary nerve (AN) was blocked through the posterior portal, the arthroscope was introduced into the subacromial space through the posterior portal. We used arthroscopic techniques to block suprascapular nerve (SSN), which was described in 2007 by Lafosse et al., 8 for releasing entrapment of SSN at the suprascapular notch with arthroscopic method. According to this technique after anteromedial bursa was removed to provide access to the suprascapular notch using shaver and radiofrequency (RF), the scope was introduced into the subacromial space through the lateral portal and shaver and RF device was introduced through the anterolateral portal to complete removal of bursal tissue. This step was done first due to swelling; subacromial decompression, biceps tenotomy or tenodesis, and rotator cuff repair were done after SSNB. First, coracoacromial ligament was identified, and its trace was followed down the base of the coracoid. 
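As a rough illustration of the statistical comparison described above (the study used the Mann-Whitney U-test in SPSS), an equivalent non-parametric test can be run in Python with SciPy. The VAS values below are made-up illustrative numbers, not the study's raw data, and the threshold mirrors the stated significance level of P < 0.05.

```python
from scipy.stats import mannwhitneyu

# Illustrative VAS scores only (not the study data): postoperative 0 h vs. day 2
vas_0h = [0.4, 0.5, 0.3, 0.6, 0.4, 0.2, 0.7, 0.5]
vas_day2 = [1.5, 1.7, 1.2, 1.9, 1.4, 1.3, 1.8, 1.6]

# Two-sided Mann-Whitney U-test comparing the two time points
u_stat, p_value = mannwhitneyu(vas_0h, vas_day2, alternative="two-sided")
print(f"U = {u_stat:.1f}, P = {p_value:.4f}")  # P < 0.05 would indicate a significant difference
```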
Next, coracoclavicular ligaments (conoid and trapezoid) were identified with posterior and medial dissection. Medial border of those ligaments at the base of the coracoid defined lateral insertion of the superior transverse scapular ligament (TSL). The TSL was identified as the medial continuity of the conoid ligament above the scapular notch. The suprascapular artery was easily visualized superior to the ligament, and the SSN was identified as it travels underneath the ligament. Once TSL was adequately visualized, an 18-gauge epidural needle was advanced below the TSL and through medial border of transverse scapular notch to place the catheter for blocking SSN. When the epidural needle was oriented correctly, the catheter was advanced into the needle and the needle was drawn back slowly and catheter position was visualized immediately below the TSL and medial border of the coracoid. After the catheter was arthroscopically confirmed to be in the accurate location, we gave half portion of prepared solutions, which were composed of 10cc of 0.5% bupivacaine hydrochloride, for SSNB. Figure 1e-h shows the images obtained during arthroscopy for SSNB. After blocking of the two nerves, the rotator cuff tear was mobilized and repaired using suture anchors. All patients' tears were repaired double-row anchor techniques and four biceps tenotomy and two biceps tenodesis were performed. All patients have performed subacromial decompression because of fraying of coracoacromial ligament and impingement. To complete the procedure, the portals were closed with an absorbable subcutaneous suture. Eventually, remaining portions of prepared solutions were given through the catheters, which were placed for blocking of SSN and AN. Lastly, a velpeau bandage was used. Postoperative management We give 5cc of 0.5% bupivacaine hydrochloride through both catheters 6 hourly up to end 24 h in postoperative period. At the end of 24 h, we gave the last dose of 0.5% bupivacaine hydrochloride with 40 mg of methylprednisolone acetate (Depo-Medrol, Pfizer Inc., New York (USA), Turkey) and catheters were removed at the end of 24 h. We evaluated patients' satisfaction using VAS before additional bupivacaine hydrochloride doses were given. During this study, patient did not require extra analgesic dose for pain relief. Before patients were discharged, we prescribed NSAID for pain management, but no patient required drug for pain reduction. All patients were mobilized at postoperative first 3 h. Statistical analysis was performed using Mann-Whitney U-test. The confidence level was 95%, and significance was set to P < 0.05. Analyses were conducted using SPSS version 15 for Windows (SPSS Inc., Chicago, IL, USA) software. Table 3]. These results demonstrate that intraoperative blockage of two nerves provided excellent pain relief in postoperative period. All patients were discharged without any complication at the end of 24 h. And they were seen on postoperative day 2 to change dressing and to evaluate pain with VAS scores. discussion Postoperative pain management is the most important part of the shoulder surgery to facilitate convalescence, shorten hospital stay and start rehabilitation exercise earlier. 5,9 After rotator cuff surgery, Boss et al. 2 emphasized that severe postoperative pain was seen within first 48 h. NSAIDs, opiate analgesic drugs, patient-controlled analgesia (PCA), IA injections of morphine or local anesthetics, and nerve blocks such as ISB, SSNB, or ANB are commonly used for reducing postoperative pain. 
These treatment modalities can be used alone or in combination. Recently, regional nerve blocks have been a more popular technique than NSAIDs, opiate analgesic drugs, PCA and IA injections. Blocks reduce both intraoperative and postoperative pain efficiently in arthroscopic shoulder surgery. Complications such as vomiting, nausea, sedation, or unsatisfactory analgesic effects cannot be observed. 10,11 The ISB has turned into a preferred technique for intraoperative anesthesia and postoperative analgesia worldwide. Especially, continuous ISB block via a catheter after shoulder arthroscopy has reduced pain effectively in comparison with other techniques. However, this technique has been associated with potential side effects and complications, such as rebound pain, phrenic nerve palsy respiratory distress, or diaphragmatic paresis. [12][13][14] The combination of SSNB and ANB has been also used effectively for anesthesia in shoulder arthroscopy, 6 and these blocks have provided safe analgesia in intraoperative and early postoperative periods. However, landmarks of SSN and AN could not have been described accurately so far. The philosophy of regional nerve blocks is that the local anesthetic should be infiltrated close to the nerve to the maximum extent. 5 Therefore, the landmarks of the nerves should be identified precisely. Shoulder is innervated by SSN, AN, and lateral pectoral nerve. Posterior and superior parts of joint capsule are innervated by SSN. Anteroinferior part of joint capsule is innervated by AN. Anterosuperior part of joint is innervated by lateral pectoral nerve. The SSN and AN carries almost all sensorial impulses to and from shoulder. Hence, contribution of lateral pectoral nerves might remain unnoticed for rotator cuff surgery. 4,5,15 Accordingly, SSN and AN blocks provide effective management of pain in postoperative course of arthroscopic rotator cuff surgery. Anatomies and traces of nerves and location of sensorial branches of the SSN and AN should be well known to carry out block anesthesia in intra-and postoperative pain management. The SSN originates from the superior brachial plexus as a sensory-motor nerve, close to Erb's point. 8 It crosses the posterior triangle of the neck to the scapular notch, goes on deep to the trapezius and omohyoid muscles and then follows the suprascapular artery to the notch. The suprascapular notch is a bony depression medial to the base of the coracoid process with its superior aspect roofed by the TSL. The artery passes over the TSL, whereas the nerve passes underneath this ligament. 4,15,16 Rarely, both of them can pass underneath TSL. 17 At an average of 4.5 cm proximal to the TSL, a relatively large superior articular branch separates from the main stem and runs along with it to enter the suprascapular notch underneath the TSL at its most lateral aspect. Immediately after entering the suprascapular notch, the SSN turns laterally around the base of the coracoid process, to which it consistently releases small periosteal twigs and a small branch to the coracoclavicular ligaments. 15,18 The main articular branch then advances laterally in the interval between the dorsum of the coracoid and the suprascapular muscle, which is filled with fat and connective tissue and splits into 2 terminal branches. 
One of them descends to innervate the coracohumeral ligament and its adjacent capsular region, and the other splits into several small branches innervating the subacromial bursa and the posterior aspect of the acromioclavicular joint capsule. The main stem of the SSN traverses underneath the TSL into the suprascapular fossa and releases the main muscular branch to the supraspinatus muscle shortly after this passage, which takes off medially. At the level of the scapula spine, a relatively large constant inferior articular branch separates laterally and travels obliquely toward the posterior joint capsule. On its course, this inferior articular branch releases several small branches that deviate upward and downward to terminate where the tendon of the infraspinatus muscle merges with the posterior joint capsule and rotator cuff. The SSN then terminates by innervating the infraspinatus muscle. 8,15,16 According to these anatomic pictures, under the TSL is optimal place for blocking SSN because of initial point for separation of sensorial braches of joint. During arthroscopy, we placed the epidural needle underneath the TSL and advanced catheter into the needle near the SSN, so blocking was achieved. The AN originates from the spinal cord at the C5 and C6 level with occasional contribution from the C4 position. It is branch of the posterior cord of the brachial plexus, lateral to the radial nerve, and posterior to brachial artery. 4 Along its course across the subscapular muscle, the AN releases its first articular branch, which slowly separates itself from the main stem as it runs to the inferior-anterior joint capsule. As the AN enters the fat and connective tissue near the lower edge of the subscapular muscle, it splits into its 2 main branches. The medial branch mainly supplies branches for the scapular aspect of the inferior anterior capsule and parts of the axillary recess, whereas the lateral branch runs along the inferior edge of the subscapular muscle to finally innervate the humeral parts of the anterior capsule. The muscular branch, which innervates the teres minor, issues a small articular branch at the level of insertion of the long head of the triceps to the lateral axillary recess. 15,19 According to Uno et al.,20 the AN stayed in the middle third of the "capsular hammock" between the glenoid and humeral neck and it has an intimate relation with the shoulder capsule between the 5 and 7 o'clock (right shoulder) positions. Eakin et al. 21 reported that the nerve was closest to the glenoid at the 4:30 O'clock position. Price et al. 22 reported that AN lies closest to the glenoid at the 6 o'clock position, and the AN travels at a fixed distance from the inferior glenohumeral ligament throughout its course, and its average distance from the inferior glenohumeral ligament is 2.5 mm. The study of Bryan et al. 23 showed that AN average distance from the inferior glenohumeral ligament is 3.2 mm. According to these anatomic descriptions, anterior shoulder capsule between the 4:30 and 7 o'clock (right shoulder) positions is optimal place for blocking AN because of the initial point of separation of sensorial braches of joint. During arthroscopy, we placed the epidural needle to the anterior joint capsule between 4:30 and 6 o'clock position and advanced the needle 5 mm through the joint capsule and then advanced catheter into the needle to block AN. 
In this study, TSL was not resected because of having no retraced rotator cuff tears more than 2 cm and no posterior shoulder pain with special test described by Sahu et al. 24 Yamakado 1 reported that rotator cuff repair with placed pain catheter adjacent to the SSN via arthroscopically was highly effective in controlling postoperative pain. In that study, TSL release was performed on each patient during the surgery. Checcucci et al. 4 report that 20 consecutive patients underwent arthroscopic procedures for shoulder cuff diseases were performed combined SSNB and ANB using the identified landmarks; however, general anesthesia was not performed on any patients. According to this study, combined blocks were adequate for intraoperative anesthesia and postoperative analgesia for certain procedures of shoulder arthroscopic surgery. Our VAS results were similar in this study; however, our VAS score was lower. As emphasized in literature, 1,[3][4][5]7,9,18 the outcomes performed combined block of SSN and AN provide good pain relief for postoperative periods. Patient satisfaction is increased by this way. We performed blocking of SSN an AN during surgery by monitoring, so regional nerve blocks philosophy were performed as close to nerve as possible. In the literature, [2][3][4][5][6][7]18,25,26 there is no consensus about used kinds, mixtures and combination of local anesthetic agents and combinations with other drugs such as cortisone. First, we used 10cc of 0,5% bupivacaine hydrochloride for blocking SSN and AN in shoulder arthroscopy, after that we used 5cc of 0,5% bupivacaine hydrochloride in each catheter respectively during 6 h intervals up to 24 h. At the end of the 24 h, we used last doses with combined 40 mg methylprednisolone acetate and we removed the catheters. These mixtures and combinations of local anesthetic agent with cortisone provide effective analgesia after shoulder surgery. We did not need to give additional analgesic drugs such as NSAIDs, opioids or PCA. Our study has some limitations such as small case number and no control or comparison groups. Besides, learning curve was decreased with time (required mean time [minutes] for blocking of SSN and AN: 14.38, 3.75, respectively). We think that blocks should be done at the beginning of the surgery because of swelling of tissue. conclusion We obtained good comparable results with the literature about reduction of postoperative pain and provided rapid recovery and rehabilitation. Financial support and sponsorship Nil. Conflicts of interest There are no conflicts of interest.
Survey Kinship Status in Human Cloning Considering the Laws in Iran. Simulation, especially of the human generation, is one of the most amazing technologies of genetics. Human cloning, after successful experiments in mammal simulation and scientists' predictions regarding its possibility in the current state of human science, has raised arguments in fields other than the experimental sciences, such as ethics, religion, and law. There are several perspectives regarding simulation lineage in Iran's laws: some believe there is no lineage since it is not through natural fertilization. Some others believe that if the owner of the cell is masculine, the father, and if it is feminine, the mother is simulated. And some others consider the parents of the cell owner as the simulated person's parents, so the simulated person will be a twin sibling of the cell's owner and the carrying mother will be a "Mother Rezaee". Introduction After successful experiments of cloning mammals and the prediction of scientists regarding its possibility at the current state of human knowledge, human cloning has led to disputes in domains other than empirical knowledge, including ethics, religion, and law. The act of cloning in biology refers to the duplication of living creatures without sex, in which case, in contrast to sexual reproduction, the resulting creatures do not possess the features of the male and the female, but are similar versions of the initial creature from which they were created. It means that we can consider them as "certified copies" of the initial creature. The scientists also call the type of new creatures which are genetically (hereditarily) analogous "clones" or "similars". Therefore, the process of asexual creation of a group of cells, molecules, or living creatures, which are all hereditarily similar to the same parent, is called "cloning". In Iran there is no legal rule regarding kinship. Of course, scholars in the law field have stated some ideas about 'kinship' and have tried to explain it. Some claim that, due to the lack of natural impregnation or of the sexual cells of man and woman in some cases, the kinship claims about them are not acceptable. But some others believe in the presence of kinship in human clonings due to the commonsense understanding and the correctness of the application of being born in such a case. Still others not only believe in the existence of kinship, but also consider the simulated person as the sister or brother of the cellule owner. Survey Kinship Status Considering Regulations in Iran The thing that is going to be investigated in the present survey is a discussion of kinship regarding simulated human beings. Different attitudes have been posed up to now about kinship in human cloning. Before starting to talk about issues related to kinship resulting from human cloning, a short discussion should be presented regarding the meaning of kinship and its importance. Following Shiat Feqh (religious scholarship), there is not any definition for kinship, but in the 8th book, 'kinship' has been defined as a minor title for 'children' and articles 1158 to 1167 are appropriated to this topic. The approval of kinship is deemed highly important."
‫من‬ ‫ق‬ ‫خل‬ ‫الذی‬ ‫و‬ ‫ه‬ ‫و‬ ‫قدیرا‬ ‫ربک‬ ‫کان‬ ‫و‬ ‫صهرا‬ ‫و‬ ‫نسبا‬ ‫فجعله‬ ‫بشرا‬ ‫الماء‬ " (And he is the God who has created human being fro m the water of sperm and devised kinship and relation based on marriage among them and your God is ab le to do whatever he wants) (Forghan,54).Therefore, in Islamic laws, there has been a specific attention to recognize the kinship and the very first right of a child conferred to it by God is kinship.One of the most challenging discussions regarding law discussions about human cloning is kinship (Izadifard & et al, 2009, 37). To investigate about the status of simulated human kinship first it should be stated that: according to article 957 of civil law, pregnancy is considered as one of civil laws, if the ch ild is born live (Mansour, 2007, P: 162).Now let's suppose that the results of medical researches have shown to be useful and an infant is born using human cloning method, and based on civil law it should benefit fro m the laws conferred.The infant is privileged by civil laws as included in civil law such as : kinship, fostering, alimony, inherence, … . The first step is to identify the intimacy and kinship of an infant because after it beco mes evident, the other items peripheral to kinship should be made clear. To identify the kinship we should refer to civil law and based on this law kinship is div ided into three groups of descendants, in-law kinship, and foster kinship.But in laws in Iran there is silence about the legal status of simu lated infants and also its descendants. Regarding article 167 of the constitutional law, it has been emphasized that: "the judge should try to find the verdict for each claim in civil laws and if there is not any one, he can refer to documents in valid Islamic resources or the valid 'Fat wa' (religious order) to find out the resolution and can not avoid dealing with the claim and avoid making decisions due to the silence or faults or controversy in devised civil laws" (Mansour, 2008, P: 101). Also the article 3 of the civil judiciary law, has been stated that: "if the related la ws are not complete or clear or have controversies, or if there is not any legal law regarding the issue, the verdict can be issued by documenting the valid resources or valid 'Fatwa' and legal p rinciples" (Mansour, 2009, P: 13). Therefore, we should refer to the ideas posed by scholars and Islamic literates (Foghaha) to identify the simu lated kinship. Before investigating this issue, it should be stated that human human cloning has some presuppositions and this makes it d ifficult to determine kinship. Now We Will Deal wi th Different Human Cl oning Presuppositions Below 1) The first state is when a couple has tested the opportunity to have child b irth but failed and decides to have a child through human cloning.In this case, the body cellule belongs to the h usband and his wife does have ovum and uterus. 2) Second state refers to the condition through which the body cellule belongs to the husband, but there are two wo men having ovum and uterus. 3) The third presupposition is related to the state through which the b ody cellu le belongs to the wo man and she owns ovum and uterus herself. 4) The fourth presupposition is related to the state through which the body cellule belongs to the woman and several wo men own ovum and uterus. 5) The body cellule belongs to a man and the owner of ovu m and uterus is one stranger woman. 
6) The body cellule belongs to a man and the owner of ovu m and uterus is several stranger women The thing that can be seen in laws and Feghh as kinship refers to the natural method of reproduction that is the same as blood relation and the intercourse between parents, and the impregnation of the sexual cellu les of both.Meanwhile, there is not any sexual cellu le in hu man cloning and it seems that due to such a reason many religious scholars do not consider such an infant to have kinship. To describe what was posed above, it should be stated that some of relig ious scholars and scientists believe that: the human being who is born through human cloning does not have a father (because there is not any sperm) and a mother (because there has not been sperm integration), and any brother or sister among the relatives and has been grown up in an ovum that does not belong to his mother.Instead, the mother is an alternate.In summary, it is someone without kinship.So me others believe that marriage only happens between a man and a wo man and thus they are called resources for a family.In such a way legal kinship happens when the natural father and mother of the infant are man and wife.In this case, this type of legal kinship has law effects for the ch ildren (resulted fro m this kinship), and the existence of such symptoms is denied regarding simulated children (Bo jnourdi, 2008, P: 28). Of course, there are some counterarguments too.Although sexual cellu le is not considered in hu man cloning, the genetic map of the mature cellu le will lack any substitute nucleus within sexual cellule and in th is way it has the exact function of a sexual cellu le and the information regard ing all body tissues will be activated there and there is no privilege along with sexual cellule.It seems that the cellule has changed into sperm or sexual cellu le.Amerinia (2007, P: 203). With another reasoning method we can criticize lack of kinship for a stimu lated child.In art icle 1167 of the civil law it has been stated that the child born through an illegal act does not belong to the doer.But the verdict of consensus issued by the high court in the country proved the opposite of this.The cases reported were: 1-the consensus verdict of 29 th August 1994: although there is not any verdict issued in civil law in Iran about the fostering of illegal children, due to the article 167 of the constitutional law and article 3 of civ il court law and the commonsense and obvious trends and the spirit of civ il law and the clear 'Fatwa' on the part of Imam Kho meini regarding the obligation to donation, 'in its general meaning status', the natural bearing of a ch ild is important and it means that the result of the natural kinship of a child to a father and a mother (lega l through religion or illegal regard ing the religion) would be considered as a criterion.By adulterer in art icle 1167 of civ il law, we mean either a man or a wo man who has committed it.Therefore, father and father's father, respectively, and then the natural mother of a child are responsible to afford for the child and the abandonment of this responsibility can lead to punishment. 2- The consensus verdict issued on 24 th June 1997: one of the responsibilit ies of Identification Card issuer Organization is to record the child b irth and to issue an ID card.The legislator does not make any d ifference between children born through a legal or illegal action. 
… but in some cases that a child is born through adultery and the adulterer does not try to get an ID ca rd, regarding co mmonsense and the application of what was pointed above and issue 3 and issue 47 o f judicial court regulations clarified by Imam Kho mein i (peace be upon him), the adulterer is considered as the commonsense father of the ch ild and as a result of it all responsibilities such as getting an ID card are conferred to h im and based on article 884 o f civil law, only the heredity issue between them is denied (Ghassemzadeh & et al, 2003, P: 414). Also among Imamieh Fogaha (Scholars in Emamieh sects), the late Mamghani, the author of Menhajolmottagin, has claimed that it is better to consider the ch ild born through adultery and the ordinary child the same in all kinship verdicts except heredity because there is a clear d ifference between a child born th rough adultery and ordinary children regard ing this issue.In other issues, it would be better to consider such a child as a son or a brother and so on and in verbal arts it is presupposed that the kinship should be considered dominant regarding a child born through adultery.Seyyed Mirza Hassan Mousavi-e-Bo jnordi, the writer of the book called Alghavaedolfaghiheh, has also been apparently in agreement with such a view (Safaee & Emami, 2007, P: 338). All that has been mentioned above leads us to the conclus ion that in Iranian judicial system, there is not any difference between the laws of a child born through adultery and the ordinary child and the illegal kinship is grouped in the same category as the legal kinship except in hered ity. There exists a second viewpoint regarding human cloning and it claims that the simu lated person has a kinship.This view is accorded with justice more than that of the previous one because as it was pointed out only a child born through adultery lacks kinship. The main reason of the second group to believe in the existence of a kinship is to understand commonsense.They claim that: the owner of the cellule is father if male and if it is female, is considered as the mother.Ayatollah Seyyed Mohammad Kazem Haeri stated that: "the issues related to father and mother is clear cut and due to the commonsense it is believed that father is the owner of the cellule and mother is the owner of ovu m.This means that in fact the ch ild birth is due to the sperm and ovu m of the father and moth er".Ayatollah Ezzaldin Zanjan i answered these questions: "what is the kinship of the child simulated?Is he the son of the owner of the cellule or twin brother of sperm?" in such a way that: although the person whose perm has created the child is not a conventional father and it has been simulated, the co mmonsense calls the birth o f it and this title is put both on the owner of the sperm and on the o wner of the ovu m o f the one who has grown it.As it was pointed out above, this group considers the reason to call kinship as the commonsense belief as birth for the child (Izadifard & et al, 2009, P: 28). 
Therefore, the belief of this group is based on commonsense.It means that if the co mmonsense is considered about who the father and mother of the child are?The co mmonsense does not doubt about this recognition and knows the owner of the sperm and ovum as father and mother of the child.But the reason to appoint such a responsibility to co mmonsense is due to the principal foundation that the identificat ion of kinship references and topical issues related to it is determined by co mmonsense regarding issues that have not been objected by the religion or basically have been identified through the observance of the accordance of laws and regulations.Accordingly, since kinship in hu man cloning is among issues whose constrains have not been identified clearly in holy religion the identification of the true nature of it is carried out by the commonsense to be judged.To respond this group, it was claimed that it is acceptable that commonsense can identify and conceive such issues.But the question is that whether the identification of this issue falls within the realm of the co mpetence of commonsense or it falls within the competence of the scholars and the specialists ? If it is claimed that there is not any difference between the two, and it can be recognized through principal fundamentals, there would arise this question: how does the understanding of this commonsense fall within the realm of novel and emerg ing topical issues that are reliable and documentary?(Izadifard & et al, 2009, P: 42). The reference nature of co mmonsense in comparing the concepts has a long history in 'Fegh' and a detailed discussion of it can lead us to do effo rts in vein.We only will refer t o evidence occurred in near to our time: Imam Kho mein i was someone who agreed on the institution called 'co mmonsense scholar' to co mpare the concepts and verdicts.He accepted the reference of scholars in co mpatibility in the most commonsense topics such as "scab" in fish let alone the co mplicated topics.Based on his order, a conference was held in Bandar-e-Anzali by a group of ecology scientists to investigate about the lawfulness of eating different fishes in the first half of the year 19983 and the result of the revision of scholars' ideas was that different types of fish in Khazar Lake have scab in some parts of their bodies especially on the top of their tails in the fo rm o f almond scabs.The scab does not really mean a fish is lawful to eat or not, bu t to achieve such an idea, the person can refer to himself or refer to the judging of the people or some scholars in different times.It should be noted that sometimes scab has clarity like other co mmonsense concepts and sometimes it is blurry and delicate and in sensitive and delicate issues, the idea posed by a scholar is more precise than commonsense. Ayatollah Makarem Shirazi answered a question in this way: "scholarly (Feghi) issues revolve around commonsense issues".A question was asked about whether the identification of necessity is a responsibility of commonsense or the person encountering a problem or scholars should express their ideas?He answered: "there are different cases.In simp le issues it is better to refer to co mmonsense and in comp licat ed issues, scholars should be asked to pose their ideas". 
Ayatollah Mousavi-e-Ardebily answered a question as follows: " kinship is a credential relationship and is extracted fro m the real and developmental issues such as the emergence and bearing of a ch ild naturally fro m its parents and does not require religious reality, this means that commonsense credits it and relig ion approves it (Izadifard & et al, 2009, P: 45).Therefore, kinship is among concepts that lack religious reality: it means that before Is lam, the source of evolution of a human being was a co mb ination of man's sperm and wo man's sperm was considered as the origin of kinship.This concept has only been denied regarding adultery action in Islam as it was pointed out in details in criticizing the viewpoints of the first group above (Izadifard & et al, 2009, P: 42). Anyway, commonsense is a way to understand realities.And the outlook of a scholar in adjusting commonsense concepts is prior to realities based on commonsense.The concept of kinship is among commonsense concepts and the scholars' understanding or the ideas of genetic engineers prove that the simu lated person belongs to the owner of the sperm that is the same as parents of the person that owns the cellule. A group of people that have claimed the simulated human being has kinship is divided into to subgroups: some consider the simulated person as the sister or brother of the cellule owner and consider the ovum as hired mother if she is not the same as the owner of the cellule.And some others have posed other viewpoints due to the mu ltip le segregations about human cloning mentioned at the start of the present survey. First we are going to deal with the v iewpoint of the first group that considers the simu lated person as the brother or sister of the cellu le owner. As it has been presented in scientific part , in sexual cellules couple nature plays an important role.This means that half of the chromosomes in a sperm belong to the man and half of it belongs to the woman's ovum.But in bodily cellu les, there is no couple nature, but a bodily cellule is copied and the ovum nucleus does not play any role in genetic function of fetus construction and the child born has more than 97 percent similarity to the cellule owner and inherits almost a reservoir of genetic info rmation of the cellule o wner.The more interesting point is that even regarding the age, the fetus scholars do not differentiate between cellule owner and colonized child.It means that unlike sexual cellu le representing that if the se xual cellule o wner is 25 years old and bears a child, the difference is 25 years it is not the case in bodily cellule.This means that if the nucleus of a 70 years old man is taken and simulated, the infant born will be 70 years old at the start of the birth. 
The reason to say that parents own the cellu le are considered as the parents of hu man cloning and cellule owner and simu lated person are considered as twins is that the parental kinship is resulted fro m coupling and the integration of genetic reserves.In hu man clon ing, coupling is approved indirect ly and with an intermed iary step, but coupling directly is co mpletely denied.In other words, the simulated child is more than 97 percent similar to the cellule owner due to explo iting genetic characteristics .They are even the same regarding the age.On the other hand the simulated person is made of cellu les resulted fro m the coupling of parents of the cellu le owner.This means that the bodily cellu les of each person inherits the characteristics and genetic informat ion of their parents and it shows that the so called simulated child has the half of the data of the father owner of the cellule and the other half has been inherited fro m the mother of the cellule owner in equal extents.There is more than 97 percent of the genetic informat ion of the cellule owner present.The cellule owner and simulated child are considered both brothers and sisters because both are formed by sexual cellules of another wo man and man who are considered as their parents ((Izadifard & et al, 2009). The proponents of this theory consider the mother owner of the ovum as the 'hired mother' for the state of human cloning in which the owner of sperm and the ovum are t wo separate women. They consider the wo man owner of the ovum as the 'h ired mother'.As if a wo man has milked a child of another wo man several times as a hired mother.In this case, since the child has been kept in ovu m and the child has been fed with the wo man and has grown, it can be stated that she has been the child's hired mother.It should be noted that hiring results in a mother and infant relationship in certain states among an infant who is fed by milk and the wo man who milks the infant in such a way that they become intimate because of kinship.Now, can't we consider a mother who has kept a fetus for 9 co mplete months in her uterus as a hired mother in co mparison with the laws accepted in Shiate sect?Especially because in h iring there should be milking and feeding the body of the infant, and this would be true regarding the wo man who has grown an infant in her uterus.Undoubtedly, there is such a priority here.Child nursery for 9 months is much more than the role of one day and night or 10 or 15 t imes milking a child.The p roponents of this theory consider the owner of u terus as the hired mother and it means that the child becomes intimate regarding the kinship with only this wo man because her uterus has been a resting place and the feed ing location and gro wth environ ment fo r the ch ild but is not intimate to her other ch ildren and can get married with any of them. 
Ayatollah Makarem-e-Shirazi answered a question as follows: since the titles such as father, brother, and sister can not be applied for such a person, the titles such as brother and sister are not considered as kinship titles for the child, but the woman whose uterus has been a place for growth and development for the child is considered as a hired mother who has fed and there has not been any fetus in this process.In such a case, he can not get married with that mother because she is like a h ired mother since the flesh and skin of that wo man have been mixed with the child's.Therefore, they are not intimate unless for more caution (Izadifard & et al, 2009, PP: 49-50). To define the priority of analogy, it should be stated that: we mean an analogy through which the reason for the verdict in peripheral is stronger than the main such as the statement of 'alas!' to parents that has been deemed as something to be abandoned and the verse requires to be follows because p arents should not be tortured.This reason is stronger and more in insulting and thus insulting parents would be considered as a sin either (Mohammad i, 2011, P: 196).Some others have a different idea regarding the approval o f kinship for the simu lated person regarding a state of human cloning through which the owner of the uterus and the owner of the ovum are t wo distinct wo men. Undoubtedly, children who are born through the use of human cloning are considered as twins whose parents are the same but there is a debate regarding that what would be the kinship relat ion between this child and the wo man who has grown it in her uterus and was not formed of her own ovu m (holder mother)? It can be said that: the holder mother is the relig ious mother not the owner o f the ovum because of the Quranic verse that stated: " ‫امهاتهم‬ ‫هن‬ ‫ما‬ ‫نسائهم‬ ‫من‬ ‫منکم‬ ‫یظهرون‬ ‫الذین‬ ‫ولدنهم‬ ‫الالئی‬ ‫اال‬ ‫امهاتهم‬ ‫ان‬ " (Mojadeleh, verse 2) (Those who think their wives are the same as their mothers, be sure that they are not their mothers.Their mothers are those that have born them) (Haeri, 2008, P: 39). Although this verse speaks about similarity and the rejection of the ideas of those in illiterate era of Arabs who thought wives to be as their mothers and avoided to get married with them, but it apparently refers to the fact that a wo man who bears a child is considered as his mother (Seyyedi-e-Bonabi & Rah impour, 2007, P: 69). Therefore, some scholars such as the writer o f the book entitled 'Jawaher' (the Jewelry), have considered being born fro m a wo man as a symptom of kinship of a ch ild to the mother (Sadeghi, 2005, P: 84).This idea is not comp lete because the kinship mother has a clear criterion regarding the wise men and it refers to the fact that the fetus is gained fro m the ovum of a wo man and this criterion is not present in human cloning for the woman who owns ovum.The verse does not deny this issue and if we refer to this verse we can clarify it.When a man thin king of the similarity tells her wife: you are like my mother to me and calls her 'mother' and considers intercourse with her as a sin, the Great God rejects his idea with this verse: Those who think their wives are the same as their mothers, be sure that they are not their mothers.Their mothers are those that have born them (Bo jnourdi, 2008, P: 30). In the impo rtant debate on rental uterus there is the imagination that due to the verse 2 of Mo jadeleh Surah, they consider the owner of the uterus as the mother not the woman that owns ovum. 
The important point here is that in rental uterus, which of these two relationships could be considered as the criterion for mother and child relationship.Regarding the medical knowledge approved, the origin of emergence and the builder cellule of a fetus is the mother and ovum of a wo man and there is no doubt about that. According to the recent medical data, the uterus of a wo man can play several ro les in feed ing, growth, … of a fetus.But we can not find in any medical book that the woman's uterus plays the initial role in emergence and even in evolution of a fetus.Thus, in fact the child is a product of a wo man's ovum.Therefo re, the criterion to be a mother regarding co mmonsense is the same as being a father.Co mmonsense considers a woman as mother who has a role in the very first stage of the creation and the emergence of a fetus.Of course, this theory is compatible with the outlooks of most of Shiate scholars such as Imam Kho meini.Also some of scholars in Sonnah caste such as Mostafa Zargaa and Yousof Gharzav i agree with such a viewpoint (Fo uladian, 2009, P: 123). In addition to these cases, we encounter with some main problems regarding some of human cloning presuppositions.For examp le, in third presupposition of the items mentioned for hu man cloning at the start of this section, the fact that the bodily cellu le belongs to a woman and the ovum and the uterus belongs to the wo man herself or in fourth presupposition that the bodily cellule belongs to a woman but the ovum and uterus belongs to several wo men, there would be surely a father fo r the simulated child in all these cases and this would be one of the challenges posed in relig ious (fegh) section of this article.Also in fifth presupposition where a bodily cellule belongs to a man and the owner of the ovu m and uterus is the same wo man and some different ones, due to the lack of the coupling relat ionship between these two there would be some problems in kinship of the simulated child.Accordingly, in sixth presupposition where the bodily cellule belongs to a man and the owner of the ovum an d uterus is several different wo men, there would be the same problems. Conclusion This research showed that a considerable number of ethical reasoning and even religious logics to avoid or approve this technology is related to the imag ination of hu man b eings' hu man cloning with all those scientific amb iguities. We can criticize an issue or approve that when it would surely happen.Meanwhile, scientists are still in doubts about human human cloning and also doubt about the safety of the simulated child.So, such an idea can not be completely approved or rejected. It can not be approved because in current human cloning status, the human knowledge has not been able to guarantee the safety in animal human cloning and it can not be applied in human being.Als o it can not be rejected because we can not put obstacles in front of scientific advances for any reason. Of course, it should be noted that the complete scientific success in this field can not be considered as a license to do so, because human cloning is not a solely scientific problem, but it has a close relationship with psychology, sociology, and law. It should be precisely investigated that if one day it becomes possible to do human hu man cloning, would the science branches mentioned let this action be carried out. 
But regarding the status of simulated kinship, as can be observed, there are plenty of ideas expressed regarding the identification of kinship, and it seems that the reason for all these different and opposing viewpoints lies in the fact that each of them tries in a way to approximate the relationship between the bodily cellule owner and the simulated infant to kinship intimacy, and this is an incorrect conception. Because, as was explained before, the criterion for kinship intimacy is the impregnation of sexual cellules, and there is no sexual cellule in human cloning. Additionally, the criterion of kinship intimacy is compatible with natural methods, not with the new method of human cloning. To determine the type of relationship and intimacy between the cellule owner and the simulated child, we need to devise a new legal system that has not existed up to now, and of course this seems a problematic issue. This new law establishment can be considered in a way that the lawyers divide kinship intimacy into two sections: 1) the kinship relation resulting from the impregnation of sexual cellules; 2) the kinship relation resulting from the bodily cellule. Of course, this is a type of theory and can be rejected or approved. The reason to oppose it is that the bodily cellule cannot be categorized as blood relationship and kinship intimacy, and the reason to approve it is that when a bodily cellule is placed in a sexual cellule without any nucleus, there is no differentiation between the fetus and a sexual cellule. Anyway, we should notice that the factor and cause of creating a simulated child is the cellule owner, and thus the existence of a type of relationship and integration between these two seems inevitable. We should not ignore the simulated child due to the novelty of human cloning and belief in the natural method of reproduction, and deprive him of the laws of the society if he is born. It seems that the announcement of a simulated child as someone without kinship is a way to avoid the problem; meanwhile, we should try to express our ideas about this issue to avoid such differences in opinion. The creation of a new term to refer to the type of relationship between the cellule owner and the simulated child does not seem an absolute necessity, because terms are conventional and credential and it is the human being that creates them. Thus, we can use the same terms of father or mother for this type of relationship as well. It is better to accept human cloning (if the simulated child is completely safe regarding medical science) in a conditional and constrained status; for example, we can approve that only those couples can use human cloning who suffer from infertility and are interested in having a child who can have a biologic kinship with them, because this is not possible through fetus donation. In this case, it does not even seem that the legal status of such a child encounters any problems. Only in this case can we avoid frequent controversies in ethical, religious scholarship, and lawful issues. In other words, maybe the only reason to issue a human cloning license would be to help infertile couples under certain conditions, and this is the true resolution.
Geographic information system-based mapping of air pollution & emergency room visits of patients for acute respiratory symptoms in Delhi, India (March 2018-February 2019) Background & objectives: Studies assessing the spatial and temporal association of ambient air pollution with emergency room visits of patients having acute respiratory symptoms in Delhi are lacking. Therefore, the present study explored the relationship between spatio-temporal variation of particulate matter (PM)2.5 concentrations and air quality index (AQI) with emergency room (ER) visits of patients having acute respiratory symptoms in Delhi using the geographic information system (GIS) approach. Methods: The daily number of ER visits of patients having acute respiratory symptoms (less than or equal to two weeks) was recorded from the ER of four hospitals of Delhi from March 2018 to February 2019. Daily outdoor PM2.5 concentrations and air quality index (AQI) were obtained from the Delhi Pollution Control Committee. Spatial distribution of patients with acute respiratory symptoms visiting ER, PM2.5 concentrations and AQI were mapped for three seasons of Delhi using ArcGIS software. Results: Of the 70,594 patients screened from ER, 18,063 eligible patients were enrolled in the study. Winter days had poor AQI compared to moderate and satisfactory AQI during summer and monsoon days, respectively. None of the days reported good AQI (<50). During winters, an increase in acute respiratory ER visits of patients was associated with higher PM2.5 concentrations in the highly polluted northwest region of Delhi. In contrast, a lower number of acute respiratory ER visits of patients were seen from the ‘moderately polluted’ south-west region of Delhi with relatively lower PM2.5 concentrations. Interpretation & conclusions: Acute respiratory ER visits of patients were related to regional PM2.5 concentrations and AQI that differed during the three seasons of Delhi. The present study provides support for identifying the hotspots and implementation of focused, intensive decentralized strategies to control ambient air pollution in worst-affected areas, in addition to the general city-wise strategies. The national capital of India, Delhi, is amongst one of the most-polluted cities of the world 1,2 and is facing major environmental challenge 3 . In Delhi, per 10 unit increase in the concentration of sulphur dioxide and particulate matter (PM) 10 increases respiratory disease-related hospital visits by 83 and 0.21 per cent, respectively, at a previous lag of 0-6 days 4,5 . Ambient air pollutant levels of PM 2.5 PM 10 , sulphur dioxide (SO 2 ), nitrogen dioxide (NO 2 ), ozone (O 3 ) and carbon monoxide (CO) in Delhi have been reported to exceed Indian National Ambient Air Quality Standards 6 . Fine particles having diameter of ≤2.5 μm (PM 2.5 ) are more hazardous to human health than other pollutants and are used as a common measure for air pollution 1,7 . Smaller size PM is associated with higher fraction of redox-active components and is, therefore, highly toxic 8 . In 2017, Delhi reported having the highest annual population-weighted mean PM 2.5 concentration (209 μg/m 3 ) 9 much above the Indian (40 μg/m 3 ) 10 and WHO-recommended limit (10 μg/m 3 ) 2 . 
The alarming levels of PM 2.5 are regional problem and are significantly contributed by vehicular (20%) and industrial emissions (11%), cooking related emissions, biomass burning, construction activities, burning of Kharif (local term for monsoon or autumn crops) crop residue, windblown dust, Diwali fireworks, etc. 1,3,[10][11][12][13] , and meteorological factors (temperature, relative humidity and wind velocity, etc.) 14 . The major chemical components of PM 2.5 include secondary inorganic aerosol (16-28%), organic matter (13-20%), elemental carbon (4.6-6.3%), chloride (4.5-7.9%) and metals (14-24%). The levels of PM 2.5 are ~20-30 per cent higher in winter than in the summer months due to high total secondary aerosols and combustion-related total carbonaceous matter (elemental carbon + organic matter). However, the crustal matter is observed to be higher in summer (42%) than in winter (9%) 1,13 . Various time-series studies suggest that short-term exposure to PM 2.5 can induce acute respiratory symptoms such as cough and difficulty in breathing and aggravation of preexisting condition [15][16][17][18][19] while long-term exposure to PM 2.5 can result in the development of cardiovascular and respiratory diseases 7,15 . Several epidemiological studies have documented that ambient PM levels in Delhi are associated with increased respiratory morbidity and support for identifying the hotspots and implementation of focused, intensive decentralized strategies to control ambient air pollution in worst-affected areas, in addition to the general city-wise strategies. Key words Air pollution -AQI -ambient -children -Delhi -emergency room visits -geographic information system -PM 2.5 mortality 5,16,[20][21][22][23][24] . Air quality index (AQI) is a single index value used to provide information on daily air quality status and its associated health effects to public 15,25,26 . PM 2. 5 1,14 and AQI 27 can vary depending on seasons, time of the day and locations in the same city 26 . Various studies from India 11,14,28,29 and other parts of the world [30][31][32] have explored geographic information system (GIS)based tools, e.g., inverse distance weighting (IDW), kriging, etc., to estimate the spatio-temporal distribution of air pollutants and its associated heath impact. GIS is a powerful technique that can be used to accurately analyze the spatial and temporal patterns of respiratory morbidity 32 , chemistry of pollutants and localization of the area of potential threat 33 . Therefore, integration of GIS can aid in understanding environmental health modelling, infrastructure planning, transport monitoring, public transit planning, etc. A study from Delhi revealed that PM 2.5 concentrations had very high temporal and spatial variations 14 . However, there is no study available in Delhi that has used GIS technique to assess the spatial and temporal association of ambient air pollution with daily counts of emergency room (ER) visits of patients related to acute respiratory symptoms. Therefore, this study was aimed to determine the spatio-temporal relationship between variation of PM 2.5 concentrations and AQI with ER visits of patients having acute respiratory symptoms in Delhi, India, using GIS techniques. Clinical data: Daily counts of ER visits of four study hospitals of Delhi was recorded to obtain the data for acute respiratory ER visits during the study period. 
All children visiting the ER of AIIMS (South West) and Kalawati Saran Children Hospital (Central) and adults visiting ER of AIIMS, National Institute of Tuberculosis and Respiratory Diseases (South West) and Vallabhbhai Patel Chest Institute (North) were screened round-the-clock for enrolment. Eligible children (0-15 yr) and adults were included only if, on presentation, they reported acute onset (less than or equal to two weeks) of respiratory symptoms or an acute exacerbation of a pre-existing lung disease in the last two weeks and were currently residing in Delhi (staying continuously for at least four weeks). The patients who were not available because of investigations or procedures or did not provide informed written consent to participate were excluded from the study. Residential PIN code along with demographic and clinical data were recorded. All four participating hospitals obtained approval from their respective Institutional Ethics Committees. Material & Methods Air pollution data: Daily air quality, viz. 24 h average values for PM 2.5 , AQI and meteorological variables (temperature and relative humidity) were obtained from DPCC for 22 CAAQMS 16,35 . The description of 22 CAAQMS along with longitude, latitude and districts is presented in Table I. AQI was used to assess the air quality status of the city. AQI given by the Central Pollution Control Board in 2014 was calculated by transforming realtime hourly concentrations of various air pollutants into single index value 15,36 . AQI varied from 0 to 500; values were categorized as: good (0-50), satisfactory (51-100), moderate (101-200), poor (201-300), very poor (301-400) and severe (401-500). The higher the value of AQI, the greater the level of air pollution. PM 2.5 breakpoints were categorized as: good (<30 μg/m 3 ), satisfactory (31-60 μg/m 3 ), moderate (61-90 μg/m 3 ), poor (91-120 μg/m 3 ), very poor (121-250 μg/m 3 ) and severe (>250 μg/m 3 ) 15,36,37 . The time period of the study was divided into three seasons, viz. summer (March, April, May and June), monsoon season (July, August and September) and winter (October, November, December, January and February). In order to have the overall picture of air pollution levels in Delhi, the average PM 2.5 and AQI was calculated for every location in three seasons. The total study duration of 365 days was divided according to AQI categories in each of the three seasons 37 . Geographic information system mapping: GIS tools were used to study the spatio-temporal changes in PM 2.5 levels, AQI and the associated daily counts of acute respiratory ER visits for the study duration. CAAQMS at different locations of a city represents the ambient air pollution for a particular point. Therefore, inverse distance weighted (IDW) interpolation method was used to spatially predict PM 2.5 concentrations at unmeasured locations in the study area 31 . In order to locate spatially the number of enrolled patients coming from a particular area in a particular season, the numbers of enrolled patients were mapped by PIN code corresponding to each patient's residential address. The maps were plotted for winter, monsoon and summer seasons on the basis of breakpoints PM 2.5 pollutant and AQI. These maps were then compared in relation to air quality and number of acute respiratory ER visits of patients during different seasons and at different locations of Delhi. The analysis was done using ArcGIS software, version 10.3.1. (USA). 
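The study computed the IDW surfaces in ArcGIS; for readers without access to that software, the sketch below illustrates the same inverse distance weighting idea in plain NumPy. The station coordinates, PM 2.5 values and the power parameter are made-up placeholders, not the study's settings.

```python
import numpy as np

def idw_interpolate(station_xy, station_pm25, grid_xy, power=2.0):
    """Inverse distance weighted (IDW) estimate of PM2.5 at unmeasured points.

    station_xy  : (n_stations, 2) array of monitor coordinates
    station_pm25: (n_stations,) array of measured PM2.5 values
    grid_xy     : (n_points, 2) array of locations to predict
    """
    # pairwise distances between prediction points and stations
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)            # avoid division by zero at station locations
    w = 1.0 / d ** power               # closer stations get larger weights
    return (w * station_pm25).sum(axis=1) / w.sum(axis=1)

# Hypothetical example: three monitoring stations and two prediction points
stations = np.array([[77.10, 28.70], [77.25, 28.65], [77.30, 28.55]])  # lon, lat (illustrative)
pm25 = np.array([210.0, 145.0, 95.0])                                  # µg/m3 (illustrative)
targets = np.array([[77.20, 28.68], [77.28, 28.58]])
print(idw_interpolate(stations, pm25, targets))
```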
Statistical analysis: Pearson's correlation analyses of PM 2.5 levels with temperature and relative humidity were performed. Linear regression models were built for three seasons to assess the relationship between: (i) total enrolled cases and PM 2.5 levels and (ii) duration of acute respiratory symptoms and indoor air pollution indicators (such as choice of cooking fuel, smoker at home, smoker and separate kitchen). The analysis was performed using "mgcv" package in R-software version 3.6 (https://cran.r-project.org/web/packages/ mgcv/mgcv.pdf). Results During the study period, a total of 70,594 patients attending ER were screened from ER of the participating hospitals. Of these, 18,063 were found eligible of having acute respiratory symptoms (less than or equal to two weeks) and residing in Delhi for the past four weeks. Table II presents with worst AQI were observed in winter season, while relatively lower concentrations of PM 2.5 with moderate AQI were noticed during summer, and minimum concentrations of PM 2.5 with satisfactory AQI were observed in monsoon. The daily ER visits of patients having acute respiratory symptoms mirrored this seasonality in pollution (most in winter, followed by summer and monsoon season). Maximum relative humidity was recorded in monsoon while it was at its minimum during summer. The PM 2.5 concentrations had significant negative correlations with temperature (r=−0.593, P≤0.001) and relative humidity (r=−0.249, P≤0.001). Of 365 days, 64 out of 151 days in winter were reported to have 'very poor' AQI. In summer, concentrations and AQI recorded from 22 CAAQMS in Delhi. During the study period, the lowest annual mean PM 2.5 concentration was observed at Sri Aurobindo Marg, whereas the highest PM 2.5 concentration was observed at Anand Vihar. The lowest annual mean AQI was in 'poor' category observed at Dr Karni Singh Shooting range station, whereas the highest AQI was in 'very poor' category reported at Anand Vihar. Figure 3 shows the spatio-temporal relationship between seasonal variations of AQI obtained from 22 CAAQMS of Delhi vs. enrolled cases in three seasons vs. monsoon, summer and winter. The seasonal mean AQI of 22 continuous monitoring stations ranges from 'poor' to 'severe' and the number of enrolled cases was relatively higher in winter months (Fig. 3C) contrasting with the relatively better AQI observed in summer (Fig. 3B) and monsoon months (Fig. 3A). As shown in Figure 3C, during winter season, the number of enrolled cases was high in the 'severely polluted' northwest region (n=1886, PIN code 110 086) of Delhi having high AQI (433) observed at Wazirpur station. The number of enrolled cases reporting at ER was low from the 'poorly polluted' southwest region (n<5, PIN code 110 038, Rajokari; 110 060, 66, 72, 97, etc.) of Delhi having comparatively low AQI (290 and 291) observed at Sri Aurobindo Marg and Najafgarh station, respectively. Discussion The study examined the spatio-temporal relationship between seasonal variation of PM 2.5 concentrations, AQI and related ER visits of patients having acute respiratory symptoms in Delhi. During the study period, the annual average of PM 2.5 concentration (120.6±87.0 µg/m³) in Delhi exceeded the Indian-recommended limits 10 . Such finding has been consistently reported by previous Delhi-based studies 3,6,9,14,37 . There was not a single day to register 'good' AQI 37 . AQI was 'poor' for most of the winter days compared to summer and monsoon reporting 'moderate' and 'satisfactory' AQI, respectively. 
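As a rough illustration of the statistical analysis described above, the following Python sketch computes Pearson's correlations of PM 2.5 with temperature and relative humidity and fits a simple linear regression of daily ER visits on PM 2.5. The data frame is entirely hypothetical, and the study itself used the "mgcv" package in R rather than this Python code.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical daily data; column names and values are placeholders, not the study's variables
df = pd.DataFrame({
    "pm25": [310.0, 180.0, 95.0, 250.0, 60.0],        # daily mean PM2.5 (µg/m3)
    "temperature": [12.0, 22.0, 30.0, 15.0, 33.0],    # daily mean temperature (°C)
    "rel_humidity": [78.0, 55.0, 40.0, 70.0, 35.0],   # daily mean relative humidity (%)
    "er_visits": [92, 60, 38, 80, 30],                # daily acute respiratory ER visits
})

# Pearson's correlation of PM2.5 with meteorological variables
r_temp, p_temp = stats.pearsonr(df["pm25"], df["temperature"])
r_rh, p_rh = stats.pearsonr(df["pm25"], df["rel_humidity"])
print(f"PM2.5 vs temperature: r={r_temp:.3f}, p={p_temp:.3g}")
print(f"PM2.5 vs relative humidity: r={r_rh:.3f}, p={p_rh:.3g}")

# Simple linear regression of daily ER visits on PM2.5
X = sm.add_constant(df[["pm25"]])
model = sm.OLS(df["er_visits"], X).fit()
print(model.params)   # intercept and slope (ER visits per unit PM2.5)
```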
The minimum annual mean PM 2.5 concentration was noticed at Sri Aurobindo Marg situated in south-west Delhi district 5 . The maximum PM 2.5 concentration was observed at Anand Vihar 37 , which is one of the most polluted areas in Delhi due to high traffic congestion and is affected by emissions from road dust, industries, commercial activities of hotels, etc. 27,37 . The variation observed in the PM 2.5 levels in the present study could be due to the type of emission sources 1,10-12 and meteorological factors 14 . Several Indian reports 5,12,14,[27][28][29] have recognized that there is significantly high seasonal and regional variation in ambient air quality and prevalence of respiratory symptoms. Low temperature and high relative humidity play a vital role in the formation and rise in PM 2.5 levels, thereby resulting in evident seasonal variation in air quality 3 . In the current study, a weak and negative correlation was found between PM 2.5 versus temperature and relative humidity 38 . GIS analysis showed that spatio-temporal association between air pollution and the number of ER visits for acute respiratory symptoms varied across different geographic areas of Delhi during three different seasons. In winter, ER visits of patients having acute respiratory symptoms were higher from the north-west region of Delhi exposed to very high air pollution recorded at Wazirpur and Jahangirpuri station. Furthermore, enrolled children had negative, whereas adults had a positive association with PM 2.5 levels. Adults are more likely to be exposed to outdoor environmental pollution and even for longer duration, for example, commuting to work place; hence, they might have different intensity of cumulative exposure than children. In contrast, the southwest region of Delhi was less polluted (Najafgarh and Sri Aurobindo Marg station) corresponding to the low ER visits of patients. These results were in line with a previous study from Kanpur, India, which concluded that individuals with respiratory disease were at greater risk of hospital visits than those residing in low polluted area 28 . The study had some limitations. First, the present study, we used a wide network of air quality monitoring stations spread around the city; however, the hospitals from each region of Delhi could not be included for collecting daily counts of acute respiratory ER visits. Second, adjustment for differential lifetime exposure to environmental pollutants of children and adults was not possible. Third, assessment of personal exposure at home and workplace to air pollution was practically not feasible in large sample, which could better illustrate the health impact of ambient air pollution 32 . Fourth, we could not study the sources of regional emission that could have helped us to know the reason for observed pattern. Fifth, we could not take into account the levels of indoor air pollutants and individual exposures to the pollutants that might have affected the quality of associations. Despite these limitations, the present study had several strengths. First, this multisite study had covered a large sample size in Delhi, India. Second, it was possible to study the role of seasonal variation in spatio-temporal association between ambient air pollution and acute respiratory ER visits in Delhi using novel GIS tools. Regional health impact was estimated based on PM 2.5 levels and AQI. Although AQI is not a refined tool, it is an easily understandable generic information tool and help drafting advisories issued to the public. 
Third, we obtained 24 hourly real-time air quality data of Delhi from 22 newly installed DPCC CAAQMS on the daily basis, which contributed to high spatial resolution and robust results. To strengthen the findings of the present study, systematic investigations are needed in Delhi to: (i) establish adequate monitoring system for air quality and health outcomes, (ii) demonstrate the causal relationship between air pollution and associated health outcomes and (iii) identify emission sources and their contribution to air pollution and economic evaluation of health impact of air pollution 1,29 . Severity of air pollution starts from post-monsoon and continues through winter. We acknowledge that indoor air pollution has a significant role to play in the type of investigation we conducted. In the present study, poor air quality was observed for majority of the days throughout the year in Delhi. Therefore to gain potential health benefits, effective measures to control air pollution should be executed throughout the year instead of focusing only during highly polluted winter season. In India, air pollution is one of the causes for producing damaging health effects and not the only cause as there are many aspects such as socio-economic issues, living conditions, location of emission sources, land use pattern, occupational exposure, food habit, other health ailments, etc. Growing evidence suggests that specific toxic compounds of PMs produce harmful effects on lungs and are carcinogenic and genotoxic. The exposure to PM may induce inflammatory response, oxidative stress, hormone dysregulation 39 as well as placental dysfunction 40 . Therefore, geospatial distribution of respiratory diseases associated ER visits may serve as a good tool for location-based prevention and control of outdoor air pollution. The present study findings provide reference for decisionmakers to improve air quality and related health outcomes at specific locations in Delhi. The study also provides, relevant information for public to modify their outdoor behaviour according to exposure to varying ambient air pollution. Overall, the acute respiratory symptoms related ER visits of patients were associated with PM 2.5 concentrations and AQI that varied by regions and seasons in Delhi. The study provides GIS-based scientific evidence for policy-makers to make adequate regional monitoring and emphasize localized improvement strategies for management of air pollution and associated respiratory health outcomes in Delhi.
2023-03-18T06:17:47.128Z
2022-10-01T00:00:00.000
{ "year": 2023, "sha1": "fabb3b0c212e173a390552235dac3df19254fb5f", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "6b211efda525a8c058fd052f730eafe919cdfca6", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Medicine" ] }
120372257
pes2o/s2orc
v3-fos-license
A new model of turbulent relative dispersion: a self-similar telegraph equation based on persistently separating motions Turbulent relative dispersion is studied theoretically with a focus on the evolution of probability distribution of the relative separation of two passive particles. A finite separation speed and a finite correlation of relative velocity, which are crucial for real turbulence, are implemented to a master equation by multiple-scale consideration. A telegraph equation with scale-dependent coefficients is derived in the continuous limit. Unlike the conventional case, the telegraph equation has a similarity solution bounded by the maximum separation. The evolution is characterized by two parameters: the strength of persistency of separating motions and the coefficient of the drift term. These parameters are connected to Richardson's constant and, thus, expected to be universal. The relationship between the drift term and coherent structures is discussed for two 2-D turbulences. Turbulent transport and mixing underlie wide range of phenomena from star formations [1] to coffee in a cup. However, the mechanisms of their significant deviation from molecular counterparts are not well understood yet even in such a simple case as relative dispersion of passive particles. In the inertial subrange, reflecting the scaling law of turbulent velocity fluctuation, well-known Richardson's t 3 law is realized [2,3]. For the probability density function (PDF) of separation r, P (r, t), quite a few models and theories consistent with Richardson's law have been proposed [4]. Besides recent progresses of particle tracking techniques and numerical simulations enable direct experimental investigations of the separation PDF [5,6,7]. Although accuracies of the experiments are not enough, most of their results are so far close to the prediction of Richardson's diffusion equation [2]: where K(r) ∝ r 4/3 in the Kolmogorov scaling [3] and d is the spatial dimension. This closeness indicates Eq. (1) is a good basis for description of turbulent relative dispersion. Eq. (1) ought to be exact only if the relative velocity is δ-correlated in time [4,8]. However, there are spatio-temporal correlations in turbulent flows as implied by existence of coherent structures. This indicates relative velocity cannot be δ-correlated in time. To resolve the inconsistency, Eq. (1) has to be extended to include these correlations. As mentioned in detail below, we treat these correlations as those not in time but in scale-space. That is, we focus on correlations between scales r and ρr, where ρ is a scale multiplier. This treatment is appropriate for describing self-similarity. Employing multiplescale consideration with correlation in scale, we derive a telegraph equation with scale-dependent coefficients, Eq. (11), and obtain a similarity solution, which never coincides with Richardson's one markedly in the tail part even in the long time limit. Coherent structures observed in turbulence must share their origin with finite correlation and self-similarity, so that particles appear to be advected according to coherent structures. For example, in two-dimensional (2D) inverse cascade (IC) turbulence, particles separate step-bystep through nested cat's eye vortices with scattering by stagnation points [7,10]; in 2D free convection (FC) turbulence, particles separate through advection by stretching and folding plumes ( Fig. 1) [9]. Clearly, separation processes in these two systems are different. 
However, despite these differences, effects of coherent structures on relative dispersion appear in the same way, i.e., persistent separation: persistent expansion and compression of a relative separation. Sokolov et al. model such motions based on Lévy walk. Their model consists of persistent separation ceased by probabilistic turn in direction [8,11]. They introduced also the persistent parameter P s , the ratio of the correlation length to the scale. The telegraph model was introduced to implement (I) a finite diffusion speed, and (II) a finite correlation time to Shading represents intensity of vorticity. The scale denoted by the straight line is 100η θ and in the inertial subrange, where η θ is the thermal Kolmogorov scale [9]. the diffusion process [12,13]. It has been widely applied to various diffusion phenomena from molecular diffusion [13] to population dynamics [14]. To satisfy the self-similarity of turbulence, (I) and (II) are extended to be scale-dependent according to the following scaling assumptions: v(r) = Ar 1−g and T c (r) = r/v(r) = A −1 r g , where v(r) and T c (r) are relative velocity and characteristic time, A a constant [18], and g a scaling exponent [19]. First, we focus on a scaler in the inertial subrange. The correlation length and time for a separation to be expanded or compressed persistently are defined as P ± sr and P ± s T c (r), respectively. We assume P ± s is the order of unity, and then, ρ out = 1 + P ± s . We call this scale outer. We consider the evolution of probability density of the separation in a small region aroundr where its extent is much smaller than P ± sr . Since P ± sr and P ± s T c (r) are regarded as constants, we can apply the approach deriving the telegraph equation to this small region [12]. We call this scale inner. We divide the inner region into shells defined as [r n , r n+1 ) where r n =rξ n , n is integer, and ξ is close to unity, that is, ρ in = ξ n . Since ρ in − 1 ≪ ρ out − 1, we deal with two scales in scale-space. To take into a finite speed, pass-through time τ n of the n-th shell, the time for a relative separation to expand (compress) through the n-th shell, is defined as follows: where γ = log ξ (γ ≪ 1), and v(r n ) is the relative velocity at a spatial scale r n . This relation means O(τ n ) = O(γ), which is the key difference from the diffusion equation case including Richardson's equation. We introduce probabilities Q ± n : the probability for a relative separation to be expanding (Q + n ) or compressing (Q − n ) in the n-th shell. Transition probabilities ∆p ± (r n ), the probability from expansion to compression (+) and the opposite (−) during τ n , must be self-similar in outer scale. Therefore the simplest form of ∆p ± (r n ) is where P ± s T c (r) are correlation times depending on the direction and λ ± = 1/P ± s . Note that P ± s T c (r) are outer scale and, thus, constants in the inner region. This form was first given by Sokolov et al. [11]. Unlike their model, we assign different values to λ ± to represent difference of persistency between expansion and compression. In the limit of infinite speed and δ-correlation, the first term of Eq. (11) disappears, which is the same form as Palm's equation (see p.575 of [3]) [20]. Non-Richardson terms, the first term in the l.h.s. and the last one in the r.h.s. of Eq. (11), describe effects of persistent separation. The last term in the r.h.s. of Eq. (11) is a drift term consistent with the scaling assumptions. 
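To make the ingredients of this picture concrete, the following Monte Carlo sketch evolves an ensemble of pair separations through logarithmic shells with a scale-dependent pass-through time and a probabilistic turn per shell crossing. It is only an illustration of the persistent-separation idea under the stated scaling assumptions (symmetric persistence, g = 2/3, A = 1, arbitrary initial separation and cut-off); it is not the authors' derivation, and reproducing the full behaviour of Eq. (11), including the drift term, is beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

A, g = 1.0, 2.0 / 3.0           # scaling assumptions: v(r) = A r^{1-g}, T_c(r) = r^g / A
gamma = 0.05                    # log-width of a shell, xi = exp(gamma)
Ps_plus, Ps_minus = 1.0, 1.0    # persistence parameters (taken symmetric for simplicity)
n_pairs, r0, t_end = 2000, 1e-2, 5.0

r = np.full(n_pairs, r0)                      # current separations
s = rng.choice([1.0, -1.0], size=n_pairs)     # +1 expanding, -1 compressing
t = np.zeros(n_pairs)
active = np.ones(n_pairs, dtype=bool)

for _ in range(100_000):                      # hard cap on steps, in case some pairs linger
    if not active.any():
        break
    idx = np.where(active)[0]
    tau = gamma * r[idx] ** g / A             # pass-through time of the current shell
    # turn probability during the crossing: tau / (Ps * T_c(r)) = gamma / Ps
    p_turn = np.where(s[idx] > 0, gamma / Ps_plus, gamma / Ps_minus)
    flip = rng.random(idx.size) < p_turn
    s[idx[flip]] *= -1.0
    r[idx] *= np.exp(gamma * s[idx])          # move one shell outward or inward
    r[idx] = np.maximum(r[idx], 1e-4)         # crude inner cut-off
    t[idx] += tau
    active[idx] = t[idx] < t_end

# For Kolmogorov scaling (g = 2/3) the full model predicts <r^2> growing roughly as t^(2/g) = t^3
print("mean square separation near t =", t_end, ":", np.mean(r ** 2))
```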
The drift velocity is −σv(r) and, hence, the direction is determined byσ which consists of the "scaling-determined" part, d − 2g, and the "dynamics-determined" part, δ. On the other hand, in the maximum separation regime, the frontal edge of the PDF is abrupt at r max . The functional form of the edge is approximated to the first order by We call this limit the telegraphic regime. From this expression, it is clear that if λ < d, P (r, t) → ∞ as r → r max . That is, most of particle pairs are accumulated at the frontal edge, where relative separations expand away without changing their directions. However, this situation is unrealistic in real turbulence, so that λ is considered to be greater than d. Control parameters of our model are λ andσ. The functional form of the PDF, F (ξ), is determined by these parameters:σ controls mainly the diffusive regime, and λ does the telegraphic regime. As λ = 1/P + s + 1/P − s , it represents the strength of persistency of moving direction. On the other hand, asσλ −1 is the coefficient of the drift term,σ represents total effects of persistent separations and probabilistic transitions. Because persistent motions model the advection by coherent and self-similar flows,σ seems to characterize the average effects of flow structures such as coherent structures on dispersion processes. In order to calculate the value ofσ, we have to estimate "dynamics-determined" part of it, δ. To estimate δ from direct numerical simulations (DNS), we use the PDF of exit-time [6,15]. Exit-time for particle-pairs experienced many turns form an exponential tail (for details see [6]). In our case, the slope is evaluated with Eq. (11) in the limit of infinite time, i.e., Palm's equation, where the slope is related to δ [23]. We can also estimate δ directly from the separation PDF around r ≪ r 2 1/2 . In the 2D-IC case, Goto and Vassilicos estimated α = 2/(g − δ) by fitting the similarity solution of Palm's equation to the PDF [7]. Table I shows the estimated values of δ andσ for 2D-IC and -FC turbulences. These results indicate that the drift term of (11) enhances diffusion in the 2D-IC case but suppresses diffusion in the 2D-FC case; Compression of relative separations in 2D-IC turbulence but expansion of them in 2D-FC turbulence are comparatively restricted, respectively. This remarkable feature is considered to be induced by the difference in flow structures between 2D-IC and -FC turbulences: "cat's eye in a cat's eye" structures [7] and string-like structures [9]. We, therefore, expect thatσ can characterize coherent structures. In Fig. 2, the similarity solution F (ξ) obtained numerically for various values of λ is shown. A cut-off scale corresponding to r max can be seen. Our similarity solu- tion approaches Palm's one as λ gets larger. However, even for large λ, the difference between them in the tail part is so evident that effects of persistent separation are not negligible. In summary, we derived a telegraph equation with scale-dependent coefficients, into which finite separation speed and self-similarity are incorporated, by employing multiple-scale consideration in scale-space. Then we obtained a similarity solution of it. In the diffusive regime, ξ ≪ 1, the similarity solution coincides with that of Richardson's diffusion equation with the drift term, i.e., Palm's equation; in the telegraphic regime, ξ ∼ ξ max , the finiteness of separation speed is realized and the separation PDF is abrupt at r max . 
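The exit-time slope estimate mentioned above amounts to fitting a straight line to the logarithm of the exit-time PDF tail. A minimal sketch of such a fit is given below; the data are synthetic placeholders, and the mapping from the fitted slope to δ via Eq. (11) or Palm's equation is not reproduced here.

```python
import numpy as np

# Hypothetical exit-time samples (stand-in for DNS particle-tracking output)
rng = np.random.default_rng(1)
exit_times = rng.exponential(scale=2.5, size=20000)   # placeholder data with an exponential tail

# Histogram the exit times and fit a straight line to log(PDF) over the tail
counts, edges = np.histogram(exit_times, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
tail = (centers > 3.0) & (counts > 0)                 # restrict the fit to the tail region
slope, intercept = np.polyfit(centers[tail], np.log(counts[tail]), deg=1)

print("fitted tail slope:", slope)   # decay rate of the exponential tail (here roughly -1/2.5)
```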
Therefore, finite separation speed is crucial for description of the tail part of the separation PDF unless relative velocity is δ-correlated in time. The drift term of Eq. (11) is induced by the deviation of the difference of persistency between expansion and compression of relative separation, δ, from "scalingdetermined" value, 2g − d. The direction of the drift is determined by the sign of −σ = (2g−d)−δ. We estimated the value for two 2-D turbulences, inverse cascade (IC) and free convection (FC), and found that positive and negative drift is imposed in the 2D-IC and -FC cases, respectively. We conjecture that this remarkable difference corresponds to the different types of coherent structures in the background flow. We need more precise investigations to obtain an evidence for this. We neglected two significant effects: distribution of separation speed and intermittency. However, intermittency of relative velocity is negligible in the two 2D turbulences dealt with in this letter. Besides, not the distri-bution but the finiteness and self-similarity of separation speed is crucial for the existence of the cut-off of the separation PDF. We are expecting experiments which can resolve the tail part of the separation PDF. This work is supported by the Grant-in-Aid for the 21st Century COE "Center for Diversity and Universality in Physics" from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. Numerical computation in this work was carried out on a NEC SX-5 at the Yukawa Institute Computer Facility.
2019-04-18T13:02:36.962Z
2005-10-21T00:00:00.000
{ "year": 2005, "sha1": "b067a4532ff84994cfb621ee512baca04abde1f5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "25a3ee371d5a454df23b47b38453204e60e75c86", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics" ] }
239632739
pes2o/s2orc
v3-fos-license
Carriage of Streptococcus Pneumoniae In Unvaccinated Toddlers At The Time of Pneumococcal Conjugate Vaccine Implementation Into The National Immunization Program In Poland We investigated pneumococcal carriage among unvaccinated children under ve years of age at the time of conjugate polysaccharide vaccine (PCV) introduction into the national immunization program (NIP). Paired nasopharyngeal swab (NPS) and saliva samples collected between 2016 and 2020 from n=394 children were tested with conventional culture and using qPCR. The carriage rate detected by culture was 25.4% (97 of 394), by qPCR 39.1% (155 of 394), and 40.1% (158 of 394) overall. The risk of carriage was signicantly elevated among day care center attendees, and during autumn/winter months. Among strains cultured, the most common serotypes were: 23A, 6B, 15BC, 10A, 11A. The coverage of PCV10 and PCV13 was 23.2% (23 of 99) and 26.3% (26 of 99), respectively. Application of qPCR lead to detection of 168 serotype carriage events, with serogroups 15, 6, 9 and serotype 23A most commonly detected. Although the highest number of carriers was identied by testing NPS with qPCR, saliva signicantly contributed to the overall number of detected carriers. Co-carriage of multiple serotypes was detected in 25.3% (40 of 158) of carriers. Results of this study represent a baseline for the future surveillance of effects of pneumococcal vaccines in NIP in Poland. Introduction Streptococcus pneumoniae is the common cause of invasive bacterial disease [1,2]. Incidence of invasive pneumococcal disease (IPD) is highest among infants and toddlers and in older adults [1]. Despite available vaccines, in 2015 pneumococcus was responsible globally for approximately 300 000 deaths of children under 5 years of age [2]. IPD is manifested by meningitis, sepsis and/or bacteremic pneumonia. S. pneumoniae also causes milder infections manifested across all ages as sinusitis or non-bacteremic pneumonia, and in children as acute otitis media. The primary virulence factor of pneumococci is the polysaccharide capsule [3] and currently available pneumococcal vaccines are all based on capsular polysaccharides as antigen. While there have been almost 100 capsular types (serotypes) described [4], marketed vaccines target only a subset of ten to twenty-three serotypes (24 in total). Pneumococcal conjugate vaccines (PCV) recommended for children [2], target ten (PCV10) and thirteen (PCV13) serotypes common in paediatric disease prior to PCVs introduction. PCV10 and PCV13 have been commercially available in Poland since 2009 and 2010, respectively. In 2017, PCV10 was introduced into the National Immunization Program (NIP) for all children born after 31st of December 2016, and PCV10 was chosen as the refunded vaccine in all consecutive annual NIP tenders. Children are vaccinated at 2, 4, and 13-15 month of life (2 + 1 schedule). However, it is estimated that a quarter to a third of infants in Poland is vaccinated with PCV13 outside NIP [5]. Three years after PCV10 implementation into NIP, there was a signi cant decrease of IPD cases caused in Poland by PCV10 vaccine serotypes (VT) in children under 2 years of age (57% in years 2014-2016 vs. 31% in years 2017-2019) [6]. There was also a signi cant decline from 35-28% in PCV10-VTs IPDs in persons ≥ 65 years old. Similar herd effects were earlier observed in other countries [7][8][9]. Indirect effects are attributed to PCVs preventing VT strains carriage acquisition in vaccinees [10,11]. 
Since children are the main reservoir and the main transmitters of pneumococci, infants' vaccination with PCVs may have impact on carriage and disease in unvaccinated individuals in the same population [3,12,2]. Consequently, effects of PCVs can be monitored not only via surveillance of disease but also of carriage [13]. Since 1997 the National Reference Centre for Bacterial Meningitis (NRCBM) collects isolates causing IPD from the whole country and also conducts molecular diagnostics of IPD cases. Although surveillance of IPD in Poland is well established, the data on S. pneumoniae and pneumococcal serotypes carriage is limited. Over the past twenty years there have been only two studies published on pneumococcal carriage conducted in Poland [14][15][16][17]. To ll the gap, we investigated the pneumococcal carriage in unvaccinated children under ve years of age. Our goal was to map the carriage of S. pneumoniae serotypes in paediatric population before the nationwide immunization of infants with PCV10 may result in substantial indirect effects. With this, we were aiming to establish a baseline for the future surveillance studies. The gold standard for pneumococcal carriage detection is isolation of live S. pneumoniae from a culture of a deep trans-nasal nasopharyngeal swab (NPS) [18,19]. There is evidence that other samples, oral uids in particular, may substitute for NPS [20,21]. Compared to NPS, saliva is much easier to collect, can be even self-collected, and, except for the youngest children, it does not require a designated collection kit. It has been also reported that application of molecular methods allows higher sensitivity of S. pneumoniae and pneumococcal serotypes carriage detection [22][23][24]. Hence, our second goal was to compare results of carriage detection by testing saliva versus NPS, and using molecular methods versus conventional diagnostic culture in order to develop a procedure tailored to our future studies [25][26][27]. To our knowledge, there are no published studies comparing saliva testing with the gold standard in toddlers. Results Altogether 405 children have been enrolled in the study. Of these, nine children were excluded from further analysis for either being too young (n = 4), too old (n = 4), or being diagnosed with lower respiratory tract infection on enrolment day (n = 1). Two children were enrolled twice in the study and results of the rst sampling were the only included. We report results for paired NPS and saliva collected once from 394 children. Study population Frequency of inclusions declined over the study years ( Table 1) with half of children enrolled by the 14th month (September 2017) of 44-months long project. The number of 12-23-months old children (n = 147 of 394) was signi cantly higher compare with any other age group (Fisher Exact, p < 0.001), and the number of 24-35 months old (n = 102) was signi cantly higher compare to 36-47 months and 48-59 months olds (n = 71 and n = 74, respectively, p < 0.01). (Fig. 1b). With 94 of these 97 children positive in NPS and six in saliva, NPS was far superior to saliva in the sensitivity of detecting pneumococcal carriage by isolation of live S. pneumoniae from a child (McNemar's, p < 0.000001). All 394 saliva samples yielded a colony growth on GENT-agar and all these plates were harvested whereas 68 (17.3%) of 394 NPS cultures were negative for any growth. We considered these 68 NPS to be negative for S. pneumoniae also by molecular method. 
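The paired comparison of NPS versus saliva reported in the Results relies on McNemar's test. A minimal sketch is shown below; the 2 × 2 table is assembled only to be consistent with the reported culture totals (94 NPS-positive, 6 saliva-positive, 97 carriers overall among 394 children) and is purely illustrative, not the study's actual cross-tabulation.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Paired detection per child: rows = NPS (pos/neg), columns = saliva (pos/neg).
table = np.array([[3, 91],     # NPS+ saliva+, NPS+ saliva-
                  [3, 297]])   # NPS- saliva+, NPS- saliva-

result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(f"McNemar statistic = {result.statistic}, p = {result.pvalue:.3g}")
```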
Samples from 155 children (39.1% of 394) have been identi ed as positive for pneumococcus with qPCRs ( Fig. 1b). It included all samples from which S. pneumoniae has been cultured except for three NPS samples positive by culture for non-typeable (NT) pneumococci (Fig. 1a), and a single saliva sample from which serotype 24F strain has been cultured (Fig. 1c). Similar to diagnostic culture, with molecular methods the number of positive results was higher for NPS compared to saliva (121 or 30.7% versus 93 or 23.6%; McNemar's, p < 0.01) (Fig. 1b) Table 1). RR was also elevated during autumn and winter months. Being a sibling was associated with a higher risk of carriage in the study (RR1.52(1.04-2.89), p < 0.05), but only when S. pneumoniae was detected by culture of NPS and was driven primarily by an effect observed in 12-23 month-olds (RR2.40(1.28-4.51), p < 0.001) and not in other age groups (24-35 months old, p = 0.70; 36-47 months old, p = 0.17; 48-59 months old, p = 0.66). After correcting for DCC attendance an impact of the siblings has become insigni cant (RR1.02(0.76-1.38), p = 0.89). RR was higher in children from households without a smoker (Chi-square, p < 0.05), but only when carriage was detected by culture of NPS. There was no effect of age or gender on prevalence of carriage detected by a particular method or overall (Table 1). Serotype carriage Serotypes of strains cultured from children and detected with qPCR in culture-enriched samples in the study are all listed in Table 2. Altogether 99 strains have been isolated from 97 children, as S. pneumoniae strains of two different serotypes have been cultured from NPS collected from two individuals. Ninety-three of these 99 isolates represented 26 different serotypes and the remaining six were classi ed as non-typeable pneumococci. The most common serotype among cultured S. pneumoniae strains were 23A and 6B isolated from 10 children each, followed by 15BC, 10A and 11A isolated from seven children each, and by serotypes 23B and 35F isolated from six children each. Strains of these seven serotypes constituted 53.5% of 99 strains cultured in the study. The coverage of PCV10 and PCV13 was 23.2% (23 of 99) and 26.3% (26 of 99), respectively. Among 158 children classi ed as carriers of S. pneumoniae by any method, the most common serotype/serogroup detected with qPCRs were: serogroup 15 (n = 24 or 15.2% of 158 children), serogroup 6 (n = 22 or 13.9%), serogroup 9 and serotype 23A (n = 17 or 10.8%, each). For NPS, numbers of samples positive for serotype or serogroup by culture correlated strongly with the number of samples positive for the same serotypes by qPCR (Spearman's rho = 0.855, p < 0.0001) (Fig. 2a). There was also correlation between the number of serotype carriers detected with qPCR in NPS and in saliva (rho = 0.667, p < 0.005) (Fig. 2b), as well as between numbers of serotype-carriers detected with qPCR in NPS or in saliva versus overall cultured in the study (rho = 0.771, p < 0.0001) (Fig. 2c). 
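The risk ratios quoted above can, in principle, be reproduced from 2 × 2 counts of carriers and non-carriers by exposure group. The sketch below shows one common way to compute a relative risk with a 95% confidence interval using the log-RR normal approximation; the counts are hypothetical and do not correspond to the study's tables.

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk of carriage for exposed vs unexposed children.

    a: exposed carriers, b: exposed non-carriers,
    c: unexposed carriers, d: unexposed non-carriers.
    Returns (RR, lower 95% CI, upper 95% CI) via the log-RR normal approximation.
    """
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# Hypothetical counts for day-care attendees vs children staying at home
print(relative_risk(a=90, b=110, c=68, d=126))
```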
Although the number of VT strains cultured from children attending DCC (n = 17 of PCV10-VT and n = 20 of PCV13-VT) was signi cantly higher compared with the number of cultured children staying home (n = 6 of PCV10-VT and PCV13-VT, Chi-square, p = 0.034 and p = 0.01, for difference in prevalence of PCV10-VTs and PCV13-VTs, respectively), there were no differences in fractions of PCV10-VT strains among all cultured from DCC attendees compared with children staying home (17 of 71 versus 6 of 28, Chi-square p = 0.79), nor in fractions of PCV13-VT strains (20 of 71 for PCV10 versus 6 of 28, Chi-square p = 0.49). None of other demographic or environmental factors were associated with differences in serotype carriage. Co-carriage of multiple serotypes Presence of two or more serotypes was detected in 40 (25.3%) of 158 children identi ed as carriers by either culture or using piaB and lytA qPCRs ( [20,33]. NPS is the specimen recommended by WHO in pneumococcal carriage detection in children [19] and it has been reported that NPS is a more valuable material than OPS [35]. In our study, the culture of NPS was far superior to culture of saliva and also, 39.6% of carriers (61 of 154) identi ed by qPCR were detected by NPS only. However, with 21.5% (34 of 158) of carriers detected exclusively in saliva, testing oral uids substantially increased the number of carriers detected. In line with this nding, Korona-Glowniak et al. [16,17] also reported that testing OPS along NPS signi cantly increased the number of carriers detected, and that there was no difference between the number of carriers detected by culturing NPS compared with OPS [16,17]. Therefore, the best carriage detection might be achieved by testing from each individual multiple specimens, e.g. NPS, OPS and saliva, or a combination of any two of these. Molecular methods appeared to be superior to conventional culture in detecting co-carriage of multiple serotypes in this study (2.1% in culture versus 26.6% in qPCR). Wyllie et al. [33] and Kandasamy et al. [36] obtained similar levels of multiple serotype carriage using molecular methods. The higher sensitivity of any minority serotype detection in multiserotype carriage allowed for a more detailed analysis of the occurrence of serotypes. Since available qPCR assays did not cover all serotypes, and not always distinguished serotypes within a serogroup, the number of multi-serotype carriers still might be understated. Among our study limitations was a lack of molecular assays that would detect carriage of every circulating serotype. For example, strains of serotypes 24F, 28F, 35A, 35F, and 38 have been cultured from children, yet none of these serotypes were targeted with qPCR. Another limitation was low resolution of certain qPCRs not discriminating between serotypes within a serogroup, with 10 out of 27 qPCR assays targeting more than one serotype. A limitation was also the low sensitivity of conventional diagnostic culture. It concerns both sample types, but due to very rich bacterial growth, including many non-pneumococcal α-hemolytic colonies, it was particularly di cult to isolate live S. pneumoniae from saliva. With large numbers of serotype-carriage events detected exclusively with qPCR, and in the light of reports on non-pneumococcal streptococci expressing the pneumococcal capsular polysaccharides [28, 37], we paid particular attention to the speci city of assays. We addressed it by testing for serotype samples negative for S. 
pneumoniae and excluding the results of assays that generated a positive result (serotype 4 and serotype 5 speci c qPCRs). Nevertheless, we can't exclude that some of the results represent carriage of confounded non-pneumococcal bacteria detected with qPCRs. For example, when applied to oropharyngeal and saliva samples from adults, a diminished speci city of serogroup 9-speci c assay has been reported [26, 28] and serogroup 9 was the clear outlier when culture data was compared with qPCR results in our study (Table 2 and Fig. 2a and 2c). However, since we did not observe positivity in this assay among samples negative for S. pneumoniae, nor was there a difference between the number of NPS and saliva samples positive for this serogroup by qPCR, and we are not aware of any reports on the assay's poor speci city in NPS from children, we consider results for serogroup 9 as reliable. In summary, pneumococcal carriage rate detected with WHO's recommended method in Polish children was lower compared with studies conducted prior to the introduction of commercial PCVs in the country, yet we attribute it to differences in set-ups of studies rather than the effect of PCVs. On the other hand, the decline in prevalence of PCV10-VTs and PCV13-VTs carriage compared with the pre-PCV period suggests strong herd effects of commercial vaccination independent of NIP in Poland. According to the results obtained in our study, NPS was a more valuable material in carriage detection in children and qPCR was the more sensitive method in pneumococcal and pneumococcal serotype carriage detection. Also, information about carriage rate and serotype distribution among unvaccinated Polish children gained during this study can be used as a baseline in future carriage projects. The knowledge concerning the methods used in pneumococcal carriage detection gained during our study can be used for further research. Study design The study was performed between August 2016 and March 2020 among children age 12 to 59 months not vaccinated with any pneumococcal vaccine and attending a 'non-sick-visit' in hospital outpatients' clinics or community healthcare centers in cities of Warsaw and Wroclaw. First, parents (or child's legal guardians) were asked if family would be interested in participation in the study. If they responded positively, they were informed about the study goals and procedures and asked to give written informed consent for the child's participation. Next, parents were asked to ll-in the questionnaire and provide information on the child's age, gender, environment (number of siblings, day-care attendance, presence of smoker in child's household), and clinical information (pneumococcal vaccination, reason of doctor's o ce visit, occurrence of chronic diseases, symptoms of lower respiratory tract infections, antibiotic therapy in the past three months). The questionnaire was reviewed on site by the study personnel to exclude children that have been vaccinated with any pneumococcal vaccine, were treated within last 4 weeks with any antibiotic, have any immunode ciency or symptom of lower respiratory tract infections. Finally, a saliva sample and nasopharyngeal swab were collected from each child by the study medical personnel. Sheep blood (Graso, Poland) with 5 µg/ml gentamycin (Sigma-Aldrich, USA) (GENT-agar) and incubated for 18-24h in 35°C, 5% CO 2 as previously described [38,20]. Once pneumococcus-like colonies were re-plated, all remaining colony growth was harvested [20]. 
These harvests represented samples culture-enriched (CE) for S. pneumoniae [38]. Replated isolates were identi ed as S. pneumoniae based on susceptibility to optochin (BioMerièux, France) and bile solubility (Becton Dickinson, USA) [39]. Correlations between results of pneumococcal serotypes detected by conventional culture and molecular method (qPCR). Panels (a) and (c) depict correlations between number of cultured (X-axis) and number of samples positive in qPCR among serotypes or serogroups targeted by qPCR assays. Panel (a) depicts results exclusively for NPS samples. Panel (c) depicts results for all strains cultured from NPS or saliva versus detected in NPS or saliva in qPCR assays. Panel (b) shows correlation between NPS (X-axis) and saliva (Y-axis) for serotypes detected exclusively with qPCR assays. Serotypes not detected using a given approach have been assigned a value of 0.5. The rho, and p values have been calculated with Spearman's test.
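The correlation analysis in the figure caption above can be reproduced with a rank correlation on per-serotype counts, with undetected serotypes assigned a value of 0.5 as described. The sketch below uses SciPy's Spearman test on illustrative placeholder counts, not the study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Per-serotype counts of positive samples by method; values are illustrative placeholders.
# Serotypes not detected by a given method are assigned 0.5, as in the figure caption.
culture_counts = np.array([10, 10, 7, 7, 7, 6, 6, 2, 0.5, 0.5])
qpcr_counts    = np.array([17, 22, 24, 9, 8, 7, 5, 4, 3, 0.5])

rho, p = spearmanr(culture_counts, qpcr_counts)
print(f"Spearman's rho = {rho:.3f}, p = {p:.3g}")
```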
2021-09-25T15:38:58.348Z
2021-08-27T00:00:00.000
{ "year": 2021, "sha1": "98d10ce39fe16376a7bf666b73e4c8b7cbb9cf21", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-810749/v1.pdf?c=1637263663000", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "cef12148eeef42d9ce4f21910ece22a2e5f2d16d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
249252833
pes2o/s2orc
v3-fos-license
A Fast Multi-Scale Generative Adversarial Network for Image Compressed Sensing Recently, deep neural network-based image compressed sensing methods have achieved impressive success in reconstruction quality. However, these methods (1) have limitations in sampling pattern and (2) usually have the disadvantage of high computational complexity. To this end, a fast multi-scale generative adversarial network (FMSGAN) is implemented in this paper. Specifically, (1) an effective multi-scale sampling structure is proposed. It contains four different kernels with varying sizes so that decompose, and sample images effectively, which is capable of capturing different levels of spatial features at multiple scales. (2) An efficient lightweight multi-scale residual structure for deep image reconstruction is proposed to balance receptive field size and computational complexity. The key idea is to apply smaller convolution kernel sizes in the multi-scale residual structure to reduce the number of operations while maintaining the receptive field. Meanwhile, the channel attention structure is employed for enriching useful information. Moreover, perceptual loss is combined with MSE loss and adversarial loss as the optimization function to recover a finer image. Numerous experiments show that our FMSGAN achieves state-of-the-art image reconstruction quality with low computational complexity. Introduction Compressed sensing (CS) is an emerging information acquisition technique, which overcomes the Nyquist-Shannon acquisition theorem's limitations and implements signal sampling and compressing simultaneously [1]. The theory implies that when a signal x ∈ R n is compressible or sparse in a certain domain Ψ, it can compressed and measured by the measurement matrix Φ, and inferred accurately from y = Φx, where Φ ∈ R m×n with m n. The m/n is defined as the sampling rate. Due to the captivating sampling performance of CS, it is attractive for numerous applications, including video CS [2], singlepixel camera [3], snapshot compressed imaging [4] and magnetic resonance imaging [5]. The study of CS mainly focuses on the sampling pattern and recovery approaches at present. In terms of sampling, lots of approaches [6][7][8][9] have been developed and most of them perform well. Measuring images in the multi-layer transform domain is dubbed multiscale sampling, whereas measuring images in the original domain is dubbed single-scale sampling. With the intelligent utility of prior knowledge (structure, statistical dependencies, etc.), multi-scale sampling achieves better reconstruction quality than single-scale sampling but has received less attention [6,7]. Most scholars focus on single-scale sampling and have designed various measurement matrices [8,9]. Usually the well-designed or learned singlescale measurement matrix can acquire well-accepted reconstruction quality. However, these methods [8,9] suffer from aliasing artifacts for more attention to low-frequency information. designed or learned single-scale measurement matrix can acquire well-accepted reconstruction quality. However, these methods [8,9] suffer from aliasing artifacts for more attention to low-frequency information. Additionally, measuring and reconstruction are usually implemented separately, thus their performance is limited. The recovery of CS is treated as an inverse problem. For this, some classical algorithms have been proposed, including greedy algorithms [10,11], convex optimization algorithms [12,13] and iterative thresholding algorithms [14]. 
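For readers unfamiliar with the classical CS pipeline summarized above, the short sketch below measures a synthetic sparse 1-D signal with a random Gaussian matrix and recovers it with orthogonal matching pursuit. It only illustrates the y = Φx setting and a greedy solver on a toy signal; it is not the image-domain, learned sampling approach proposed in this paper, and the signal size, measurement count and sparsity level are arbitrary.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                 # signal length, number of measurements, sparsity

# A k-sparse signal x and a random Gaussian measurement matrix Phi (sampling rate m/n = 0.25)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x                          # compressed measurements

# Greedy recovery with orthogonal matching pursuit
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)
x_hat = omp.coef_

print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```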
Greedy algorithms are easily affected by the local optimal solution, so recovery quality is limited. Convex algorithms and iterative thresholding algorithms usually implement multiple iterations for better recovery quality and are thus more time consuming. Therefore, while many works have been devoted to designing a fast method, reconstruction quality is lost [15,16]. Recently, deep neural networks have shown super performance in a variety of image processing tasks [17][18][19]. Some representative network structures, including convolutional neural networks (CNN) and generative adversarial networks (GAN) are also employed to image CS reconstruction. With the powerful learning ability of deep learning, these data-driven neural network models for image CS (DICS) have impressive reconstruction quality by directly learning the mapping from the compressed measurements to the raw image. We also notice that due to the alternating training of generator and discriminator, the image reconstructed by the method based on GAN is more authentic than that based on CNN [20]. DICS is obviously superior to classical methods in image recovery quality and speed. However, similar to the evolution of classical methods, recent DICS often exchange more time resources for less improvement in image reconstruction quality, as shown in Figure 1. This is mainly because DICS often stacks numerous of the same blocks to obtain highresolution images and each block cannot help recover images effectively. For example, in [21], the author proposes a serial structure based on CNN. Because the structure is relatively simple, the quality of image reconstruction can be further improved. In [20], the author develops a multi-scale residual block. The block can capture multi-scale image features, but it needs more time to process images and lacks the fusion of each channel feature. Therefore, there is an urgent need for efficient DICS to promote the application of image CS in high real-time scenes. To solve the above problems, a fast multi-scale generative adversarial network (FMS-GAN) is proposed. Specifically, there are two improvements in the FMSGAN: (1) inspired by [12], we propose a novel multi-scale sampling structure (MSS), which involves four convolution layers with different kernel sizes and a concatenated layer. The former three parallel convolution layers decompose images at each scale independently to obtain features with multiple resolutions. The later convolution layer is applied for sampling concatenated features. Our MSS can capture different levels of spatial features at multiple To solve the above problems, a fast multi-scale generative adversarial network (FMS-GAN) is proposed. Specifically, there are two improvements in the FMSGAN: (1) inspired by [12], we propose a novel multi-scale sampling structure (MSS), which involves four convolution layers with different kernel sizes and a concatenated layer. The former three parallel convolution layers decompose images at each scale independently to obtain features with multiple resolutions. The later convolution layer is applied for sampling concatenated features. Our MSS can capture different levels of spatial features at multiple scales and help improve reconstruction quality. (2) We propose a lightweight multi-scale residual block (LMSRB), in which only the 3 × 3 convolution layer and the concatenated layer are used. 
There are three bypasses in the LMSRB and the corresponding structures: one 3 × 3 convolution layer, two serial 3 × 3 convolution layers and three serial 3 × 3 convolu-Entropy 2022, 24, 775 3 of 16 tion layers, respectively. The serial convolution layers with a small kernel size have the same receptive field as a convolution layer with a large kernel size. So images of features at different scales can be learned by the LMSRB, thus enriching feature representation. Furthermore, a channel attention structure is applied to give different weights for every LMSRB output feature map to better enhance useful information. Because of the LMSRB and the channel attention structure, the FMSGAN is capable of high-resolution images and low computational complexity. Additionally, we introduce perceptual loss to refine the loss function. To verify the performance of our FMSGAN, we perform extensive experiments on three datasets, and the results show the merits of our model. The contributions are summarized as follows: (1) A fast multi-scale generative adversarial network is proposed for image CS. The generator and discriminator are alternate training to ensure the reconstructed images are more realistic. (2) A multi-scale sampling structure is proposed, which improves image reconstruction quality through joint training with the reconstruction network. (3) A novel lightweight multi-scale residual block (LMSRB) is proposed, which is combined with the channel attention structure to better tradeoff between reconstruction performance and efficiency. Due to the high efficiency of the LMSRB, the image is reconstructed at high speed. (4) Our FMSGAN achieves state-of-the-art performance on three datasets. Related Work Recently, compressed sensing has became a fascinating research area. It has a wide range of applications, especially in wireless sensor networks (WSN) and internet of things (IoT). In [22], a compressed sensing-based scheduling scheme was developed to conserve energy in WSN and IoT. The scheme firstly addresses the question of "how many sensor nodes should be activated to sense and transmit", then forces each sensor node to transmit only m n measurements to its next-hop node, for extraordinary performance in energy conservation. In [23], a compressed sensing framework is proposed for WSN and IoT. The authors demonstrate that the framework can be utilized to recover the compressible information data into a variety of information systems and will contribute to saving energy and communication resources. For reconstructing a diffusion field from spatiotemporal measurements, Mohammad et al. [24] exploit the intrinsic property of diffusive fields as side information and propose a diffusive compressed sensing method, which produces estimates of higher accuracy than that of classic CS. In [25], the authors consider powerhungry sensors, introduce compressed sensing and distributed compressed sensing to WSN and provide great energy efficiency. Hoover et al. [26] merge the CS process with existing methods of collecting spectral images and expand the stacked-color image sensor to use more colors or a wider range of wavelengths, which obtain a higher spectral resolution. There are more image CS works on the sampling pattern and recovery method. In the sampling process, researchers find that multi-scale sampling can extract different levels of image feature information [7,27]. By enriching the multi-level contents of the model, multi-scale sampling can enhance both sampling quality and recovery quality. 
As a simple implementation of multi-scale sampling, radial Fourier subsampling [28] is usually applied in bioimaging for its conversion characteristics between spatial and frequency domains but is not verified by more images. Flowers first decomposes images in the wavelet domain, then implements adaptive sampling of each wavelet sub-band independently and finally smooths the measurements to effectively obtain multi-scale information [6]. The W-DCS [27] applies wavelet transform for multi-scale compressed sensing. It is able to extract the measurements in multiple decomposed scales. For Kronecker CS, a multiscale sampling method is developed, which achieves high reconstruction quality and high computational complexity [7]. Despite these wavelet-based methods [6,7,27] improving image reconstruction quality, they require that the input image size meet the integer multiple of 2. More cases of multi-scale sampling are in [29][30][31]. In LAPRAN [29], a series of measurements at different resolutions are defined for a given sampling rate. Each group of measurements is fed into the corresponding reconstruction stage, thus multi-scale sampling is implemented. However, a heuristic measurement assignment is commanded for each rate. As a scalable network, SCSNet [30] creates multiple levels of reconstruction quality through a variety of stages of reconstruction. Its primary reconstruction module supports more low-frequency contents. However, SCSNet prefers to solve the adaptation sub-rate issue rather than devise a multi-scale sampling method. In MS-CSNet [31], a series of measurements are defined. The authors train the network with the obtained measurements corresponding to the smaller sub-rate and reuse them at the larger sub-rate, in which the low-frequency information is shared in the high-level recovery stage. However, MS-CSNet does not display the subjective reconstruction of images. Therefore, various rigorous studies on multi-scale sampling are required. In the recovery process, image CS infers the raw image from given measurements. For this, conventional CS approaches [10,[32][33][34] mainly depend on sparsity priors to iteratively optimize the sparsity-regularized problem. Examples of such approaches include orthogonal matching pursuit (OMP) [10], basis pursuit (BP) [32], the iterative shrinkage thresholding algorithm (ISTA) [33] and the alternating direction method of multipliers (ADMM) [34]. To further enhance recovery performance, researchers established more detailed structures based on wavelet tree sparsity [35], non-local information [36], minimal total variation [37] and simple representations in adaptive bases [38]. However, these conventional CS approaches are usually afflicted with high computational complexity caused by hundreds of iterations. Deep unfolding approaches usually integrate the deep networks with the iterative optimizers for image reconstruction. Metzler et al. [39] were the first to propose a learned DIT (LDIT), which combines the iterative DIT algorithm with a denoising CNN. Zhang et al. implement a set of deep unfolded versions of the ISTA algorithm, named ISTA-Net+ [9], OPINE-Net [40] and ISTA-Net++ [41], respectively. The difference is that ISTA-Net applies random measurement and recovery of the image block by block, the OPINE-Net designs a learning matrix and trains it jointly with the whole network and the ISTA-Net++ achieves multi-rate sampling and recovery in one model by a dynamic unfolding method. Moreover, based on the AMP algorithm, Zhang et al. 
Moreover, based on the AMP algorithm, Zhang et al. [42] propose AMP-Net to recover images with high quality and speed. The main limitation of such unfolding approaches is that they usually suffer from poor image recovery quality at a low sampling rate because they adopt a plain network structure. Deep straightforward approaches can directly learn the mapping between measurements and original images free from any constraints. Mousavi et al. [43] were the first to adopt a stacked denoising autoencoder (SDA) for image reconstruction, although the fully connected network (FCN) it uses results in numerous parameters. ReconNet [44] is the first approach to reconstruct the image from measurements via a CNN, with better recovery quality and fewer parameters. Subsequently, several CNN-based recovery approaches [21,45] were proposed. In MR-CSGAN [20], the authors adopt a generative adversarial network to recover images, whose generator and discriminator are trained alternately so that the recovered image is more realistic. Recently, a novel block-based image CS network (BCSnet) [46] was proposed. By exploiting image intercorrelation, BCSnet achieves impressive performance. However, deep straightforward approaches often obtain limited performance improvement at the cost of many computational resources and are thus not suitable for highly real-time applications. Methods In this part, we display the overall architecture of the FMSGAN, as shown in Figure 2. The raw image is sampled by the multi-scale sampling structure and then recovered by the generator. Both the raw image and the corresponding recovered image are fed into the discriminator, in which the recovered image is distinguished from the raw image.
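A rough, self-contained sketch of the alternating adversarial training implied by this architecture is shown below; the tiny stand-in modules are placeholders for the real sampling network, generator and discriminator, and the loss terms are simplified for illustration.

```python
import torch
import torch.nn as nn

# Tiny stand-ins so the sketch runs end to end; the real FMSGAN modules are far larger.
sampler = nn.Conv2d(1, 102, kernel_size=32, stride=32, bias=False)       # sampling
generator = nn.Sequential(nn.ConvTranspose2d(102, 1, 32, stride=32))     # recovery
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(list(sampler.parameters()) + list(generator.parameters()), lr=4e-4)
d_opt = torch.optim.SGD(discriminator.parameters(), lr=4e-4)
bce, mse = nn.BCELoss(), nn.MSELoss()

for _ in range(2):                                   # a couple of demo iterations
    real = torch.rand(8, 1, 64, 64)                  # stand-in image blocks
    # Discriminator step: distinguish real blocks from reconstructed ones.
    with torch.no_grad():
        fake = generator(sampler(real))
    pred_real = discriminator(real)
    pred_fake = discriminator(fake)
    d_loss = bce(pred_real, torch.ones_like(pred_real)) + \
             bce(pred_fake, torch.zeros_like(pred_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    # Generator (and sampling network) step: reconstruct well and fool the discriminator.
    fake = generator(sampler(real))
    g_loss = mse(fake, real) - torch.log(discriminator(fake) + 1e-8).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```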
Multi-Scale Sampling Structure In the multi-scale sampling structure, the raw image is divided into multiple non-overlapping blocks of size l × B_1 × B_2, where l denotes the number of image channels. To obtain measurements, a set of convolutions is utilized to realize the multi-scale decomposition and sampling of each image block. The first-level decomposition can be formulated as x_1 = W_1^{l_1} * x_0, where * is the convolution operation, W_1^{l_1} denotes the different convolution kernels in the first-level decomposition, l_1 ∈ {1, 2, ..., c_1} is the identifier of the convolution kernels, x_0 denotes the image block with a size of l × B_1 × B_2 and x_1 denotes the output of the first-level decomposition. If the image is decomposed n times, the measurements are expressed as x_n = W_n^{l_n} * x_{n-1} = W_n^{l_n} * (W_{n-1}^{l_{n-1}} * ... * (W_1^{l_1} * x_0)), where x_n ∈ R^{l_n × m × b_1 × b_2}, l_n is the number of convolution kernels at the n-th-level decomposition, m is the number of output channels of every convolution and b_1 × b_2 denotes the size of the output features. For a given sampling rate r, the total number of measurement elements l_n × m × b_1 × b_2 corresponds to r × l × B_1 × B_2. The multi-scale sampling structure is shown in Figure 3. Firstly, three parallel convolutions (1 × 1, 3 × 3 and 5 × 5) are employed to decompose the image and output features. Convolution kernels with different sizes have different receptive fields, so different levels of feature information can be obtained. Then, the features are synthesized by a concatenation layer. Finally, a convolution layer with kernel size 32 × 32 and stride 32 × 32 is applied to output the measurements. In particular, all convolutions have no bias and no activation. In the experiments, n is set to 2 for fast sampling. Both B_1 and B_2 are set to 64 in the training phase. The test image does not have to be segmented as long as its size N_1 × N_2 satisfies N_1 × N_2 = 32k_1 × 32k_2, where k_1 and k_2 are positive integers; otherwise, overlapping segmentation or image padding is applied.
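The following PyTorch sketch mirrors the sampling pipeline just described (three parallel convolutions, concatenation, then a 32 × 32 stride-32 measurement convolution without bias or activation); the intermediate channel width and the measurement count are assumptions for illustration, not the paper's exact settings.

```python
import torch
import torch.nn as nn

class MultiScaleSampling(nn.Module):
    def __init__(self, in_ch=1, mid_ch=16, n_measure=102):   # n_measure ~ r * 32 * 32
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, mid_ch, 1, padding=0, bias=False)
        self.branch3 = nn.Conv2d(in_ch, mid_ch, 3, padding=1, bias=False)
        self.branch5 = nn.Conv2d(in_ch, mid_ch, 5, padding=2, bias=False)
        # Final measurement convolution: kernel 32, stride 32, no bias, no activation.
        self.measure = nn.Conv2d(3 * mid_ch, n_measure, kernel_size=32, stride=32, bias=False)

    def forward(self, x):
        feats = torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)
        return self.measure(feats)

block = torch.randn(8, 1, 64, 64)            # training blocks of size 64 x 64
y = MultiScaleSampling()(block)
print(y.shape)                                # torch.Size([8, 102, 2, 2])
```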
Generator Structure The generator transforms the measurements into a high-resolution image through two processes: initial recovery and deep recovery. The architecture of the generator is shown in Figure 4. The initial recovery uses a deconvolution layer with kernel size 32 × 32 to recover images from the corresponding measurements. In the deep recovery process, we first apply a convolution with 64 channels to increase the number of feature maps. Then, nine LMSRBs combined with channel attention modules are adopted to deeply recover the images in a single connection. The structure of the LMSRB is shown in the marked part of Figure 4. The input features are processed by the LMSRB, in which information from different bypasses is shared to capture image features at multiple scales. There are two identical pyramid-like convolution structures in the LMSRB, and each structure contains three parallel convolution groups, corresponding to one 3 × 3 convolution, two serial 3 × 3 convolutions and three serial 3 × 3 convolutions, respectively. The pyramid-like convolution can provide multi-scale feature representation, and the serial 3 × 3 convolutions are able to decrease the number of operations while maintaining the receptive field. At the same time, the channel attention module is employed to learn the contribution of each LMSRB output channel and assign a different weight coefficient to each channel, so as to strengthen the important features. Moreover, a residual connection is used for the stability of network training. Subsequently, a concatenation layer connected to every channel attention module is adopted to enrich the feature representation. A 3 × 3 convolution layer is employed to decrease the number of feature maps and output the deeply recovered image. Finally, the initial recovered image and the deep recovered image are added to acquire the reconstructed image.
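A compact PyTorch sketch of one LMSRB with channel attention, as described above, might look as follows; the fusion convolutions, activation choices and attention reduction ratio are assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

def serial3x3(ch, depth):
    # A bypass of `depth` serial 3x3 convolutions keeping channel count and size.
    layers = []
    for _ in range(depth):
        layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class ChannelAttention(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)                 # re-weight every feature map

class LMSRB(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.stage1 = nn.ModuleList([serial3x3(ch, d) for d in (1, 2, 3)])
        self.fuse1 = nn.Conv2d(3 * ch, ch, 1)
        self.stage2 = nn.ModuleList([serial3x3(ch, d) for d in (1, 2, 3)])
        self.fuse2 = nn.Conv2d(3 * ch, ch, 1)
        self.attn = ChannelAttention(ch)

    def forward(self, x):
        h = self.fuse1(torch.cat([b(x) for b in self.stage1], dim=1))
        h = self.fuse2(torch.cat([b(h) for b in self.stage2], dim=1))
        return x + self.attn(h)               # residual connection for stable training

x = torch.randn(1, 64, 64, 64)
print(LMSRB()(x).shape)                       # torch.Size([1, 64, 64, 64])
```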
Discriminator Structure The design of the discriminator refers to [20]; it contains convolution layers, batch normalization layers, Leaky ReLU functions and a sigmoid function, as shown in Figure 5. In particular, a convolution layer is added behind each batch normalization layer to enhance the discrimination ability of the discriminator by increasing the number of weight parameters. Note that there are some similar operations in the identification process; for simplicity, the single operation of dimension decrease and channel increase for the feature map is named DDCI. The recovered image generated by the generator and the corresponding original image are fed into the discriminator, and then the probability of sample classification is obtained.
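As a rough illustration of the discriminator style described above (convolution, batch normalization, an extra convolution behind each batch-normalization layer, LeakyReLU, and a final sigmoid), a hypothetical DDCI-style block could be written as below; the layer counts and channel widths are not taken from the paper.

```python
import torch
import torch.nn as nn

def ddci_block(in_ch, out_ch):
    # One "DDCI" step: halve the spatial dimensions and increase the channels.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1),   # extra conv after BN
        nn.LeakyReLU(0.2, inplace=True),
    )

class Discriminator(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            ddci_block(32, 64), ddci_block(64, 128), ddci_block(128, 256),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, x):
        return self.classifier(self.features(x))   # probability that x is a real image

print(Discriminator()(torch.randn(2, 1, 64, 64)).shape)   # torch.Size([2, 1])
```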
Cost Function Inspired by [47], the MSE loss, perceptual loss and adversarial loss are combined as the cost function of our FMSGAN. The MSE loss often converges quickly, but it is hard to reconstruct some lost, uncertain high-frequency details, leading to poor visual quality. Recently, perceptual loss has outperformed MSE loss in some computer vision tasks. It is capable of preserving structure and details, so it was introduced into our model. The pixel-level MSE loss is formulated as l_MSE = (1/(H·V)) Σ_{i=1}^{H} Σ_{j=1}^{V} (I_{i,j} − G(I)_{i,j})^2, where G(·) represents the generator, G(I)_{i,j} denotes the image created by the generator, I_{i,j} is the input image, and H and V represent the number of pixels in the horizontal and vertical directions of the input image, respectively. The VGG19 loss is implemented to obtain high-level perceptual information, which is expressed as l_VGG = (1/(H_{x,y}·V_{x,y})) Σ_{i=1}^{H_{x,y}} Σ_{j=1}^{V_{x,y}} (φ_{x,y}(I)_{i,j} − φ_{x,y}(G(I))_{i,j})^2, where φ_{x,y}(·) represents the feature map captured by the y-th convolution layer before the x-th max-pooling layer in the VGG19 network, and H_{x,y} and V_{x,y} denote the size of the respective feature maps in the VGG19 network. Here, φ with x = 5, y = 4 of the VGG19 network is chosen as the final output layer for the feature map. Through minimizing the adversarial loss to optimize the parameters, more indistinguishable images created by the generator are applied to trick the discriminator, which also promotes the performance of the discriminator. The adversarial loss is l_adv = Σ_{m=1}^{M} −log D(G(I)), where D(·) represents the discriminator, D(G(I)) denotes the probability that the recovered image G(I) is real and M represents the batch size during each training iteration. The final cost function is defined as l = l_MSE + k·l_VGG + v·l_adv, where k and v are the weighting coefficients of the perceptual and adversarial losses.
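The combined cost function can be sketched in PyTorch as follows; the VGG19 layer choice approximates the φ_{5,4} feature described above, the perceptual weight k = 0.006 follows the later coefficient ablation, and the adversarial weight v is an assumed placeholder. Three-channel, ImageNet-normalized inputs are assumed for the VGG features.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

mse = nn.MSELoss()
# Feature extractor up to the conv5_4/relu5_4 block of VGG19 (frozen).
vgg_features = vgg19(pretrained=True).features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def generator_loss(fake, real, d_fake, k=0.006, v=1e-3):
    """fake/real: 3-channel image batches; d_fake: discriminator output on fake."""
    pixel = mse(fake, real)                                    # pixel-level MSE loss
    perceptual = mse(vgg_features(fake), vgg_features(real))   # VGG19 feature loss
    adversarial = -torch.log(d_fake + 1e-8).mean()             # fool the discriminator
    return pixel + k * perceptual + v * adversarial
```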
Experiments In this section, we first conduct a comparison with some state-of-the-art approaches to verify the performance of the proposed model. Then, the effectiveness of the MSS and the LMSRB is verified by ablation experiments. The discussion and interpretation of the experimental results are also provided. Datasets All experiments are conducted on five datasets: DIV2K [20], Set5 [45], Set11 [42], Set14 and BSDS100 [21]. DIV2K is a high-resolution dataset that contains 800 color images and is our training dataset. Random clipping, translation and rotation are utilized to expand the training data. In particular, all images in DIV2K are cropped into sub-images with a size of 64 × 64. Set11 is employed for validation. Additionally, we use Set5, Set14 and BSDS100 as the test datasets. Implementation Details All experiments are performed using the PyTorch 1.6 platform with one GeForce RTX 1080Ti GPU. Adam is used as the generator's optimizer, and the initial learning rate is set to 0.0004. After every 180 iterations, the learning rate is divided by 2. SGD is used as the discriminator's optimizer, and its learning rate is set to 0.0004. Assigning different optimizers and learning-rate update strategies to the generator and the discriminator is beneficial for the stable training of the model. We use four sampling rates to sample images (1%, 4%, 10% and 25%) and choose 10, 41, 102 and 256 as the corresponding numbers of measurement convolution output channels. We choose the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) as the evaluation indices for recovery quality. Comparison to Other State-of-the-Art Methods We compare our FMSGAN with some state-of-the-art methods, i.e., ReconNet [44], ISTA-Net+ [9], SCSNet [30], CSNet* [21], OPINE-Net [40], ISTA-Net++ [41], AMP-Net [42] and MR-CSGAN [20], on three datasets, namely Set5, Set14 and BSDS100, to verify its recovery quality and running speed. The recovery quality comparisons are shown in Tables 1-3, and the running time comparisons are shown in Table 4. In particular, we introduce the mean and standard deviation (SD) to compare reconstruction times in a statistical manner. The PSNR and SSIM results show that our FMSGAN performs better. On the Set5 dataset, the FMSGAN almost achieves the highest PSNR and SSIM results. Specifically, at the four sampling rates, i.e., 1%, 4%, 10% and 25%, the proposed model achieves clear gains (Tables 1-3). Compared with OPINE-Net, our model achieves gains of 2.06, 1.52, 1.37 and 1.28 dB in PSNR and 0.0527, 0.0337, 0.0242 and 0.0143 in SSIM at the four sampling rates. We find that AMP-Net has a higher PSNR in image recovery at a sampling rate of 25%, which indicates that the performance of the FMSGAN needs to be further improved. We also notice that our FMSGAN and the suboptimal method MR-CSGAN demonstrate similar reconstruction quality on the BSDS100 dataset. This is because BSDS100 is a high-resolution dataset, which requires a more complex mapping for image CS recovery. Due to the application of 3 × 3 convolutions, our FMSGAN requires less computation; therefore, its learning ability decreases slightly. We assume that this slight decrease in recovery quality is negligible compared to the decrease in processing time. Later, we analyze the computational complexity of the eight methods. For further comparison, we calculate the standard deviation (SD) of the PSNR and SSIM of each model at the four sampling rates on the three datasets, as shown in Tables 1-3. Compared with deep straightforward approaches, deep unfolding approaches, i.e., ISTA-Net+, ISTA-Net++, OPINE-Net and AMP-Net, achieve higher values in both PSNR SD and SSIM SD. With a high SD, one model can have a rich ability to deal with the measurements corresponding to different sampling rates. Benefiting from iterative thresholding algorithms, deep unfolding approaches usually have outstanding performance. The PSNR SD and SSIM SD of our model on the three datasets are 4.9791, 3.9615 and 3.1427 and 0.1144, 0.1313 and 0.1340, respectively, and are among the highest of the deep straightforward approaches. This means that our model can maintain better recovery performance at a low sampling rate while achieving a high SD, which remedies the deficiency of deep straightforward approaches.
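The measurement-channel numbers quoted in the implementation details above are consistent with rounding r × 32 × 32 for a single-channel 32 × 32 measurement block, as this small arithmetic check shows:

```python
# Measurement channels per 32x32 block (l = 1 channel, 1024 pixels) at rate r.
for rate in (0.01, 0.04, 0.10, 0.25):
    print(f"r = {rate:.2f}  ->  round(r * 32 * 32) = {round(rate * 32 * 32)}")
# r = 0.01 -> 10, r = 0.04 -> 41, r = 0.10 -> 102, r = 0.25 -> 256,
# matching the 10, 41, 102 and 256 output channels used in the experiments.
```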
Subjective reconstruction comparisons are shown in Figures 6-9, from which we can find that, compared with other methods, the FMSGAN is better able to retain more details and sharper edges. In Tables 1-3, the optimal and suboptimal results are emphasized in bold and underlined, respectively. Figure 8 compares the visual recovery on man from Set14 at a sampling rate of 10%, and Figure 9 compares the visual recovery on building from BSDS100 at a sampling rate of 25%. Table 4 presents the reconstruction time comparisons between different CS approaches for recovering a 256 × 256 image from the Set11 dataset at a sampling rate of 10%. We test ISTA-Net+, OPINE-Net, ISTA-Net++ and MR-CSGAN on our platform (one GeForce RTX 1080Ti GPU) with their original codes, and the results of SCSNet, ReconNet and CSNet are taken from [20]. In Table 4, we can see that the time to reconstruct a 256 × 256 image by our FMSGAN is only 0.0406 s, less than that of SCSNet, ISTA-Net++ and MR-CSGAN and nearly 1/3 of that of MR-CSGAN. The comparison results display that our FMSGAN is capable of fast image CS reconstruction. The MSS In this section, we evaluate the performance of the MSS. For a fair comparison, only the last convolution layer in the MSS is kept. Table 5 shows the PSNR comparison between w/ MSS and w/o MSS tested on the Set14 dataset at four different sampling rates. It is easy to see that the MSS greatly facilitates recovery performance across all sampling rates, with the most obvious improvement up to 0.37 dB, which convincingly demonstrates the effectiveness of the MSS. The LMSRB To evaluate the performance of the LMSRB, we replace the LMSRB with the MSRB of [20] in the FMSGAN and carry out experiments. Reconstruction quality comparisons and running speed comparisons are shown in Figure 10 and Table 6, respectively. Figure 10 shows the PSNR of the two models tested on the Set5, Set14 and BSDS100 datasets at different sampling rates. We observe that our LMSRB acquires a higher PSNR at sampling rates of 1%, 4%, 10% and 50%, the model with an MSRB has a higher PSNR at a sampling rate of 25%, and there is only a slight difference between the two models in image recovery quality. Table 6 shows the running time of the two models tested on Set11. We find that the time to recover a 256 × 256 image by the FMSGAN is always evidently less than that of the model with an MSRB; this is because the number of feature maps in the LMSRB is the same as that of the MSRB, whereas the number of operations in the LMSRB is significantly less than that of the MSRB. The comparison results show the better performance of the LMSRB.
Effect of cost function For further analysis of the proposed model, various settings of the cost function are considered, and the corresponding recovery performance is shown in Table 7. In particular, we maintain the pixel loss as the main part of the cost function. From Table 7, one can clearly observe that setting (d) achieves the best reconstruction performance. Comparing setting (a) and setting (c), we notice that the perceptual loss can promote the final recovery results. It seems that the adversarial loss contributes little to recovery performance if only PSNR is considered. Therefore, we display the subjective reconstruction results in Figure 11, which compares the visual recovery on flowers from Set14 at a sampling rate of 10% under setting (c) and setting (d). One can see that the adversarial loss is capable of supporting better visual results and helps keep context details.
Furthermore, we also explore the impact of different coefficient combinations of the cost function on reconstruction performance, as shown in Table 8. It can be seen that the coefficient of the perceptual loss has an obvious influence on the final reconstruction: whether k is greater or less than 0.006, the reconstruction performance becomes worse. This means that the perceptual loss should be well coordinated with the whole cost function. For the adversarial loss, we verify its effect through the visual results provided in Figure 12, which compares the visual recovery on baby from Set5 at a sampling rate of 10% under setting (e) and setting (f). From Figure 12, we find that the influence of v on the final reconstruction is nearly negligible.
Discussion As far as we know, many DICS methods have been proposed. Most of them are committed to improving reconstruction quality rather than reducing the running time of image reconstruction. We believe that reducing the time complexity of reconstruction is also of great significance, especially in some real-time scenarios, such as automatic driving. We introduce a GAN to implement image CS. From Tables 1-3, we can see that the proposed FMSGAN almost always achieves the highest PSNR and SSIM values on the three datasets, an exceptional reconstruction effect. This is due to the advantage of multi-scale information. In the FMSGAN, two main structures, the MSS and the LMSRB, are proposed. In the sampling stage, the MSS extracts multi-scale information through convolution kernels of different sizes. Convolutions with different kernel sizes have different receptive fields, which can capture more correlation information between pixels. In the recovery stage, the LMSRB extracts and synthesizes multi-scale information through convolution kernels of multiple branches and different depths. After the LMSRB, the image has rich feature representations, but some of them are redundant. Therefore, we introduce the channel attention module to filter invalid features and enhance useful features, so as to improve reconstruction quality. We also notice that our FMSGAN achieves a lower PSNR and a higher SSIM compared with AMP-Net at a sampling rate of 25%, which is mainly because AMP-Net employs an added deblocking module. In the meantime, only the mean square error loss is applied in AMP-Net's loss function, and the mean square error loss tends to optimize pixel-level errors, so AMP-Net acquires a higher PSNR instead of a balance between PSNR and SSIM. The reconstruction performance of the various methods differs across datasets, and most of them achieve the worst reconstruction effect on the BSDS100 dataset. This may be because the BSDS100 dataset is the largest of the three test sets. It contains a wide variety of high-resolution images, which require more complicated mapping during reconstruction. In Table 4, we find that the time to reconstruct a 256 × 256 image by the FMSGAN is only 0.0406 s, less than that of SCSNet, ISTA-Net++ and MR-CSGAN, and nearly 1/3 of that of MR-CSGAN. This is mainly because we apply concatenated 3 × 3 convolutions instead of large-scale convolutions in the LMSRB, which obviously reduces the number of operations. In SCSNet, the authors achieve better reconstruction quality through a multi-stage reconstruction strategy, but at the cost of high time complexity. It is necessary to design a more efficient network structure. A GAN itself is prone to the problems of non-convergence and model collapse.
In the design of the model, we try to keep the parameters of the discriminator and the generator in the same order of magnitude and ensure that the generator has slightly more parameters than the discriminator, which gives full play to the discriminator's ability without affecting the reconstruction ability of the generator. In our experiments, the number of parameters of the generator is no more than twice that of the discriminator. Further, we assign different optimizers and learning-rate update strategies to the generator and discriminator, respectively, so that our model can avoid falling into mode collapse. For model convergence, we design the cost function based on pixel loss, adversarial loss and perceptual loss. Pixel loss helps the model converge quickly, so we give it a large weight. Adversarial loss and perceptual loss are treated as auxiliary parts of the cost function and are assigned small weights. Owing to this design of the cost function, the model can be trained stably. In the future, scholars can pay more attention to video compressed sensing. As an ordered group of images, video has more redundant information available in the temporal and spatial domains. Making full use of this redundant information will achieve higher-quality data compression, which is of significance. Conclusions In this paper, we present a generative adversarial network-based image compressive sensing model. Specifically, a multi-scale sampling structure is applied to capture multi-level information to improve reconstruction. An LMSRB structure is applied for deep reconstruction. With the application of multiple 3 × 3 convolutions, multi-scale information of features is better acquired and the number of operations is evidently decreased, which is helpful for capturing detail and recovering images quickly. At the same time, perceptual loss is introduced to enhance the visual quality of the recovered image. Experimental results show that our FMSGAN achieves better reconstruction quality and a fast recovery speed compared with some state-of-the-art methods on three datasets. Despite the superiority of the FMSGAN, further improvement can still be achieved in the reconstruction of DICS. With further in-depth research on deep learning, some novel networks with brilliant performance can be derived, which are capable of powerful information capture and feature extraction. Applying these structures, DICS will demonstrate more exceptional performance.
2022-06-02T15:25:00.635Z
2022-05-31T00:00:00.000
{ "year": 2022, "sha1": "c2d9097bb9b03d00f9b9bf792f15418a39efaaf4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/24/6/775/pdf?version=1653963498", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "68a9ad589a472d68967b57123da24e367107b60e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
14627913
pes2o/s2orc
v3-fos-license
Role of Caffeic Acid on Collagen Production in Nasal Polyp-Derived Fibroblasts Objectives Caffeic acid is known to have antioxidant, anti-inflammatory, immunomodulatory, and tissue-reparative effects. The purposes of this study were to determine the effect of caffeic acid on transforming growth factor (TGF) β1-induced myofibroblast differentiation and collagen production, and to determine whether caffeic acid exerts an antioxidant effect in nasal polyp-derived fibroblasts (NPDFs). Methods NPDFs were pretreated with caffeic acid (1-10 µM) for 2 hours and stimulated with TGF-β1 (5 ng/mL) for 24 hours. The expression of α-smooth muscle actin (SMA), collagen types I and III, and Nox4 mRNA was determined by reverse transcription-polymerase chain reaction, and the expression of α-SMA protein was determined by immunofluorescence microscopy. The amount of total soluble collagen production was analyzed by the Sircol collagen dye-binding assay. The reactive oxygen species (ROS) generated by NPDFs were determined using 2',7'-dichlorofluorescein diacetate. siNox4 was used to determine the effect of Nox4. Results The expression of α-SMA and production of collagen were significantly increased following TGF-β1 treatment. In contrast, the level of expression of α-SMA and the level of production of collagen were decreased by pretreatment with caffeic acid. The activation of Nox4 and the subsequent production of ROS were also reduced by pretreatment with caffeic acid. The expression of α-SMA was prevented by inhibition of ROS generation with siNox4. Conclusion Caffeic acid may inhibit TGF-β1-induced differentiation of fibroblasts into myofibroblasts and collagen production by regulating ROS. INTRODUCTION The pathophysiology of nasal polyp formation is poorly understood. Previous studies have suggested that the proliferation of fibroblasts and their differentiation into myofibroblasts have a role in the formation of nasal polyps [1,2]. The myofibroblasts produce extracellular matrix (ECM), such as collagen or fibronectin [3]. Transforming growth factor (TGF) β is a cytokine that stimulates the proliferation of fibroblasts and the differentiation of fibroblasts into myofibroblasts [4]. Evidence has shown increased expression of TGF-β1 in nasal polyps compared with normal mucosa and an essential function of TGF-β1 in the growth of nasal polyps [5,6]. A physiologic level of reactive oxygen species (ROS) is crucial for the proper regulation of cell functions, such as intracellular signaling, transcription activation, cell proliferation, inflammation, and apoptosis [7]. ROS have been implicated in the pathogenesis of a large number of diseases, including bronchial asthma [8]. ROS are not only generated as by-products of aerobic metabolism but are also produced by specialized enzymes, such as NADPH oxidases (Noxs) [9]. It has been reported that caffeic acid is a superior antioxidant compared with p-coumaric and ferulic acids in inhibiting low-density lipoprotein oxidation, as well as in quenching radicals and singlet oxygen [10-12]. However, the effects of caffeic acid on nasal polyp-derived fibroblasts (NPDFs) have not been studied. In this study, the effects of caffeic acid on TGF-β1-induced myofibroblast differentiation and collagen production were determined. We also investigated the underlying molecular mechanisms.
Induction of fibroblasts and cell culture To induce fibroblasts from nasal polyps, 6 patients (3 females and 3 males; 29.9±8.2 years of age) were recruited from the Otorhinolaryngology Department of Korea University Guro Hospital. The patients were non-smokers and had not been treated with anti-allergic agents for at least 2 months. Nasal polyps were obtained during surgical procedures. Fibroblasts were isolated from surgical tissues by enzymatic digestion with collagenase (500 U/mL; Sigma), hyaluronidase (30 U/mL; Sigma), and DNAse (10 U/mL; Sigma). Briefly, following a 2-hour incubation in 5% CO2 at 37°C in a culture plate, the cells were collected by centrifugation, washed twice, and resuspended in Dulbecco's modified Eagle's medium (DMEM; Invitrogen, Grand Island, NY, USA) containing 10% (v/v) heat-inactivated fetal bovine serum (FBS), glutamate, 100 μg/mL penicillin, and 100 μg/mL streptomycin. The cells were allowed to attach for 4 days. Non-adherent cells were removed by changing the medium. Fibroblasts were detached with a trypsin-EDTA solution. After the cells were washed, they were resuspended in medium and used for subsequent experiments. The fibroblast purity was >90%, and these cells were used as NPDFs. Cells were used after passage five. This study was approved by the Institutional Review Board of Korea University College of Medicine (KUGGR-2010-013). Collagen measurements Total soluble collagen in cell culture supernatants was quantified using the Sircol collagen assay (Biocolor, Belfast, UK). For these experiments, confluent cells in 25 cm2 culture dishes were incubated for 24 hours with 1 mL DMEM-5% FBS. One milliliter of Sirius red dye, an anionic dye that reacts specifically with the basic side chain groups of collagens under assay conditions, was added to 400 μL of supernatant and incubated with gentle rotation for 30 minutes at room temperature. After centrifugation at 12,000 g for 10 minutes, the collagen-bound dye was redissolved with 1 mL of 0.5 M NaOH, and the absorbance at 540 nm was measured using an ELISA plate reader (MRX; Dynex, Chantilly, VA, USA). The absorbance was directly proportional to the amount of newly formed collagen in the cell culture supernatant. Assay of intracellular ROS The production of intracellular ROS was determined by fluorescence microscopy using a fluorescent probe, 2',7'-dichlorofluorescein diacetate (DCFH-DA; Molecular Probes Inc., OR, USA). DCFH-DA diffuses readily through the cell membrane and is enzymatically hydrolyzed by intracellular esterases to non-fluorescent DCFH, which is then rapidly oxidized to highly fluorescent DCF in the presence of ROS. The stock DCFH-DA (2 mM) was prepared in absolute ethanol and kept at -70°C in the dark. Cells collected from a 12-well plate using 0.5% trypsin/EDTA were washed twice with PBS prior to the analysis. Cells were incubated with 20 µM DCFH-DA for 1 hour, then with the maximum concentration of CAPE for another 2 hours prior to treatment with TGF-β1 for 24 hours. After washing 3 times with PBS, cells were examined with a fluorescence microscope (excitation, 488 nm; emission, 520 nm; Olympus IX71, Tokyo, Japan). Transfection with small interfering RNA For Nox4 RNA interference, a siRNA specific for human NOX4 (siNox4 sense, 5'-ACU GAG GUA CAG CUG GAU GUU-3'; anti-sense, 3'-CAU CCA GCU GUA CCU CAG UUU) was used. As a control, the universal negative control siRNA (siCont; Invitrogen) was used.
Individual siRNAs (100 nmol/L), Lipofectamine, and Opti-MEM medium were mixed and incubated at room temperature for 30 minutes. The siRNA-Lipofectamine complexes were added to the cells for 48 hours, after which the complexes were removed and the cells were washed and placed in serum-free medium for 24 hours. Subsequently, cells were treated with or without TGF-β1 for the indicated time and harvested for RNA extraction. Statistical analysis Data were described as the mean±SE. Statistical analysis was performed using the Student t-test or analysis of variance (ANOVA), as appropriate. P<0.05 was considered statistically significant. To determine the inhibitory effect of CAPE on TGF-β1-induced myofibroblast differentiation (α-SMA protein), NPDFs were pretreated with CAPE and stimulated with TGF-β1 for 48 hours. The addition of CAPE decreased the number of α-SMA protein-positive cells, and the decrease was significant at 5 μM (Fig. 2C). The effect of CAPE on collagen production in TGF-β1-induced NPDFs To determine the inhibitory effect of CAPE on TGF-β1-induced collagen types I and III mRNA expression, cells were treated with CAPE for 2 hours before stimulation with TGF-β1 for 24 hours. The expression of collagen types I and III mRNA was increased by stimulation with TGF-β1 and was notably decreased by pretreatment with 5 μM CAPE (Fig. 3A, B). To determine the inhibitory effect of CAPE on TGF-β1-induced soluble collagen production, cells were pretreated with CAPE and stimulated with TGF-β1 for 48 hours. The addition of CAPE also reduced TGF-β1-induced soluble collagen production. The effect of CAPE on intracellular ROS generation in TGF-β1-induced NPDFs The antioxidant effect of CAPE on TGF-β1-induced intracellular ROS production was also confirmed by fluorescence microscopy. Pretreatment with CAPE inhibited TGF-β1-induced intracellular ROS production compared with TGF-β1 treatment alone (Fig. 4). Effect of CAPE on Nox4 expression in TGF-β1-induced NPDFs Because TGF-β1 increased ROS production in NPDFs, we hypothesized that ROS production is mediated by increased expression of Noxs. Previously, we showed that up-regulation of Nox4 may be important for TGF-β1 effects on nasal fibroblasts (data not shown). To determine the inhibitory effect of CAPE on TGF-β1-induced Nox4 mRNA expression, cells were pretreated with CAPE and stimulated with TGF-β1 for 12 hours. The expression of Nox4 mRNA was considerably decreased by pretreatment with 5 μM CAPE (Fig. 5A, B). Nox4 is required for α-SMA expression in TGF-β1-induced NPDFs Additional studies were performed to determine whether or not Nox4 is necessary for α-SMA expression in TGF-β1-induced NPDFs. We down-regulated the expression of Nox4 mRNA using transfection with small interfering RNA directed against Nox4 (siNox4). We already confirmed that siNox4 decreases the level of Nox4 mRNA in our previous study [13]. In the present study, the TGF-β1-induced expression of α-SMA was prevented by inhibiting ROS generation with siNox4.
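As a minimal illustration of the group comparisons described in the statistical-analysis section (two-sample t-test, one-way ANOVA, significance at p<0.05), the following Python sketch can be used; the values are placeholders, not measurements from this study.

```python
from scipy import stats

control  = [1.00, 0.95, 1.05, 0.98]     # e.g., relative alpha-SMA expression (placeholder)
tgf_b1   = [1.80, 1.95, 1.70, 1.88]
tgf_cape = [1.20, 1.10, 1.25, 1.15]

t, p = stats.ttest_ind(control, tgf_b1)          # Student t-test between two groups
print(f"t-test: t = {t:.2f}, p = {p:.4f}, significant = {p < 0.05}")

f, p = stats.f_oneway(control, tgf_b1, tgf_cape) # one-way ANOVA across three groups
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}, significant = {p < 0.05}")
```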
DISCUSSION In the present study, we showed that CAPE inhibits the expression of α-SMA, an indicator of myofibroblast differentiation, and collagen production in NPDFs, and that CAPE inhibits ROS production and Nox4 expression. We confirmed that, on transfection with siNox4, TGF-β1-induced expression of Nox4 mRNA was reduced, implying that the inhibition of the phenotypic change of NPDFs by CAPE was mediated by its antioxidant effect. CAPE is an active phenolic compound that is present in propolis, which is the generic name of a resinous product derived from conifer bark and carried by honeybees to their hives [14]. CAPE has been shown to have several interesting biological properties, including antioxidant, anti-inflammatory, anti-viral, immunostimulatory, anti-angiogenic, anti-invasive, anti-metastatic, and carcinostatic properties [15]. Although CAPE has been shown to have antioxidant, anti-inflammatory, immunomodulatory, and anti-mutagenic effects, its effects on parameters related to nasal polyps have not been investigated. Although the etiology of nasal polyps and the pathophysiologic mechanisms leading to the formation of nasal polyps are poorly understood, a number of studies have suggested that the differentiation of fibroblasts into myofibroblasts and ECM accumulation are key processes [1]. TGF-β1, which is highly expressed in nasal polyp tissues, is thought to be involved in the structural modifications that characterize nasal polyp formation [6]. In our previous study, we confirmed that TGF-β1 significantly increased the expression of α-SMA, as well as the production of collagen types I and III in NPDFs, and that Nox4 and ROS play an important role in the phenotypic change and ECM formation of TGF-β1-induced NPDFs [16]. There are several studies regarding the association between oxidative stress and nasal polyps. Dagli et al. [17] investigated the role of free radicals and antioxidants in nasal polyps and suggested that the levels of antioxidants were decreased and the levels of oxidants were increased. Cheng et al. [18] demonstrated that the mean level of tissue chemiluminescence in nasal polyps was significantly higher than in control specimens, and that the expression of superoxide dismutase 1 and 3 was higher in nasal polyp tissues. However, these studies focused on mucosal damage by ROS, while we focused on the role of ROS as signal mediators in TGF-β1-induced NPDFs. In summary, we have identified and demonstrated evidence that CAPE has inhibitory effects on TGF-β1-induced α-SMA expression in NPDFs. CAPE also inhibited ROS production and Nox4 expression at the same concentration. These findings support the notion that CAPE has antioxidant effects that are associated with the modulation of the myofibroblast differentiation which occurs in the pathogenesis of nasal polyps. Further studies are necessary to determine whether CAPE shows inhibitory effects on the formation of nasal polyps in vivo. In conclusion, CAPE inhibits myofibroblast differentiation of TGF-β1-activated NPDFs and collagen production; the effects of CAPE on Nox4 and ROS are involved in this process. These results show that CAPE may play an inhibitory role in the development of nasal polyps.
2016-05-12T22:15:10.714Z
2014-11-14T00:00:00.000
{ "year": 2014, "sha1": "3608e1dc34148aa97f04e3f5b5355f7065b9d906", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3342/ceo.2014.7.4.295", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3608e1dc34148aa97f04e3f5b5355f7065b9d906", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
241415379
pes2o/s2orc
v3-fos-license
The Two-dimensional and Three-dimensional T2 Weighted Imaging-based Radiomic Signatures for the Preoperative Discrimination of Ovarian Borderline Tumors and Epithelial Cancer. Background: Accurate discrimination between ovarian borderline tumors (BOTs) and malignancies with imaging plays an important role in management. Methods: A total of 95 patients with pathologically proven ovarian BOTs and 101 patients with malignancies were retrospectively included in this study. We evaluated the diagnostic performance of the signatures derived from T2WI-based radiomics in their ability to differentiate between BOTs and malignancies and compared the performance differences between the 2D and 3D segmentation models. The least absolute shrinkage and selection operator (LASSO) method was used for radiomics feature selection and machine learning processing. Results: The radiomics score between BOTs and malignancies in the four types of selected T2WI-based radiomics models differed significantly (p < 0.0001). For the classification between BOTs and malignant masses, the 2D and 3D coronal T2WI-based radiomics models yielded accuracy values of 0.79 and 0.83 in the testing group, respectively; the 2D and 3D sagittal fat-suppressed (fs) T2WI-based radiomics models yielded accuracies of 0.78 and 0.99, respectively. Conclusion: Our results suggest that T2WI-based radiomic features were highly correlated with ovarian tumor subtype classification. 3D sagittal MRI radiomics features may help clinicians differentiate ovarian BOTs from malignancies with high accuracy (ACC). Highlights The T2WI radiomics could achieve a higher accuracy in discriminating ovarian tumors. The 3D T2WI-based radiomics model showed better performance than the 2D model did. The 3D sagittal fat-suppressed T2WI radiomics showed the best performance. Background Ovarian borderline tumors (BOTs) account for approximately 10-15% of epithelial ovarian tumors, with an annual prevalence of 1.8-4.8/100,000 women worldwide (1). Compared with other ovarian malignant tumors, ovarian BOTs often occur in young patients with early-stage disease, and patients have a good prognosis with fertility-sparing conservative treatments (2,3). Therefore, preoperative identification of patients with ovarian lesions suspected of being BOTs may be helpful in their management. Magnetic resonance imaging (MRI) has many advantages in determining the etiology of ovarian masses and is widely used in clinical centers (4). MRI has high diagnostic performance in differentiating between ovarian benign tumors and malignant tumors (5-9). Considering the ability to discriminate BOTs from malignant epithelial ovarian tumors, conventional MRI varies, with a sensitivity of 58% to 100% and a specificity of 61% to 100%, respectively (7, 10-13). Functional MRI scans (dynamic contrast-enhanced MRI, diffusion-weighted imaging (DWI), MR spectroscopy (MRS), etc.) showed a higher ability to distinguish a BOT from ovarian epithelial cancer than conventional MRI, such as T1-weighted imaging (T1WI) and T2WI, as shown in recently published studies (11,12). However, functional MRI acquisition is not routinely used in clinical scenarios, and the scanning parameters are not presently standardized universally and may change across MRI machines or institutions. Gross morphological imaging features appreciated on T1WI and T2WI therefore still have better applicability in the differentiation of BOTs from other malignancies.
As a research hotspot, radiomics is defined as a new 'data-driven' approach for extracting large sets of quantitative signatures from radiological images and shows potential applications in medicine (14,15). MR-based radiomic signatures have been shown to help categorize tumor subtypes and assess tumor presence, spread, recurrence or response to treatment in female cancer patients (16-21). To date, there have been limited MRI radiomics studies concerning ovarian BOT and epithelial cancer categorization. The purpose of this research was two-fold: first, we planned to evaluate the diagnostic performance of the MRI radiomics model in discriminating ovarian BOTs from malignancies; second, we sought to clarify whether 3D MR-based radiomic signatures (of the whole lesion) could show better discriminative performance than 2D radiomic signatures (of the maximum lesion slice) in the same study sample. Patients Our institutional review board (Gynecological and Obstetric Hospital, School of Medicine, Fudan University, Shanghai, China) approved this retrospective study, and the requirement for informed consent was waived for all participants. From January 2014 to December 2017, 438 consecutive patients with clinically suspected gynecological diseases were retrospectively retrieved from our institutional picture archiving and communication system (PACS, GE). The inclusion criteria were as follows: 1) patients with no previous pelvic surgery; 2) patients with no previous gynecological disease history; and 3) patients who had MRI examinations performed at our institution before pelvic or laparoscopic surgery. A total of 91 patients (average age, 39.8 ± 14.9 years) with pathologically proven ovarian borderline tumors and 105 patients with ovarian epithelial malignancies (average age, 51.9 ± 12.1 years) were selected as the study sample for signature selection (Table 1). The information on pathological type, immunohistological staining results, and laboratory tests was collected through a hospital information system. MR image acquisition, lesion segmentation and radiomics feature selection MRI was performed using a 1.5-T MR system (Magnetom Avanto, Siemens) with a phased-array coil. The routine MRI protocols used to assess pelvic masses included axial turbo spin-echo (TSE) T1WI, coronal TSE T2WI, and axial/sagittal TSE fat-suppressed T2WI (fs-T2WI). The detailed MRI acquisition parameters are listed in Supplementary Table 1. All lesion segmentation was performed by an experienced radiologist (H.Z.). Lesions were manually outlined on MRI using ITK-SNAP software (ITK-SNAP, version 3.4.0, www.itksnap.org) (Figure 1). Two segmentation methods were used in this study: maximum-slice lesion segmentation (2D) and whole-lesion segmentation (3D) on both sagittal fs-T2W images and coronal T2W images. In the 2D segmentation model, the slice with the largest lesion diameter in each of the two protocols was chosen as the reference image, and the lesion on that slice was segmented. In the 3D segmentation model, the entire lesion in both protocols was outlined and segmented slice by slice. After the tumor segmentation process, MR-based radiomics signatures were extracted from the 2D/3D sagittal fs-T2W and 2D/3D coronal T2W images using Analysis Kit software (version 3.0.0, GE Healthcare) on a personal computer (Figure 1).
Image feature extraction and selection A total of 396 radiomics features from the volume of interest were extracted automatically using in-house software (Analysis Kit, version 3.0.0, GE Healthcare). Thereafter, the whole dataset was randomly divided into two parts: a training cohort and a testing cohort. The radiomics score (Rad-score)-based signatures were constructed with the LASSO method, which was used to select the most useful prognostic features in the training data set. A Rad-score was computed for each patient through a linear combination of the selected features weighted by their respective coefficients. These Rad-scores were first assessed in the training data set and then validated in the testing data set. Statistical analysis First, two-sample t-tests were performed to compare MR-based signature values between ovarian BOT and ovarian cancer. Next, the sensitivity (SEN), specificity (SPE), positive predictive value (PPV), and negative predictive value (NPV) were calculated when the performance of the two methods was evaluated for their ability to identify ovarian malignancies. Additionally, receiver operating characteristic (ROC) curve analysis was performed to evaluate the diagnostic value of the various MR-based signatures in discriminating BOTs from malignancies. A value of p < 0.05 was considered statistically significant. Clinical characteristics in both the training and testing data sets In this study, we included 91 ovarian borderline tumors and 105 ovarian malignancies (83 high-grade serous epithelial carcinomas, 7 mucinous carcinomas, 4 mixed carcinomas, 5 clear cell type carcinomas, 3 endometrioid carcinomas and 3 low-grade carcinomas, Table 1). There was no statistically significant difference found between the training and the validation data sets in either clinical characteristics or pathological subtypes (Table 2). Identification results based on MRI-radiomics signatures The radiomics signature was weighted with the regression coefficients used for signature construction, presented in the form of a histogram in Figure 2. A Rad-score system was calculated using the specialized formula after feature selection (Supplementary Table 2). Overall, there was a statistically significant difference observed in the average Rad-score between BOTs (Figure 3) and malignancies in each of the selected MR-based radiomics models (p < 0.0001, Table 3). Table 4 illustrates the final classification results of the training data set and the validation data set. The model was first determined on the training data set based on the area under the ROC curve (AUC). Then, we evaluated the model on the validation data set. The coronal MR-based radiomics segmentation model yielded an ACC of 78.9% to 82.8%, while the sagittal model yielded an ACC of 77.8% to 100%. The 3D sagittal MR-based radiomics model yielded an ACC and an AUC as high as 100% in differentiating between BOTs and malignancies in the validation data set (Table 4). Comparison of the performance results between the 2D and 3D radiomics models Considering the two acquisition protocols, both coronal and sagittal MR-based features showed competitive accuracy in discriminating BOTs from malignancies in either 2D or 3D segmentation mode (2D AUC: 0.82 versus 0.84 and 3D AUC: 0.79 versus 1.0, respectively). 3D sagittal fs-T2W images showed the best performance among the four methods in discriminating malignancies from BOTs, with an accuracy of 99% in the testing model.
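To make the Rad-score construction and the reported diagnostic metrics concrete, the following is a generic sketch of a LASSO-based pipeline of the kind described above. It is not the authors' code: the feature matrix X, the labels y, the train/test split proportion, and the Rad-score cut-off are all assumptions made for illustration.

```python
# Minimal sketch of a LASSO-derived Rad-score, assuming X is an (n_patients x n_features)
# radiomics feature matrix and y codes BOT (0) versus malignancy (1). Illustrative only.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

def lasso_rad_score(X, y, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=seed)
    scaler = StandardScaler().fit(X_tr)
    lasso = LassoCV(cv=5, random_state=seed).fit(scaler.transform(X_tr), y_tr)
    kept = np.flatnonzero(lasso.coef_)           # features retained with non-zero coefficients
    # Rad-score = intercept + sum(coefficient_i * standardized feature_i)
    rad_te = lasso.predict(scaler.transform(X_te))
    cutoff = np.median(lasso.predict(scaler.transform(X_tr)))  # assumed threshold choice
    pred = (rad_te >= cutoff).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()      # assumes both classes are predicted
    return {
        "selected_features": kept.size,
        "AUC": roc_auc_score(y_te, rad_te),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / len(y_te),
    }
```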
The ROC curve analysis with the four kinds of segmentation methods in the validation group is summarized in Figure 4. Discussion Ovarian BOT is a type of low-potential epithelial tumor with a relatively good prognosis after treatment. Sometimes, it is difficult to discriminate BOTs from ovarian malignancies solely on imaging information due to some overlapping imaging findings between the two (22). Our current results showed that the 3D MR-based radiomics signatures derived from sagittal fs-T2WI yielded an ACC of 100% in differentiating ovarian malignancies from BOTs and may help clinicians make a correct diagnosis before surgery. To the best of our knowledge, this is the first reported study focusing on the diagnostic performance of MR-based radiomics signatures in ovarian tumor classification with 2D and 3D segmentation methods. In the present study, the 3D signatures showed better performance than the 2D signatures did. This result can be easily appreciated because the 3D model utilized information from the whole lesion, more truly reflecting tumoral heterogeneity than the 2D model did. The current result is contrary to a previous CT radiomics study in which 2D radiomics features performed slightly better in non-small cell lung cancer prognostic estimation than 3D features did (23). The authors concluded that the reason might be related to the varying axial CT image resolutions in their study, in which the training and validation cohorts were selected from different institutions. Considering the two selected MRI protocols, the fs sagittal sequence performed better than the coronal sequence did with both 2D and 3D segmentation methods. Of note, the 3D sagittal MR radiomics model yielded ACCs of 100% and 99% in the training and testing groups, respectively. This finding is in accordance with our previous study, in which fs-T2WI was also superior to coronal T2WI in Type I and Type II ovarian cancer categorization (5). We believe that the sharp contrast between the lesion and the background on the fs MR sequence may play a role in the final determination. However, the true mechanism is unclear, and this result should also be validated in a future study with a large study sample. Several radiomics studies using CT images have been reported for ovarian mass classification and prognostic estimation (24)(25)(26)(27). In this study, we used the LASSO method to establish the radiomics features model during the radiomics signature selection step as well as during the machine learning process. The LASSO model is reportedly a suitable method for analyzing a small sample with high-dimensional features due to its advantage of avoiding overfitting. A similar method was also reported in two recently published studies with promising results (18,28). There remains a limited number of studies on MR-based radiomics in ovarian tumor classification and posttreatment response prediction. In one study with 22 patients with advanced ovarian cancer, the authors found that apparent diffusion coefficient (ADC) values derived from the ADC map between primary ovarian cancer and metastatic sites differed significantly and may be used as response markers (29). In the present study, we did not include DW images in the texture analysis. The lesion resolution on DWI, especially with large lesions, is relatively low, which sometimes makes lesions difficult to outline precisely in post-processing software.
Moreover, in our previous study, we did not find that the ADC map could contribute more useful signatures to the classification task than conventional MR images (T1W and T2W images) could (5). Compared with traditional MRI analysis in differentiating BOTs from malignancies, the radiomics signature results show better performance. In a traditional MRI reading session, the imaging signs always overlap with each other to some extent (for example, large size, solid components, and irregular and thick septa) and lead to an inaccurate diagnosis (27,(30)(31)(32). A recent study with proton MRS reported that the SEN and SPE were 91% and 100% for solid components, respectively; additionally, the SEN and SPE were 84% and 82% for cystic components, respectively (12). However, MRS scans are highly unit-dependent and time-consuming examinations and require operators with more experience than conventional methods do. From this point of view, radiomics signature analysis shows potential for clinical application owing to its simple segmentation step. The limitations of this study include the fact that we did not include contrast-enhanced (CE) MR images to establish the MRI radiomics model. The CE-MRI scan was not available for all included patients in the current study, and therefore, we did not select this protocol for analysis, to diminish selection bias. Furthermore, in the present study, we only used conventional T2WI to establish a radiomics diagnostic model, which differs from the clinical reading scenario (mostly including T1WI, T2WI and DWI). Further study is necessary to explore the difference between one acquisition sequence and multiple acquisition sequences as in the clinical setting. In addition, all segmentation procedures were manually outlined on the T2WI showing the lesion best; this remains an operator-dependent procedure, and interoperator variation in segmentation may be accentuated, especially with multiple sequence images. Finally, all MR images were acquired on a 1.5-T MRI scanner, and a comparison between 1.5-T and 3.0-T MRI machines should be validated in a large study in the future. Conclusions In summary, our results suggest that radiomics features extracted from T2W images were highly correlated with ovarian tumor subtype classification. 3D fs-sagittal MRI radiomics features may help clinicians differentiate ovarian BOTs from malignancies with high ACC.
Endoscopic management of esophageal cancer Esophageal cancer (EC) generally consists of squamous cell carcinoma (which arise from squamous epithelium) and adenocarcinoma (which arise from columnar epithelium). Due to the increased recognition of risk factors associated with EC and the development of screening programs, there has been an increase in the diagnosis of early EC. Early EC is amenable to curative therapy by endoscopy, which can be performed by either endoscopic resection or endoscopic ablation. Endoscopic resection consists of either endoscopic mucosal resection (preferred in cases of adenocarcinoma) or endoscopic submucosal dissection (preferred in cases of squamous cell carcinoma). Endoscopic ablation can be performed by either radiofrequency ablation, cryotherapy, argon plasma coagulation or photodynamic therapy, amongst others. Endoscopy can also assist in the management of complications post-esophageal surgery, such as anastomotic leaks and perforations. Finally, there is a growing role for endoscopy to manage end-of-life palliative symptoms, especially dysphagia. The growing use of esophageal stents, debulking therapy and dilation can assist in improving a patient’s quality of life. In this review, we examine the multiple roles of endoscopy in the management of patients with EC. INTRODUCTION Esophageal cancer (EC) is an overarching term generally used to describe two separate malignancies, esophageal squamous cell carcinoma (SCC) and esophageal adenocarcinoma. Esophageal SCCs arise in the squamous epithelium (generally in the mid-proximal esophagus but can occur throughout the esophagus) whereas esophageal adenocarcinomas arise in the columnar epithelium and are generally found in the distal esophagus. The epidemiology of EC is evolving. Although ECs only make up one percent of all new cancer cases in the United States, they make up 2.6% of all cancer deaths. The overall incidence of all types of EC has remained steady over the past two decades with an estimated 17000 new cases annually in the United States. The 5-year survival rate of EC varies according to how advanced the tumor is at diagnosis. Those with localized disease have a 5-year survival rate of 45.2%, while those with distant metastases have a 5-year survival rate of only 4.8% [1,2] . Due to the aggressive nature of the disease and the high mortality rate, it is imperative to identify patients early in the course of the disease. Currently, only 19% of cases are staged as localized disease at diagnosis. The benefit of localized disease is that it opens up a whole array of treatment options including the use of endoscopic therapy. The increasing recognition of risk factors associated with metaplasia and dysplasia has led to an increased interest in screening and surveillance programs. For example, the role of gender, obesity and gastroesophageal reflux disease in the development of Barrett's esophagus (BE) has allowed for the development of screening and surveillance guidelines, which has then lead to treatment guidelines for pre-cancerous and early cancerous lesions [3,4] . Although endoscopy was initially limited to the diagnosis of EC, recent advancements have allowed the modality to play a growing role in the management of the tumor. The development of advance camera technology has allowed better recognition of the disease, while simultaneously the introduction of novel endoscopic techniques and instruments has allowed endoscopists to treat pre-cancerous lesions and even early ECs. 
In this review, the current indications for endoscopy in the management of EC are reviewed. The pre-endoscopic management work-up, endoscopic options for curative therapy, the role of endoscopy in managing complications of surgery as well as how endoscopy can play an essential part in the palliative management of EC are described. PRE-ENDOSCOPIC MANAGEMENT INVESTIGATIONS The first step in performing endoscopic management in patients with EC is to recognize the setting where it is appropriate. Multiple studies have demonstrated improved outcomes and less complications in high-volume centers, and therefore consideration should be given to referring patients to centers with experience when endoscopic curative management is an option [5,6] . Certain guidelines recommend that endoscopic resection (ER) of early EC only be done in high-volume centers. Similarly, a multi-disciplinary approach, with involvement of surgery, oncology and pathology is critical as the diagnosis of dysplasia can be controversial with poor intra-and interobserver agreement [8] . A second opinion, ideally from a gastrointestinal pathologist, should be sought if there are doubts about the presence of dysplasia. Additionally, a multi-disciplinary approach will allow for more flexibility and options for the patients and assist in managing any potential complications. Once a patient has been referred for endoscopic management of pre-cancerous lesions or early EC, it is vital to establish the stage and characteristics of the tumor. This is done through a combination of endoscopic investigations, as well as potentially other modalities to ensure the tumor has not progressed. In terms of endoscopy, careful examination of the lesion is essential prior to any decision regarding endoscopic therapy. After washing the esophagus to remove any food, liquid or debris, careful examination of affected areas with white-light endoscopy should be performed. Recent studies have demonstrated that high-definition endoscopy is superior to standard definition in assessing mucosal changes in patients with BE ( Figure 1) [9] . In addition, although there has been an increase in the use of adjuncts to whitelight imaging, their evidence in the diagnosis of EC is still controversial with the exception of narrow-band imaging (NBI). NBI is a technique that allows increased highlighting of mucosa and the mucosal vasculature ( Figure 2). A meta-analysis on the use of NBI to identify high-grade dysplasia (HGD) in patients with BE demonstrated a pooled sensitivity of 0.96 (95% confidence interval: 0.93-0.99) and a pooled specificity of 0.94 (95% confidence interval: 0.84-1.0). The meta-analysis included eight studies with 446 patients and a total of 2194 lesions. Based on these studies, there has been an increasing use of NBI to identify high-risk lesions ] . Similar to NBI, there has been extensive investigation into the use of chromoendoscopy. Chromoendoscopy is the use of selective dyes to highlight specific features on the mucosa and potentially increase the contrast between normal mucosa and abnormal mucosa. The most commonly used dye in chromoendoscopy is methylene blue, which is thought to selectively stain intestinal metaplasia. A previous randomized control trial on the use of methylene blue as compared to random 4quadrant biopsies showed that although there was no increased detection of dysplasia, the use of methylene blue led to a smaller requirement for the number of biopsies [11] . 
On the other hand, a separate randomized control trial showed that methylene blue detected less dysplasia compared to random 4-quadrant biopsies. Finally, a systematic review and meta-analysis was performed in 2009 and included nine studies with a total of 450 patients. The study demonstrated no incremental yield in the use of chromoendoscopy as compared to standard 4-quadrant biopsies. Subsequently, current guidelines do not recommend the routine use of chromoendoscopy when assessing esophageal lesions for advanced or high-risk features. When inspecting a lesion with white-light endoscopy or NBI, there are certain features that should be carefully sought in the mucosa as they will likely change therapy. When examining BE, it is important to document landmarks including any potential hiatal hernia, the location of the gastroesophageal junction, the top of the gastric folds, the location of the squamo-columnar junction and the length of columnar mucosa, both circumferentially and the maximal longitudinal length. One commonly used classification for reporting BE is the Prague classification, which documents circumferential and maximal longitudinal length and has been found to have high validity and inter-observer agreement [14,15]. In addition, it is critical to document any nodularity found and its location, as it will likely require separate management from the remainder of the BE. Nodules are also suggestive of advanced lesions requiring therapy. In addition to nodules, other high-risk features that portend malignancy include the presence of ulceration or stricturing [16]. Careful examination should be performed from the 12 o'clock to 6 o'clock position (the right hemisphere), as this region has higher rates of EC in BE [17]. Although a careful examination of a lesion using white-light endoscopy is the gold standard, previous studies have looked into adjunctive methods to determine resectability. One potential option was the use of endoscopic ultrasound (EUS). EUS would allow the clinician to determine the depth of the lesion as well as any potential locoregional lymph nodes. Initially, the thought was that EUS could determine whether any invasive cancer was present and therefore assist in identifying the lesions for which endoscopic therapy should be avoided. Although initial results were promising, they have not been followed by similar outcomes in subsequent studies [18]. A systematic review and meta-analysis examining the role of EUS found that EUS only had a 65% concordance for T-staging when compared to surgical or endoscopic mucosal resection (EMR) based pathology [19]. A follow-up meta-analysis found better results but was limited due to the heterogeneity between studies. A more recent study examined the same utility of EUS in pre-malignant lesions and found poor correlation, with a sensitivity of 50% and a specificity of 93% [21,22]. Interestingly, previous studies have found that EUS-guided mini-probe based examinations have better sensitivity than radial echoendoscopes. Based on all of these studies, the use of EUS to determine resectability is limited and should not by itself guide the decision for endoscopic therapy. Nevertheless, EUS can play a helpful role in patients with early EC. Although EUS has difficulty staging cancers, it can be a useful tool in both identifying and sampling lymph nodes (Figure 3). EUS has generally been found to over-stage T2 malignancies and therefore caution should be taken before labelling a lesion as unresectable [24].
When it comes to lymph nodes, EUS was found to have fairly high sensitivity and specificity as compared to positive electron-transmission scans and has the added benefit of being able to sample nodes through a fine needle aspiration or biopsy [25,26] . In general, when approaching a patient for potential endoscopic management, it is important to ensure that care is provided in a center with expertise not only in endoscopy but also in surgery, pathology and radiology. The most important investigation is a careful examination during upper endoscopy both with white light endoscopy and NBI. Although adjunctive investigations have so far not yielded fruit, consideration can be given to performing EUS if there is concern for locoregional invasion. When it comes to the endoscopic management of EC, it can generally be divided into two categories, curative and palliative therapy. Curative therapy is generally reserved for early ECs limited to the mucosa with no lymph node involvement. In this section, we will review the common methods for endoscopic management, as well as upcoming frontiers. ER ER is the mainstay of endoscopic management of early ECs. ER can be performed in two ways, by EMR or by endoscopic submucosal dissection (ESD). ER can be performed for both adenocarcinomas and SCCs. In adenocarcinoma patients, the spectrum of disease where ER can be performed generally includes pre-malignant low-grade dysplasia in a patient with BE up to in some cases stage T1b adenocarcinoma (as per the TNM staging of tumors). For SCCs, ER can be performed in patients with early EC that is staged as T1 or intramucosal. EMR is generally performed by two distinct methods: the cap-assisted method and the ligation-assisted method. The cap-assisted method, also known as the "suck and cut" method involves suctioning the mucosa into a cap-fitted endoscope and then using a snare to cut the mucosa. The snare is pre-opened prior to suctioning and generally comes as part of a pre-assembled ensemble kit. In the ligation-assisted method, or multi-band ligator method, the upper endoscope is fitted with an apparatus similar to a variceal band ligator, and the mucosa is suctioned and has a band placed around it. Subsequently, a snare is passed, and the mucosa upheld by the band is resected (Figure 4). The evidence comparing the two methods of EMR showed that they are generally comparable. In a randomized control trial comparing the techniques, the ligationassisted method was shown to be quicker with smaller resection specimens compared to the cap-assisted method. However, both techniques had similar maximal thickness in their resection specimens and similar adverse event rates ( Figure 5) [27] . Previous studies that compared the two techniques in a non-randomized manner also demonstrated similar results [28,29] . The use of the lifting and then direct snare technique that is commonly used in the colon is discouraged in the esophagus due to an increased risk of perforation [30] . ESD is a more recent technique that involves careful dissection of the submucosa of the lesion in systematic fashion followed by en bloc removal of the desired tissue. Although the benefit is that it provides en bloc specimen and can give information about the margins of resection, the disadvantage is that it is time consuming and requires a deeper resection potentially leading to increase adverse events. 
Indeed, in a systematic review and meta-analysis comprising of 15 non-randomized trials comparing ESD to EMR, they found that although ESD had higher curative resection rates and lesser local recurrence rates, it was balanced by more time-consuming procedures and higher rates of bleeding and perforation ] . Another meta-analysis looking specifically at esophageal neoplasms found no difference between EMR and ESD in terms of margins, lymph node positivity or metachronous cancers but found less recurrence with ESD though balanced by an increased risk of strictures [32] . The one situation where ESD has had positive results (as compared to EMR) is in the setting of SCCs. A previous study examining resection techniques found less recurrence when en bloc resection was performed by ESD in patients with SCC as compared to patients that had piecemeal resection [33] . Based on this study, EMR is generally considered sufficient for small lesions (less than 10 mm) if the diagnosis is SCC, but patients with larger lesions should ideally undergo ESD. Overall, current guidelines recommend EMR for resection of BE or early esophageal adenocarcinomas unless the lesions are larger than 15 mm, are poorly lifting or are at risk for submucosal invasion in which case ESD should be performed. For patients with SCC, current guidelines generally recommend ESD though EMR is acceptable in smaller lesions [34] . ENDOSCOPIC ABLATION Ablative therapy is generally reserved for flat lesions or treatment of BE after ER. There are many ways to perform ablative therapy with the most common being radiofrequency ablation (RFA) (Figure 6). Other methods that are less commonly used include photodynamic therapy (PDT) and cryoablation. The main purpose of ablative therapy is to destroy the remaining residual malignant or pre-malignant tissue to prevent recurrence. RFA is the application of thermal energy that is generated by radiofrequency waves to destroy tissue. It involves contact ablation and can be done in localized areas or in a circumferential manner. The seminal study examining the effects of RFA was published in 2009. It was a multi-center randomized control trial that compared RFA to sham therapy in patients with dysplastic BE. The primary outcome (complete eradication) was followed until 12 mo post-therapy. In the RFA group, when using intention-to-treat analysis, 90.5% of patients had complete eradication whereas in the sham group only 22.7% had eradication. The main adverse event related to RFA was the development of chest pain post-treatment [35] . Similar results have been shown in the other multi-center studies including European and Asian populations [36,37] . The role of endoscopic therapy in patients with low-grade dysplasia has been controversial, and there has been debate on whether to pursue endoscopic management or only perform careful observation. A previous study examining patients with BE with only low-grade dysplasia found a decrease in the progression of the dysplasia and the development of cancer with the use of RFA [38] . Finally, there have been studies on whether RFA should be applied to patients with BE but no evidence of dysplasia. A study looking at the cost-effectiveness of RFA therapy found that treatment of patients with BE without dysplasia did not provide cost-effective therapy [39] . Current guidelines generally recommend RFA in patients with dysplasia with nonnodular lesions or intra-mucosal cancer. RFA should also be performed to treat residual BE in patients who have undergone ER. 
Additionally, although RFA has become well-established in the management of patients with BE or adenocarcinoma, its role in the management of SCC is still developing. Recent studies have showed promise in early SCC with high complete eradication rates and low recurrence rates [40,41] . Other types of ablative therapy include argon plasma coagulation (APC), cryoablation and PDT. APC is widely available, generally due to its use in multiple conditions and diseases and has been widely investigated in the management of BE. In one study examining the role of APC in patients with non-dysplastic BE, complete eradication was successful in 77% of the patients (37/48). The mean number of sessions required was 2.8 (range 1-5) though 9.8% (5/51) had major complications including perforation, hemorrhage and stricture formation [42] . Nevertheless, other studies have showed similar positive results with APC [43,44] . Cryoablation of the esophagus has also been studied in the management of premalignant and malignant conditions of the esophagus. The most widely used method is the application of liquid nitrogen therapy. Previous studies have shown high eradication rates in patients with intestinal metaplasia and HGD with minimal adverse events [45] . There have also been long-term retrospective studies to determine the sustained ability of cryotherapy. A 5-year follow-up of patients who received cryotherapy revealed complete eradication rates of 93% in HGD and 75% in intestinal metaplasia. The rate of progression to HGD or adenocarcinoma was 1.4% per patientyear in those treated with cryotherapy [46] . Cryotherapy has also been studied as rescue or salvage therapy in patients who have had recurrence after initial RFA therapy. The complete eradication of dysplasia rate was 75% in those subsequently treated with cryotherapy, including two patients who initially had intramucosal adenocarcinoma and were both successfully treated [47] . PDT is an ablative process in which a photosensitizer drug is activated by the use of laser light, which leads to mucosal destruction. PDT has evidence in the management of both SCC and esophageal adenocarcinomas. Treatment of either cancer staged as either T1 or T2 showed a complete response rate of 87% with the majority of the complications being either cutaneous photosensitization or esophageal strictures [48] . Long-term follow-up has shown sustained response and low rates of recurrence as well [49,50] . Comparisons between PDT and APC in the eradication of both BE and dysplasia have showed similar effectiveness though higher costs associated with PDT [51,52] . A study comparing RFA to PDT in patients with BE with dysplasia found that RFA had higher complete response rates and was significantly less costly. Though caution must be taken in interpreting these results as the study was nonrandomized with major differences in the baseline characteristics of the two groups [53] . In summary, there are many methods that have evolved to treat flat mucosal lesions with pre-malignant or malignant findings. RFA is generally the most widespread method with increasing evidence of its utility backed by a strong safety record. The development of circumferential balloons as well as through-the-scope segmental pads has made it more user-friendly. In patients who have failed RFA after multiple attempts, consideration should be given to alternative modalities including APC, cryoablation and potentially PDT based on local expertise. 
ENDOSCOPIC MANAGEMENT OF POST-OPERATIVE COMPLICATIONS Although an increasing number of patients are being diagnosed with early EC that is amenable to curative resection by endoscopy, a large proportion still progress to surgery. Depending on the features of the tumor and its aggressiveness, the algorithm of neo-adjuvant therapy followed by surgery is generally followed. Nevertheless, endoscopy can play a central role in patients who develop post-operative complications after surgery for EC. The most common complication is the development of a post-operative leak generally at the anastomosis (Figure 7). The incidence of post-operative complications can be as high as 22.9% of post-esophageal resection cases [54] . The rates of esophageal leaks have been shown to be as high as 7.9% of all esophageal surgeries [55] . Prompt recognition and management of esophageal leaks is imperative as the mortality rate associated with leaks can be as high as 35% [56] . Esophageal stent placement is an alternative to a re-operation for an anastomotic leak. Most commonly, a self-expanding metal stent (SEMS) is placed to overlap the site of the leak and allow it to heal. Although SEMS generally come in varying sizes, consideration should be given to place the largest tolerable diameter to prevent migration of the stent as there likely is no narrowing to hold the stent in place. In our practice, we generally use SEMS with a diameter of 23 mm to treat esophageal anastomotic leaks. The securing of the esophageal stent can be done by a variety of methods, including placing a hemostatic clip between the stent and the mucosa or possibly using an endoscopic suturing device to secure the stent in place. The evidence for the role of esophageal stents in the post-operative setting is variable with studies ranging from a technical success (ability to place the stent) rate between 80% to 100% to a clinical success rate (resolution of the leak and removal of the stent) that can be as low as 45% [57,58] . The most common complications post-stent placement is pain, stent migration and bleeding [59] . Other methods can be considered for anastomotic leaks including endoscopic clip placement to close the defect. The development of over-the-scope clips have allowed larger defects to be closed endoscopically. Multiple trials on the use of endoscopic clip placement have demonstrated high rates of clinical success and closure. A recent large study examined the role of over-the-scope clips in closure of luminal defects. A total of 188 patients were included of which 108 had fistulas, 48 had perforations and 32 had leaks. Successful closure occurred in 90% of patients with perforations, 73% with leaks but only 42.9% of patients with fistulas [60] . PALLIATIVE ENDOSCOPIC MANAGEMENT Once a patient has advanced disease not amenable to curative therapy, the shift of care turns towards palliative management. The role of endoscopy in palliative care is generally the improvement of symptoms especially dysphagia. As patients focus more on end of life care, the need to ensure the ability to take oral contents becomes a matter of quality of life. The main components of endoscopic management in palliative care are dilation, debulking and esophageal stent placement. In regard to dilation, the reducing caliber of the esophagus secondary to tumor is the main reason for dysphagia and intermittent periodic dilations are an option to treat the disease (Figure 8). 
Unfortunately, dilation alone rarely provides long-lasting efficacy, and this is compounded by high rates of complications especially perforations [61] . Endoscopic debulking therapy can be achieved by the use of laser therapy, PDT or chemical therapy. Chemical therapy, including the use of absolute alcohol, generally only provides transient relief and requires multiple ongoing sessions ] . PDT has generally been found to be better than laser therapy as shown in randomized comparison trials that have showed similar efficacy between laser therapy (e.g., Nd : YAG) and PDT but less perforations associated with PDT [63] . Finally, the mainstay of esophageal palliation is the placement of esophageal stents. The most common form of esophageal stents are SEMS, and they can come in covered, partially covered and uncovered forms (Figure 9). The evidence for the role of esophageal stents is controversial. Although they have been shown to have durable effectiveness towards dysphagia and lower rates of perforation as compared to dilation alone, they are limited due to patient intolerance of chest pain as well as the risk of stent migration [64] . CONCLUSION As the epidemiology and presentation of EC evolves, so does the role of endoscopy in its care. No longer relegated to diagnosis only, endoscopy can provide curative therapy in early EC as well as provide therapy for pre-malignant changes. It can also be used to manage complications related to the management of EC specifically postoperative complications. Finally, there is a growing role for endoscopy in the palliative management of EC with an increasing use of debulking therapy as well as the ongoing relief of dysphagia with esophageal stent placements.
Effects of Physical Self-Concept, Emotional Isolation, and Family Functioning on Attitudes towards Physical Education in Adolescents: Structural Equation Analysis (1) Background: The present research seeks to define and contrast an explanatory model of physical self-concept, emotional isolation, attitude towards physical education, and family functioning, and analyse the existing associations between these variables. (2) Methods: The sample was made up of 2388 adolescents (43.39% male and 56.61% female), with ages of 11–17 years (M = 13.85; SD = 1.26) from Spain. Self-concept (AF-5), Isolation (UCLA), Attitude towards Physical Education (CAEF), and Family Functioning (APGAR) were analyzed. (3) Results: Good fit was obtained for all evaluation indices included in the structural equation model, which was significantly adjusted (χ2 = 233,023; DF = 14; p < 0.001; comparative fit index (CFI) = 0.913; normalized fit index (NFI) = 0.917; incremental fit index (IFI) = 0.906; root mean square error of approximation (RMSEA) = 0.072). (4) Conclusions: Attitudes towards physical activity were found to be positive when isolation levels were low and where adequate self-concept existed, specifically in students reporting high family functioning. Introduction Lack of physical activity (PA) in the 21st century, both in children and in adults, has become a topic of interest worldwide in developed countries. The increase in sedentary habits as a consequence of technological advances has provoked a rise in physical inactivity within the population. This has negative consequences for physical health and mental wellbeing [1,2]. This situation is reflected in the growth of both physical diseases such as obesity, diabetes, and cardiovascular diseases (with inactivity being considered the fourth leading cause of death in the world [3]), and mental illnesses [4,5], in addition to anti-social behaviors such as school bullying and violence [6]. PA during infancy and adolescence is associated with physical health, and psychological and social benefits over both the medium and long term [7]. Despite this, children do not currently meet recommendations laid out by the World Health Organisation for five to 17-year olds, of at least 60 min of moderate to vigorous PA a day [3]. This should be considered alongside evidence that more young people abandon sport when they begin the secondary school stage [8][9][10]. In order to tackle this problem, it is necessary to initiate prevention processes during the adolescent phase, given that the majority of risk factors related with sedentary behaviors start at this age [11]. It is during adolescence where the processes that lead to the construction of personality are consolidated. This is, therefore, Design and Participants The present research is descriptive and cross-sectional in nature. A total of 2388 adolescents participated who reported being aged between 11 and 17 years (M = 13.85 years; SD = 1.268). The sample included 1036 (43.39%) males and 1352 (56.61%) females. All participants were enrolled on the third year of primary school, or the 1st or 2nd year of compulsory secondary education (CSE) in Andalusia. Sample selection took place through a convenience sampling strategy, attending to the criteria of being enrolled on the third year of primary education, or the 1st or 2nd year of CSE. All participants had informed consent from their parents or legal guardians and did not suffer from any type of pathology that impeded their ability to participate in the research. 
These formed the inclusion and exclusion criteria. The sample was obtained from eight Spanish cities, with participation being requested from all centers who voluntarily agreed to participate. It is necessary to indicate that 281 questionnaires were excluded after it was detected that they had been incorrectly completed or had missing data. Variables and Instruments Ad hoc questionnaire: For the selection of descriptive variables, various aspects were considered which could establish differences at some stage of the research process. These included sex, school year, population, engagement in PA outside of school hours, and the place in which individuals engage in PA. Self-concept Questionnaire: Data were collected through the original questionnaire "Autoconcepto Forma-5 [Self-concept Form] (AF-5)" of García and Musitu [19]. It measures the dimensions of Academic Self-concept (AA), Social Self-concept (AS), Emotional Self-concept (AE), Family Self-concept (AFM) and Physical Self-concept (AF). This test includes 30 questions which are rated along a five-point Likert scale, where 1 is never and 5 is always. In the study conducted by García and Musitu [19], a reliability of α = 0.810 was determined. This value is almost identical to that detected in the present work (Cronbach alpha α = 0.833). The values produced for each dimension (AA: α = 0.773; AS: α = 0.702; AE: α = 0.697; AFM: α = 0.778; AF: α = 0.721) and in all of the groups were satisfactory, in line with the studies by Estévez, Martínez, and Musitu [33], and Cava et al. [34]. Loneliness Scale (UCLA): This scale is based on the original created by Russell, Peplau, and Cutrona [35], and the adapted version of Russell [36]. The adaptation to Spanish used here corresponds to that described by Expósito and Moya [37]. It contains 20 items and is intended for students aged 11 years and older. The original factor structure is formed by a factor that reports a general index of the perception of loneliness, divided into the following dimensions: general loneliness; emotional loneliness; and subjective evaluation of the social network. The scale presents reliability coefficients that range between 0.74 and 0.94 depending on the population within which the questionnaire is administered [36,38], and shows adequate test-retest reliability [39]. Excellent psychometric properties have been observed in studies carried out with Spanish adolescents [40][41][42][43]. In the present research, the value obtained for the Cronbach alpha was 0.89. The coefficients of internal consistency for the bi-factor structure were 0.84 and 0.83, respectively, and 0.88 for the complete scale. Questionnaire on attitudes towards physical education (CAEF): The original questionnaire of Moreno, Rodríguez, and Gutiérrez [44] was used. It comprises 56 items rated along a four-point Likert-type scale, where 1 = disagree and 4 = totally agree. The instrument is composed of seven different dimensions: rating of the subject and of the PE teacher, PE difficulty, PE usefulness, empathy with the teacher and the subject, agreement with subject management, preference for PE and sport, and PE as sport. A consistency value of α = 0.75 was obtained for this instrument. This value is acceptable and slightly higher than the value obtained by Moreno et al. [44] in their original study (α = 0.73).
Family Functioning Scale (APGAR): This test is extracted from the original version "Family APGAR" developed by Smilkstein, Ashworth, and Montano [45] and adapted to Spanish by Bellón, Delgado, Luna, and Lardelli [46]. It uses a three-point Likert scale (0 = almost never, 1 = sometimes and 2 = almost always), along which five positively-framed items are rated. It distinguishes three types of functionality: severe dysfunction (D.S), moderate dysfunction (D.M), and good family functioning (F.F). The internal consistency of the questionnaire in its original version is α = 0.750, whilst Sánchez, Villarreal, and Musitu [47] more recently reported an internal consistency of α = 0.790. Procedure Educational centers were contacted from the University of Granada in order to inform them about the nature of the study, with the centers that voluntarily agreed to participate then being selected for the research. Informed consent packs were given out to students at each center, requesting collaboration from their parents or legal guardians. Next, questionnaires were administered to the group during lesson time. Anonymity of participants was guaranteed, clarifying that collected data would be used purely for scientific purposes. Researchers were present during data collection in order to guarantee the correct development of processes and to resolve any doubts. The present research study received approval from the Ethics Committee of the University of Granada with code 641/CEIH/2018. Data Analysis The statistical software IBM SPSS® version 23.0 (SPSS Inc., Chicago, IL, USA) for Windows was used for the analysis of basic descriptive data. The program IBM AMOS® 23 (International Business Machines Corporation, Armonk, NY, USA) was employed with the aim of analyzing the existing relationships between the constructs included in the structural model. After developing the theoretical model, a path analysis was carried out considering matrix associations through a multi-group analysis which grouped participants as a function of whether or not they regularly participated in physical activity. Finally, a path model was constructed which comprised nine factors (Figure 1). Difficulty of physical education (DEF), usefulness of physical education (UEF), empathy with the teacher (EPA), and agreement with subject management (COA) act as exogenous variables in the model. On the other hand, the variables describing rating of the subject or of the teacher (VPEF) and the preference for physical education as sport (PEFD) receive the effects of the exogenous variables, whilst emotional isolation (SOLEM), family functioning (APGAR), and physical self-concept (AF) receive the effects of VPEF and PEFD. These last five variables act as endogenous variables within the model, together with their associated error terms.
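The internal-consistency figures quoted for the instruments above are Cronbach alpha coefficients. As a generic illustration (not the authors' analysis scripts), the coefficient can be computed directly from an item-response matrix; the example responses are hypothetical.

```python
# Minimal sketch: Cronbach's alpha for a scale, assuming `items` is a pandas DataFrame
# with one column per item and one row per respondent. Purely illustrative.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                          # number of items in the scale
    item_var = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 1-5 Likert responses for a three-item subscale:
# responses = pd.DataFrame({"item1": [4, 5, 3, 4], "item2": [4, 4, 3, 5], "item3": [5, 5, 2, 4]})
# print(round(cronbach_alpha(responses), 3))
```

Likewise, the path structure just described (the attitude dimensions acting on VPEF and PEFD, which in turn act on emotional isolation, family functioning, and physical self-concept) can be written in lavaan-style syntax. The study itself fitted the model in IBM AMOS; the sketch below instead uses the open-source semopy package as a stand-in, and both the exact set of paths and the data frame df are assumptions based on the description in the text and Figure 1, not a reproduction of the authors' model file.

```python
# Minimal sketch of one plausible reading of the hypothesized path model, fitted with
# the open-source semopy package (the study used IBM AMOS). `df` is assumed to hold
# one column per observed score (DEF, UEF, EPA, COA, VPEF, PEFD, SOLEM, APGAR, AF).
import semopy

MODEL_DESC = """
VPEF ~ DEF + UEF + EPA + COA
PEFD ~ DEF + UEF + EPA + COA
APGAR ~ VPEF + PEFD
SOLEM ~ VPEF + PEFD + APGAR
AF ~ VPEF + PEFD + APGAR
"""

def fit_path_model(df):
    model = semopy.Model(MODEL_DESC)
    model.fit(df, obj="MLW")               # maximum likelihood estimation, as in the study
    estimates = model.inspect()            # path coefficients and their p-values
    fit_stats = semopy.calc_stats(model)   # chi-square, CFI, NFI, RMSEA, among other indices
    return estimates, fit_stats
```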
Unidirectional arrows show the effects between the variables incorporated (direct and indirect). Likewise, parameter estimation was carried out through the maximum likelihood (ML) method, as this method is coherent, unbiased, and invariant to scale type. Error terms were established for all of the endogenous variables. Model fit was examined with the aim of verifying the compatibility of the model with the empirical data obtained. Analysis of the reliability of model fit was performed according to the goodness of fit criteria established by Marsh [48], p. 785. In the case of the chi-squared analysis, non-significant values associated with p indicate good model fit. Comparative fit index (CFI) values are considered acceptable if they are greater than 0.90 and excellent if they are greater than 0.95. The normalized fit index (NFI) should be greater than 0.90. Incremental fit index (IFI) values are considered acceptable if they are greater than 0.90 and excellent if they are greater than 0.95. Finally, root mean square error of approximation (RMSEA) values are considered excellent if they are lower than 0.05 and acceptable if they are lower than 0.08. Results With regards to the descriptive data referred to in Table 1, the sample is composed of a total of 2388 students of both sexes (48.2% males and 51.8% females), enrolled on the 3rd year of primary education or the 1st year of CSE in one of the eight provinces of Andalusia. 81.5% of participants belong to a functional family, 15.1% to a moderately functional family and 3.4% to families with signs of serious dysfunction. With regards to attitudes towards physical education, the most highly rated dimension is agreement with PE management (M = 3.01).
All of the other dimensions were found to have values lower than 3, with the lowest mean pertaining to usefulness of PE (M = 2.06). With regards to self-concept, the academic dimension received the highest rating (M = 3.61), followed by the physical (M = 3.54), social (M = 3.48) and family (M = 3.39) dimensions, and finally, emotional self-concept (M = 3.02). In reference to the variable describing isolation, the highest score was achieved for the dimension describing subjective evaluations of the social network (M = 2.94), followed by general isolation (M = 2.00), and finally, emotional isolation (M = 1.95). Table 2 shows the mean scores obtained for the dimensions of attitude towards physical education and family functioning, finding statistically significant differences (p = 0.000). These are reflected in higher ratings of the subject and of the teacher on behalf of students who have highly functioning families (M = 2.76), relative to those who have moderately dysfunctional families (M = 2.59). The dimension describing difficulty of PE (p = 0.010) is more highly scored by students who report moderate family dysfunction (M = 2.49) and scored less highly by those with families with severe dysfunction (M = 2.32). With respect to the usefulness of PE, this dimension is more highly scored by students with families with moderate dysfunction (M = 2.21), in comparison to those with highly functional families (M = 2.02). Those who proportion higher scores to the dimension of empathy with the teacher and subject (M = 2.45), and agreement with subject management (M = 3.03) also reported good family functioning. Further, for preference for PE and sport (p = 0.005), the highest score was also obtained for those who have high family functioning (M = 2.33) relative to serious dysfunction (M = 2.10). Table 3 shows the mean scores obtained for the dimensions of self-concept in relation to family functioning. In the following table it can be seen that academic self-concept (p = 0.000 ***) reflects higher scores for those who have a highly functional family (M = 3.69), with scores being lower when family dysfunction is serious (M = 3.06). Social self-concept reaches higher scores amongst those with highly functioning families (M = 3.54) relative to those who present serious (M = 3.10) or moderate dysfunction (M = 3.26). However, in the case of emotional self-concept the highest scores are obtained for those from families with moderate dysfunction (M = 3.12) and the lowest scores relate to those with serious dysfunction (M = 2.99). Finally, it must be highlighted that both family self-concept (M = 3.45) and physical self-concept (M = 3.62) achieve their highest mean values in students from highly functional families, this value being lowest in students from families with serious dysfunction (M < 3). Finally, with regards to the dimensions of isolation in relation to family functioning, statistically significant differences were achieved (p = 0.000 ***) in all of its dimensions (Table 4). Students with moderately dysfunctional families reflect higher levels of emotional isolation (M = 2.36) relative to those who have highly functional families (M = 1.89). In contrast, higher scores are obtained for subjective evaluations of the social network in students with highly functional families (M = 3.00) relative to those who have families with serious dysfunction (M = 2.54). 
This is in contrast to general isolation, whose highest mean value pertained to serious family dysfunction (M = 2.401) and lowest value pertained to those who were highly functional (M = 1.93). Table 5 shows the bivariate correlations between study variables. Turning attention to the physical dimension of self-concept in relation to attitudes towards PE, a positive correlation is observed with the dimensions describing students' ratings of the subject of physical education and its teacher (r = 0.230 **), empathy towards the teacher and subject (r = 0.229 **), agreement with subject management (r = 0.210 **) and preference for PE and sport (r = 0.236 **). The positive correlation is weakest in relation to the dimensions describing the difficulty presented by PE (r = 0.153 **) and PE as sport (r = 0.060 **), whilst a low negative correlation is seen with usefulness of PE (r = −0.046 *). Attending to the dimension of emotional isolation in relation to attitudes towards physical education, it is observed that when emotional isolation is higher, ratings of the subject and the teacher imparting it are lower (r = −0.170 **). In contrast, when this emotional isolation is greater, perceived difficulty of this subject is also higher (r = 0.066 **). In the same way, a medium correlation is produced between the dimension of isolation and usefulness of PE (r = 0.323 **); however, when this emotional isolation is greater, agreement with subject management decreases (r = −0.191 **). In addition, a slight agreement is maintained regarding preference for PE and sport (r = 0.107 **), and with PE as sport (r = 0.085 **). Once the descriptive and comparative analysis had been performed, a structural equation model was constructed (Figure 2) which included the questionnaire items pertaining to attitude towards PE together with the variables related to it: emotional isolation (SOLEM), family functioning (APGAR), and physical self-concept (AF). Good fit was obtained for all evaluation indices of the structural equation model. Chi-squared analysis revealed a significant value (χ2 = 233.023; df = 14; p < 0.001), although we must bear in mind that this statistic, as an index, has no upper limit. Further, problems arise as it cannot be interpreted in a standardized way and is sensitive to sample size. In addressing this, other standardized fit indices are employed which are less sensitive to sample size. The comparative fit index (CFI) showed a value of 0.913, this being acceptable. The normalized fit index (NFI) specified a value of 0.917 and the incremental fit index (IFI) was 0.906, both being acceptable. Analysis of the root mean square error of approximation (RMSEA) obtained an acceptable value of 0.072. Table 6 and Figure 2 present the values produced for the associations between the variables included in the structural equation model developed. Addressing the first level of the model (associations found between DEF, UEF, EPA and COA), all of the exogenous variables of the model show statistically significant relationships at the level p < 0.001. These were positive and direct in all cases except for the association between COA and UEF (r = −0.110).
Note 3: *Statistically significant association between variables at the level 0.05; ** Statistically significant association between variables at the level 0.01, *** Statistically significant association between variables at the level 0.001. Approaching the second level, it can be observed that VPEF is positively associated with DEF (r = 0.127; p < 0.001), COA (r = 0.230; p < 0.001), and EPA (r = 0.309; p < 0.001), whilst the association between VPEF and UEF was negative (r = 0.170; p < 0.001). On the other hand, if the associations between PEFD and the exogenous variables are considered, statistically significant relationships are shown with DEF (r = 0.054; p < 0.01), UEF (r = 0.204; p < 0,001), and EPA (r = 0.335; p < 0.001). No statistically significant associations were obtained between COA and PEFD (p = 0.544). Following this, the endogenous variables in the second level of the model are related with emotional isolation, family functioning, and physical self-concept. In the first instance, a positive relationship can be observed between VPEF and family functioning (r = 0.143; p < 0.001), and between VPEF and physical self-concept (r = 0.154; p < 0.001), whilst this variable was negatively related with emotional isolation (r = −0.177; p < 0.001). Along similar lines, PEFD was directly associated with physical self-concept (r = 0.189; p < 0.001) and emotional isolation (r = 0.161; p < 0.001), although in this case an association was not found with family functioning (p = 0.544). Finally, it is highlighted that family functioning maintains a positive and direct relationship with physical self-concept (r = 0.207; p < 0.001), whilst this association was negative and indirect with emotional isolation (r = −0.224; p < 0.001). Discussion The present study, conducted with students in the 3rd year of primary education or the 1st year of CSE from the eight provinces of Andalusia, pursues as its principal objectives the analysis of existing relationships between the attitudes towards physical education of schoolchildren and their physical self-concept, level of emotional isolation and family functioning. These premises are of great interest when it comes to developing and adapting concrete teaching methods and strategies, which favor the development of programs that incentivize students' participation in AF and assume a positive role in the psychosocial development of students [49]. Analyzing the family context, it is deduced that eight out of every ten students belong to a family with good family functioning, with very few students coming from families with severe dysfunction. This type of family functioning will facilitate engagement in PA [50], although some authors maintain that students reporting dysfunctional family contexts can report high levels of PA, suggesting that PA may exert a compensatory effect [51]. Further, a good family climate will serve to extrapolate this effect to the social relationships maintained within individuals' personal context. This not only relates to their affect for and trust in others, but also to the way in which they approach conflict resolution or help others. In reference to attitudes towards PE, the most highly rated dimension is agreement with subject management relating to PE [52], in contrast to the usefulness of this subject which demonstrated a trend towards considering PE as of little use [53,54]. On the contrary, numerous studies positively rate the usefulness of engaging in sport and PA on a daily basis [55,56]. 
This positive evaluation of the usefulness of PE leads to intrinsic motivation for engaging in PA, both inside and outside of the school timetable [57]. Another significant fact uncovered by the study is that self-concept, in all of its dimensions, achieved a higher level amongst individuals who engaged in PA. This confirms the positive effects of PA at a physical and mental level, on social relations, and on academic performance [13], this being even more evident during the adolescent stage [58]. Other studies carried out on self-concept and PE, relate engagement in PA with improved physical self-concept in a more binding way, whilst failing to find relationships with other dimensions of self-concept [59]. Higher levels of isolation, both in a general and in an emotional sense, are evident within students who do not engage in PA [60], with subjective evaluations of the social network being greater amongst those who are active [61]. However, other authors maintain that greater subjective evaluations of the social network come from students who do not count on an excessive number of social links, given that their subjective social network is smaller and can be more effectively controlled [62]. With regards to family relations and engagement in PA, data from both the present study and other research studies indicate that the level of family functioning does not have a significant relationship with engagement in PA and/or sport [63]. Nevertheless, studies such as those conducted by Aaltonen, Kaprio, Kujala, Pulkkinen, Rose, and Silventoinen [64] argue that when family functioning is good, so are the PA levels of family members. Attitudes towards PE and family functioning, students who rate the subject and teacher more highly, report agreement with subject management and state a preference for EF and sport, also tend to present high family functioning. This is the case despite these students considering the subject to be of little use [65]. In contrast, those who rate the teacher and physical education less highly tend to be those students with moderately dysfunctional families, with these students also perceiving the aforementioned subject as being more challenging yet highly useful [66]. Thus, lack of attention to the basic psychological needs from within the family unit and positive relations between members, will have negative social repercussions for these students at school. Such repercussions could impinge upon their attitudes and interest towards PA and sport engagement. Good family functioning is also related with a high academic and social self-concept [67], in contrast to the higher emotional self-concept seen in students with families that present moderate dysfunction [68]. Both family and physical self-concept are higher in students who belong to functional families, whilst lower levels correspond to families with a serious dysfunction [69]. However, other research studies subscribe to different conceptions. These have demonstrated that family self-concept is higher within students with functional families, whilst no direct link was established between physical self-concept and family type, but was with the development of abilities through sporting practice [70]. Other works do not link the level of self-concept with the type of family to which students belong [71]. With respect to the variables of isolation according to family functioning, we found that emotional isolation predominates above all within those students who belong to severely dysfunctional families. 
High scores are obtained within functional families for the subjective evaluation of the social network, with this instead being low within those who live within a seriously dysfunctional family. In the same way, students who present greater general isolation are those who belong to seriously dysfunctional families. Research conducted by other authors collaborate these results [72], whilst others such as Twenge, Spitzburg, and Campbell [73] did not manage to demonstrate that isolation in adolescents is more linked to one type of family over another type. These authors instead found high levels of isolation within adolescents coming from functional families. This makes it clear that their isolation could be related to a lack of activities performed with peers during out of school hours and not to family typology. Analysis of bivariate correlations performed with the variables of physical self-concept, emotional isolation and attitude towards PE, demonstrated that students who have a good physical self-concept present scant levels of isolation at a general or emotional level. These data are in agreement with data produced by other studies and reflect that the lower the level of isolation in students, the greater their physical self-concept and security of the social relationships they establish with their social environment. All of this facilitates greater engagement in PA [74]. In the same way, greater physical self-concept is generally related with stronger attitudes towards PE [75,76], with a positive relationship between preference for PE and sport, and a negative relationship with its usefulness standing out. Attending to the dimensions of isolation in relation to those pertaining to attitudes towards physical education, it is observed that when emotional isolation increases in students, the rating that they attribute to the subject and teacher is lower [77,78], and they report less agreement with respect to subject management [79,80]. However, we demonstrate that this is not a specific aspect directed towards the teacher or this subject but is a trait that these types of students maintain towards teaching staff and all subjects generally speaking [81]. Nevertheless, they do perceive this subject to be more difficult [80] and highly useful [79]. This is in contrast to findings reported in other studies such as that conducted by Krause, Gulick, and Basin [82]. With regards to the dimensions pertaining to attitude towards PE, students who rate the subject as more difficult also tend to perceive it as being more useful [83]. Those who rate the subject and teacher more highly also demonstrate greater agreement with its management [84], whilst in contrast, also considering it to be less useful. When students conceive PE as sport, they find it more difficult as a subject [85], view it as being more useful, and hold a stronger preference for it [86]. From the structural equations carried out it can be seen that an increase in the perceived difficulty of physical education goes hand in hand with an increase in reports of its usefulness, coherence in the way it is managed and empathy towards the teacher who imparts it [87]. It can be confirmed that when each one of these variables increases, so to do the other variables, except in the case of coherence in subject management. Instead, when this increases, subject usefulness decreases. For its part, it is observed that ratings of the subject of PE improve when the difficulty of PE, coherence, organization, and empathy for the teacher increases. 
This final point has also been endorsed in other studies [88]. However, ratings of PE decrease when the usefulness of PE increases [89]. In the same way, ratings of the subject of PE improve when the difficulty of PE, coherence and organization increase. However, just as has been stated by other authors [90], ratings of PE decrease when the usefulness of PE increases. Finally, it is of note that preferences for PE and sport increase when met by increases in the perceived difficulty and usefulness of PE. This study presents some limitations. The first of these is due to the fact that it deals with a cross-sectional study and so is not able to establish causal relationships. A second limitation is that this study does not enable generalization of the data obtained to other populations. This being said, the explanatory model developed does enable a better understanding of the associations between variables. Further, we consider the sample to be broad and representative of the targeted population. For this reason, the present research permits development of a future line of research, providing a base from which we can evaluate, compare, and replicate the study with other groups of students. Findings will also be used for the planning and design of future research in a classroom setting. Conclusions The main conclusions of the present study confirm that the physical dimension of self-concept is high in students who maintain low percentages in all of the dimensions of isolation. Those families in which functionality is good engage regularly in AF. At the same time, it is also reflected that students who rate the subject and teacher more highly, whilst also demonstrating empathy towards both, tend to be those who present relatively high family functioning. The same can be seen to occur in reference to subject management, and preference for PE and sport. With regards to the theoretical model of self-concept, isolation, family functioning and attitude towards PE, it presented good fit. The data indicate that engagement in PA is not determined by the other variables measured in the model. Results also suggest that family functioning intervenes to a large extent on self-concept and its dimensions, in addition to the isolation demonstrated by students. It is concluded that attitudes towards PA are positive when levels of isolation are low and when an adequate self-concept is present. All of this is typically generated in students who, in the majority of cases, come from families with high levels of functioning.
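As a purely illustrative aside on the fit statistics reported for the structural model above (χ² = 233.023, df = 14, RMSEA = 0.072), the root mean square error of approximation can be recomputed from the chi-squared value, its degrees of freedom and the sample size. The sketch below is not the procedure used by the authors; the sample size `n_obs` is a placeholder rather than a value taken from this study, and software packages apply slightly different corrections.

```python
import math

def rmsea(chi2: float, df: int, n_obs: int) -> float:
    """Point estimate of RMSEA from a chi-squared model test.

    Uses the common formula sqrt(max(chi2 - df, 0) / (df * (n_obs - 1))).
    """
    noncentrality = max(chi2 - df, 0.0)
    return math.sqrt(noncentrality / (df * (n_obs - 1)))

# chi2 and df are the values reported above; n_obs is a hypothetical sample size.
print(round(rmsea(chi2=233.023, df=14, n_obs=2000), 3))
```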
2019-12-28T14:03:07.625Z
2019-12-21T00:00:00.000
{ "year": 2019, "sha1": "bdd72085921c30e5c3b8f326f9bf7d0e683b5b9b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/17/1/94/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "cd1a446ca78e8092de6e583f850d581556bc08d2", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine", "Psychology" ] }
41615728
pes2o/s2orc
v3-fos-license
Involvement of Cholinergic Motor Neurons in Pharmacological Regulation of Gastrointestinal Motility by Glucagon in Conscious Dogs

To clarify the exact mechanisms of the pharmacological effects of glucagon on gastrointestinal motility, the following experiments were performed on conscious and anesthetized dogs. 1) During phase I of interdigestive migrating contractions (IMC), glucagon (5-50 μg/kg, drip infusion for 5 minutes) induced phasic contractions in the duodenum, jejunum and ileum, but not in the antrum. These excitatory responses were also observed in the truncal vagotomized dogs. These contractions were abolished by atropine or hexamethonium in the conscious dogs, and also by tetrodotoxin in the anesthetized dogs. 2) Glucagon inhibited cisapride-induced contractions only in the antrum in the conscious dogs. After pre-treatment with hexamethonium, glucagon inhibited these contractions in the duodenum, jejunum and ileum as well as in the antrum. After pre-treatment with tetrodotoxin in the anesthetized dogs, glucagon did not affect acetylcholine-induced contractions in any region. 3) Glucagon inhibited spontaneous phase III contractions and erythromycin-induced phase III-like contractions in the antrum, but did not inhibit either type of contraction in the other regions in the conscious dogs. These paradoxical effects of glucagon between the antrum and intestine were similar to those seen with blockade of 5-hydroxytryptamine 3 (5-HT3) receptors. After pre-treatment with hexamethonium, glucagon inhibited these contractions in the duodenum, jejunum and ileum as well as in the antrum. In conclusion: 1) Glucagon latently inhibits cholinergic motor activities in the antrum and intestine not directly by binding to either receptor on the smooth muscle cells, but through postganglionic cholinergic neurons and possibly through 5-hydroxytryptamine neurons. 2) On the other hand, in the intestine the reverse effects through preganglionic cholinergic neurons involving nicotinic and muscarinic receptors are more potent. 3) As a result, glucagon inhibits antral contractions and does not affect intestinal contractions in a conscious state.

Introduction

Glucagon is one of the peptide hormones secreted from the A cells of the pancreatic islets. It antagonizes the effects of insulin and plays an important role in the regulation of blood glucose. It also has a number of other actions, including a hypomotility and hypotonicity action on gastrointestinal motility (Stunkard et al., 1955; Sudsaneh et al., 1959; Detevall et al., 1963; Necheles et al., 1966). It is thus now commonly used as a pre-treatment drug for radiodiagnostics (Miller et al., 1974; Kreel et al., 1975; Carsen et al., 1976; Ishii et al., 1978) or endoscopic examinations (Qvigstad et al., 1979) of the gastrointestinal tract in patients with complications such as heart diseases (Giesen, 1978; Harada et al., 1997), glaucoma (Sissons et al., 1991; Fink et al., 1995) and hypertrophy of the prostate (Chernish et al., 1972). The enteric nervous system may be a target of glucagon (Takenaka et al., 1975; Lin et al., 1989), but the exact mechanisms of the inhibitory effects have not been elucidated. On the other hand, the gene encoding proglucagon, the precursor of glucagon, is expressed not only in the pancreatic islets but also in the endocrine cells of the gastrointestinal mucosa. The proglucagon-derived peptides produced by the L cells in the jejunum, ileum and colon are called enteroglucagon and are partly composed of pancreatic glucagon.
It is secreted into the blood in response to ingestion of carbohydrates and long-chain fatty acids, and may be one of the candidates for the "ileal brake", which inhibits upper gastrointestinal functions elicited by the presence of unabsorbed nutrients in the ileum (Holst, 1997). Glucagon and enteroglucagon interact with a common receptor in vitro (Gros et al., 1993), but its physiological role in vivo is not yet clear. The purpose of the present study was to investigate the pharmacological effects of glucagon and the exact mechanisms of its action on interdigestive contractions of the gastrointestinal tract in conscious and anesthetized dogs.

Preparation of animals

Eleven healthy adult mongrel dogs of either sex weighing 8.8-17.5 kg were used in these experiments. The procedures were approved by the Review Committee on Laboratory Animal Science of Hiroshima University, Japan. Under pentobarbital sodium anesthesia (25 mg/kg body weight, i.v.), the abdominal cavity was opened and strain gauge force transducers (Star Medical, Japan, F-12IS) were sutured onto the serosal side of various regions of the gastrointestinal tract (the gastric antrum 5 cm proximal to the pyloric ring, the duodenum at the level of the main pancreatic duct, the jejunum 15 cm distal to the ligament of Treitz, and the ileum 15 cm proximal to the ileo-cecal junction) so that the contractile activities of the circular muscle could be recorded. A truncal vagotomy was performed on two of the dogs at the level of the abdominal esophagus. The completeness of the truncal vagotomy was examined according to a previous report (Mukai, 1984). Transducer lead wires were taken out of the abdominal cavity through a subcutaneous tunnel and brought out through a skin incision at the middle region of the superior end of the bilateral shoulder blades. After closure of the abdominal cavity, Silastic tubes (Argyle, Japan, 1216-27-P) were inserted into the right and left femoral veins for intravenous administration of agents or blood sampling. The tubes were brought out through another skin incision on the back and their outer ends were fixed to the skin with nylon sutures. After surgery, jacket-type protectors (Star Medical, Japan, FPJ-12) were put on the dogs to protect the lead wires and the tubes from scratching and biting. The dogs were housed, and gastrointestinal contractile activity was recorded; the motility index was calculated as in a previous report (Okajima, 1988).

Monitoring of blood glucose concentration

Blood samples were drawn from the left femoral vein through the Silastic tube in a conscious state. The blood glucose concentration was measured by a glucose oxidase method.

Materials

The following agents were used in these experiments:

During phase I of IMC, glucagon induced strong phasic contractions in the duodenum, jejunum and ileum, but did not induce any contractions in the antrum (Fig. 1-A).

Inhibitory effects of glucagon on gastric motility during quiescent phase of IMC

Even if glucagon was administered intravenously during phase I of IMC, no contractions were induced in the antrum. However, it was not clear whether glucagon inhibited antral contractions or not. In order to clarify the effects of glucagon on antral phase I contractile activities, the following experiments were performed. During phase I contractile activities, intravenous administration of cisapride (0.5 mg/kg body weight, drip infusion for 10 minutes) induced rhythmical phasic contractions in every region (Fig. 4-A). These contractions were completely inhibited only in the antrum by glucagon
(Fig. 4-B). After pre-treatment with hexamethonium bromide, a nicotinic receptor antagonist, cisapride-induced contractions were partially inhibited and modified in every region (Fig. 4-C). Furthermore, after blocking the preganglionic excitatory responses of glucagon with hexamethonium bromide, glucagon completely inhibited cisapride-induced contractions in the duodenum, jejunum and ileum as well as in the antrum (Fig. 4-D). In the antrum, glucagon significantly inhibited cisapride-induced contractions with or without pre-treatment with hexamethonium bromide (p<0.001 and p<0.01, respectively), whereas in the duodenum glucagon inhibited these contractions only after pre-treatment with hexamethonium bromide (p<0.01). On the other hand, in the anesthetized dogs after pre-treatment with tetrodotoxin, glucagon did not affect contractions induced by exogenous acetylcholine (acetylcholine chloride, 0.5 mg/kg body weight, drip infusion for 10 minutes) in any region (Fig. 6).

After the administration of glucagon, the blood glucose concentration was measured. The maximal glucose concentration, approximately 180 mg/dl, was observed 5 minutes after the administration of glucagon. The maximal glucose concentration was comparable with that after an administration of glucose at a dose of 0.3 g/kg (Fig. 10). In order to examine the indirect effects of glucagon on gastrointestinal motility through hyperglycemia, glucose was used in place of glucagon. When administered during phase I contractile activities, glucose (0.3 g/kg body weight, i.v.) did not induce contractions in any region (Fig. 11-A). Moreover, glucose did not affect either the cisapride-induced contractions (Fig. 11-B) or the erythromycin lactobionate-induced contractions (Fig. 11-C) in any region. On the other hand, insulin was released in response to glucagon-induced hyperglycemia. As insulin-induced contractions are mediated via the central nervous system, in the truncal vagotomized dogs insulin-induced contractions could not be observed in any region, but glucagon-induced excitatory responses were induced even in the truncal vagotomized dogs in my experiments (Fig. 2). (EM: erythromycin lactobionate; C6: hexamethonium bromide)

Discussion

One of the important actions of glucagon is a hypomotility and hypotonicity action on gastrointestinal motility (Necheles et al., 1966). Zollinger and Ellison (1955) were the first to recognize the effects of glucagon on the gastrointestinal tract. It reduced gastric hunger contractions in 7 healthy volunteers (Stunkard et al., 1955) and promptly inhibited motor activities of the stomach and duodenum (1 mg/body, i.m.) for 25 minutes in humans (Nishioka et al., 1984). On the other hand, it (0.05 μg/kg body weight, i.v.) induced strong contractions in the duodenum in dogs (Furukawa, 1987), and a low-dose continuous administration activated duodenal motility while a high-dose bolus injection inhibited it in dogs (Wingate et al., 1979).
As seen above, in previous papers the effects of glucagon on gastrointestinal motility were inconsistent across regions and species. The mechanisms of its inhibitory effects were the focus of some previous experiments. These were concerned not with direct effects on receptors on the smooth muscle, but with the myenteric nervous system in rabbits (Takenaka et al., 1975), or with interference with intramural cholinergic neuronal transmission in the rat esophagus (Lin et al., 1989). Until now it has been thought that the enteric nervous system might be a target of glucagon. On the other hand, some papers have indicated that glucagon acts directly on gastric smooth muscle cells in humans (Wingate et al., 1979; VandeCreek et al., 1986). In my experiments the exact mechanisms of glucagon on upper gastrointestinal motility were examined in conscious or anesthetized dogs.

Intravenous administration of glucagon during the quiescent phase of IMC induced a series of strong contractions in the duodenum, jejunum and ileum in the conscious dogs. However, no contractions were induced in the antrum (Fig. 1-A). These contractions were also induced in the truncal vagotomized dogs in a fasted state (Fig. 2). The glucagon-induced contractions were inhibited by atropine sulfate or hexamethonium bromide (Fig. 1-B and C). Moreover, after pre-treatment with tetrodotoxin in the anesthetized dogs, glucagon did not induce contractions in any region (Fig. 3). These facts indicate that glucagon-induced excitatory responses in the duodenum, jejunum and ileum may be mediated by preganglionic cholinergic neurons involving nicotinic and muscarinic receptors in the myenteric plexus.

In previous reports it was not clear how glucagon affected antral motility during the quiescent phase of IMC. To clarify whether glucagon inhibited antral motility or not, I investigated the effects of glucagon on contractions induced by cisapride, which is known to be an agonist at neural 5-hydroxytryptamine 4 (5-HT4) receptors in the cholinergic motor pathways and accelerates endogenous acetylcholine release from cholinergic nerve endings in the myenteric plexus (Hardcastle et al., 1984; Suzuki et al., 1985; Fujii et al., 1988; Taniyama et al., 1991). Cisapride induced strong rhythmical contractions in every region (Fig. 4-A), and glucagon inhibited cisapride-induced contractions only in the antrum (Fig. 4-B). Furthermore, after administration of hexamethonium bromide to prevent the preganglionic excitatory responses of glucagon, glucagon completely inhibited cisapride-induced contractions in the duodenum, jejunum and ileum as well as in the antrum (Fig. 4-D). On the other hand, glucagon did not inhibit exogenous acetylcholine-induced contractions in any region after pre-treatment with tetrodotoxin in the anesthetized dogs (Fig. 6). These facts indicate that glucagon latently inhibits cholinergic motor activities not directly via either receptor on the smooth muscle cells but through postganglionic neurons in the duodenum, jejunum and ileum as well as in the antrum, whereas glucagon also preganglionically activates cholinergic activity in the duodenum, jejunum and ileum. As a result, cisapride-induced contractions were inhibited by glucagon only in the antrum, and were not inhibited in the duodenum, jejunum and ileum in a conscious state (Fig. 4-B).
How glucagon affects phase III contractions is not yet clear. In my experiments glucagon instantly eliminated phase III contractions in the antrum and somewhat altered the patterns of phase III contractions in the duodenum, jejunum and ileum (Fig. 7-A and B). These findings in the antrum and duodenum are consistent with previous reports in dogs (Wingate et al., 1979; Furukawa, 1987). To clarify these mechanisms, erythromycin lactobionate was used to induce phase III-like contractions (Itoh et al., 1984; Satoh et al., 1994). Administration of erythromycin lactobionate during phase I of IMC immediately induced strong rhythmical contractions similar to spontaneous phase III contractions (Fig. 8-A). It has been commonly accepted that the cholinergic pathways and 5-HT3 neurons are involved in these contractions (Itoh et al., 1977; Itoh et al., 1978; Itoh et al., 1991; Mizumoto et al., 1993; Haga et al., 1996). In fact, these contractions were completely inhibited by atropine sulfate and partially inhibited by hexamethonium bromide (Qin et al., 1993; Shiba et al., 1995). Glucagon inhibited erythromycin lactobionate-induced contractions only in the antrum (Fig. 8-B), whereas in the duodenum, jejunum and ileum they were strongly inhibited by glucagon only after pre-treatment with hexamethonium bromide (Fig. 8-C and D). As stated above, it is suspected that glucagon latently and postganglionically inhibits phase III contractions and erythromycin lactobionate-induced phase III-like contractions in every region, but that in the duodenum, jejunum and ileum the reverse effects through preganglionic cholinergic neurons are so strong that intestinal motility is hardly affected in a conscious state (Fig. 7-B and 8-B). These paradoxical effects of glucagon between antral and intestinal phase III contractions are similar to those of the antagonism of neural 5-HT3 receptors (Itoh et al., 1991). These facts indicate the involvement of cholinergic neurons, and possibly 5-HT3 receptors or neurons, in the inhibitory effects of glucagon.

Finally, to confirm that these excitatory or inhibitory effects were not an indirect action of glucagon through hyperglycemia or hyperinsulinemia, some additional experiments were performed. It is well known that continuous hyperglycemia inhibits gastrointestinal motility in a conscious state. Gastric contractions were nearly absent at a serum glucose level of 250 mg/dl for 3 hours and markedly reduced at 175 and 140 mg/dl, but duodenal phase III activities were unchanged at all levels of glucose infusion in healthy volunteers (Barnett et al., 1988). However, in my experiments the hyperglycemia following glucagon administration lasted for only a short period (Fig. 10), and even when glucose was administered in place of glucagon, no excitatory or inhibitory responses were observed in any region (Fig. 11-A, B and C). On the other hand, it was also reported that hyperinsulinemia enhanced cholinergic motor activities via the central nervous system (Rayner et al., 1981) and increased antral motility in vagally innervated dogs but not in truncal vagotomized dogs (Yokomichi et al., 1976). In my experiments, glucagon-induced excitatory responses were induced even in the truncal vagotomized dogs (Fig. 2).
These facts indicate that these excitatory and inhibitory effects are not an indirect action through secondary hyperglycemia or hyperinsulinemia, but a direct pharmacological action of glucagon itself in the myenteric plexus. In previous studies it was not clear whether glucagon acted directly on glucagon receptors or on other receptors located on the neurons, or acted indirectly through other chemical mediators. Many hormones, including enteroglucagon, might act through paracrine release of somatostatin (Lloyd, 1994). In fact, glucagon receptors were detected in a somatostatin-secreting cell line, RIN T3 (Gros et al., 1993). Furthermore, glucagon receptor mRNA transcripts were detected in the central nervous system, jejunum and ileum in both fetal and adult mice (Campos et al., 1994), but what kind of neurons glucagon receptors are located on has not been ascertained. Further examination will clarify the location of glucagon receptors.

In conclusion, glucagon inhibits cholinergic motor activities not directly by binding to either receptor on the smooth muscle cells but through postganglionic cholinergic neurons, and possibly 5-HT neurons, in the duodenum, jejunum and ileum as well as in the antrum. On the other hand, glucagon activates cholinergic motor activities through preganglionic cholinergic neurons involving nicotinic and muscarinic receptors in the duodenum, jejunum and ileum.

Fig. 4. Effects of glucagon on cisapride-induced gastrointestinal contractions with or without pre-treatment with hexamethonium bromide (C6). A: When cisapride was administered during phase I contractile activities of IMC, rhythmical phasic contractions occurred in every region. B: Glucagon completely inhibited cisapride-induced contractions only in the antrum, but scarcely had any influence in the duodenum, jejunum and ileum. C: With pre-treatment with hexamethonium bromide, cisapride-induced contractions were partially inhibited. D: With pre-treatment with hexamethonium bromide, glucagon completely inhibited cisapride-induced contractions in the duodenum, jejunum and ileum as well as in the antrum.

Fig. 9. Effects of glucagon on erythromycin lactobionate-induced antroduodenal contractions in conscious dogs by means of motility index-15 (MI-15). A: In the antrum glucagon significantly inhibited erythromycin lactobionate-induced contractions with or without pre-treatment with hexamethonium bromide (C6) (p<0.001 and p<0.01, respectively). B: In the duodenum glucagon inhibited these contractions only with pre-treatment with hexamethonium bromide (p<0.05). Values are means±S.D. for six to eight experiments.

Fig. 11. Influences of glucose on gastrointestinal motility in conscious dogs. A: When administered during phase I contractile activities of IMC, glucose did not induce contractions in any region. B: Glucose did not inhibit cisapride-induced contractions in any region. C: Glucose did not inhibit erythromycin lactobionate-induced contractions in any region.
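The contractile responses quantified in Fig. 9 use a motility index (MI-15); its precise definition follows Okajima (1988), cited in the Methods, and is not restated here. Purely as an illustration of the general idea, the sketch below integrates a force-transducer trace above an estimated baseline over consecutive 15-minute windows; the sampling rate, baseline estimate and window handling are assumptions for the example, not the study's actual parameters.

```python
import numpy as np

def motility_index(force: np.ndarray, fs_hz: float, window_min: float = 15.0) -> np.ndarray:
    """Integrate the force signal above baseline over consecutive windows.

    force: force-transducer output (arbitrary units), one value per sample.
    fs_hz: sampling rate in Hz (assumed; not stated in the paper).
    Returns one index value per complete window.
    """
    baseline = np.percentile(force, 10)            # crude baseline estimate (assumption)
    activity = np.clip(force - baseline, 0, None)  # keep only activity above baseline
    samples_per_window = int(window_min * 60 * fs_hz)
    n_windows = len(activity) // samples_per_window
    trimmed = activity[: n_windows * samples_per_window]
    return trimmed.reshape(n_windows, samples_per_window).sum(axis=1) / fs_hz

# Example with synthetic data: 1 Hz sampling, 60 minutes of recording.
rng = np.random.default_rng(0)
trace = np.abs(rng.normal(0.0, 1.0, 3600))
print(motility_index(trace, fs_hz=1.0))
```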
2018-04-03T05:06:39.162Z
1997-08-01T00:00:00.000
{ "year": 1997, "sha1": "6c3acc883487a2040f790faa7c6a08c0211f7491", "oa_license": "CCBYNC", "oa_url": "https://www.jstage.jst.go.jp/article/jsmr1991/33/4-5/33_4-5_145/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6c3acc883487a2040f790faa7c6a08c0211f7491", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
229174768
pes2o/s2orc
v3-fos-license
Direct comparison of biopsy techniques for hepatic malignancies Background/Aims The core needle biopsy (CNB), fine needle aspiration cytology (FNAC) and touch imprint cytology (TIC) are commonly used tools for the diagnosis of hepatic malignancies. However, little is known about the benefits and criteria for selecting appropriate technique among them in clinical practice. We aimed to compare the sensitivity of ultrasound-guided CNB, FNAC, TIC as well as combinations for the diagnosis of hepatic malignancies, and to determine the factors associated with better sensitivity in each technique. Methods From January 2018 to December 2019, a total of 634 consecutive patients who received ultrasound-guided liver biopsies at the National Taiwan University Hospital was collected, of whom 235 with confirmed malignant hepatic lesions receiving CNB, FNAC and TIC simultaneously were enrolled for analysis. The clinical and procedural data were compared. Results The sensitivity of CNB, FNAC and TIC for the diagnosis of malignant hepatic lesions were 93.6%, 71.9%, and 85.1%, respectively. Add-on use of FNAC or TIC to CNB provided additional sensitivity of 2.1% and 0.4%, respectively. FNAC exhibited a significantly higher diagnostic rate in the metastatic cancers (P=0.011), hyperechoic lesions on ultrasound (P=0.028), and those with depth less than 4.5 cm from the site of needle insertion (P=0.036). Conclusions The sensitivity of CNB is superior to that of FNAC and TIC for the diagnosis of hepatic malignancies. Nevertheless, for shallow (depth <4.5 cm) and hyperechoic lesions not typical for primary liver cancers, FNAC alone provides excellent sensitivity. INTRODUCTION The histological and cytological examination of liver tissues is crucial for the diagnosis of diffuse and focal hepatic lesions on images. 1 The liver biopsy is not only an important tool to correctly diagnose focal liver lesions, but also provide molecular information to guide treatment plans and future studies in hepatocellular carcinoma (HCC) and metastatic cancers. 2 Currently, there are three major methods for percutaneous tissue sampling and subsequent examination, namely core needle biopsy (CNB), fine needle aspiration cytology (FNAC), and touch imprint cytology (TIC). The CNB is usually conducted with a 16-gauge core needle and is expected to obtain sufficient tissue. However, the risk of severe complication including intraperitoneal bleeding or needle tract seeding of malignancy is the major concern. 3 FNAC and TIC provide the choice of rapid on-site examination (ROSE), which may be helpful for guidance of clinical practice at the time of sampling. 4 The FNAC is considered to be safer and less costly than CNB because it is usually performed with a 22-gauge fine needle. The drawbacks of the cytological examination include the lack of detailed diagnostic information regarding the tissue architectures. Although TIC does not cause additional risk, it may deplete cellularity and DNA of the obtained CNB specimens. 5 Several studies comparing the CNB and FNAC in multiple organs including liver have generated conflicting and inconclusive results. 6 Some recent studies suggested that the accuracy of the two methods are comparable, and the FNAC might provide higher sensitivity in metastatic liver mass. 7 Another study concluded the specimen from fine needle aspiration is accurate and produces higher tumor fraction for molecular studies than that from CNB. 8 However, most of prior studies focused on the pathologic diagnosis and findings. 
Little information was provided regarding the clinical and procedural conditions, such as the tumor size, echogenicity and the depth from the insertion site to the tumor margin. In addition, there is no established consensus or criteria for guidance of technique selection. http://www.e-cmh.org https://doi.org/10.3350/cmh.2020.0301 In this study, we performed a direct comparison between CNB, FNAC as well as TIC in hepatic malignancies receiving real-time ultrasound-guided biopsies, and determined the clinical factors associated with better sensitivity in each method. Patients and procedures From January 2018 to December 2019, a total of 634 consecutive percutaneous ultrasound-guided liver biopsies at the National Taiwan University Hospital were collected. Of these patients, 235 with ultrasound-detectable malignant lesions receiving CNB, FNAC and TIC simultaneously were included for analysis. The flowchart of patient selection is shown in Figure 1. The sampling of the hepatic lesions of interest was guided by the real-time ultrasonography (Aplio 500; Toshiba Medical Systems Corporation, Tochigi, Japan) with linear (PLT-308P; Toshiba Medical Systems Corporation) or convex (PVT-350BTP; Toshiba Medical Systems Corporation) transducers. All patients received local anesthesia prior to the protocolized procedures, which included a FNAC with 4-6 axial movements inside the lesion by a 22-gauge needle connected to a 20-mL syringe, followed by the CNB with a springloaded 16-gauge needle, and then the TIC via gently touching the obtained tissue on two slides. The samples from fine needle aspiration were released onto two slides equally and then covered by the other two slides, followed by gently and quickly pulling each pair apart. The smears were prepared by both air-drying for Liu's stain and 95% ethanol wet fixation for Papanicolaou stain in every FNAC (four slides) and TIC (two slides). In patients with multiple hepatic tumors, the choice of targeted lesion was based on the combination of the location, size, depth, nearby vessels or organs to achieve maximal safety and yield as possible. The conduction of all the procedures was led by well-experienced interventional hepatologists with more than 100 cases performed per year. The samples of biopsy and cytology were examined by the certified pathologists and cytopathologists separately. ROSE was not the routine practice for liver biopsies in our hospital. Cell blocks were not performed for the cytology specimens. All patients gave written Informed consents for the invasive intervention. The study was approved by the Institutional Review Board of National Taiwan University Hospital (202006053RINA) and conformed to the ethical guidelines of the 1975 Declaration of Helsinki. The informed consent for the study was waived because it was a retrospective study involving review of medical record only. Data collection The clinical and procedural information was collected by retro- spective review of the medical records and images. A standardized record form was used. The clinical information including age, gender and the final diagnosis for the patients was recorded. The procedural information was composed of tumor location, size, number, echogenicity as well as the depth from the skin site of needle insertion to the margin of the targeted tumors. In this study, the histopathological diagnosis of malignancy from the CNB specimens was defined as the gold standard. 
The results of cytological examination were considered as non-diagnostic in malignant lesions if it showed negative for malignant cell, inadequate specimen or atypia of undetermined significance. If the CNB did not provide the diagnosis of malignancy, either due to inadequate specimens or inaccurate sampling, the final diagnosis was based on subsequent surgical specimens, repeated biopsy, or overall clinical evaluation of images and data from the medical records. Statistical analysis The categorical data was compared by chi-squared and twotailed Fisher's exact tests. The continuous variables were examined by two-sample t-test. The procedural factors, namely tumor location, size, number, echogenicity as well as the depth from the skin site of needle insertion to the margin of the targeted tumors, were comprehensively included in a logistic regression analysis to determine the association with the sensitivity of FNAC. Factors with P<0.1 in the univariate analyses were used in a multivariate logistic regression model. A two-tailed P<0.05 was considered statistically significant. The statistical analyses were conducted by PASW Statistics for Windows, version 18.0 (SPSS Inc., Chicago, IL, USA). Demographics and characteristics A total of 235 patients including 148 men and 87 women were enrolled in the study. The mean age was 65.7 years (ranging from 30 to 94 years). Among them, 144 cases were finally diagnosed as primary liver cancers (including 97 HCC, 41 cholangiocarcinoma, three hepatic angiosarcoma, one hepatocholangiocarcinoma, one hepatic malignant spindle cell carcinoma, and one mucinous cystadenocarcinoma), while the other 91 cases were metastatic cancers. A total of 135 cases had single and 100 cases had multiple lesions. The majority of the lesions was located at right lobe (193 of 235). The mean size and depth of the lesions were 4.7 cm and 4.4 cm, respectively. Based on the echogenicity, the target lesions were classified into four groups, including 42 hyperechoic, 118 hypoechoic, nine isoechoic, and 66 mixed echogenicity. Comparison of the sensitivity The sensitivity of CNB, FNAC, TIC and combinations for diagnosis of malignancy are shown in Table 1. Among the 235 malignant hepatic lesions, the CNB, FNAC and TIC were diagnostic in 220 (93.6%), 169 (71.9%), and 200 (85.1%) patients, respectively. The sensitivity of CNB was superior to that of FNAC (P<0.001) and TIC (P=0.003). As compared with CNB alone, the combination of CNB plus FNAC or CNB plus TIC provided additional sensitivity of 2.1% and 0.4%, respectively. CNB yielded non-diagnostic results in 15 cases, of which five were diagnostic in FNAC (including HCC, cholangiocarcinoma, esophageal squamous cell carcinoma, lung adenocarcinoma, and colon adenocarcinoma). There was only a case with negative CNB but positive TIC result (colon adenocarcinoma), in which the FNAC was also positive. Analysis of factors associated with the sensitivity Factors associated the sensitivity of the three methods were analyzed. The sensitivity of CNB was not associated with the origin, number, size, location, depth or echogenicity of the targeted hepatic lesions (all P >0.05). The sensitivity of FNAC was significantly higher in metastatic cancers than in primary liver cancers (81.3% vs. 66.0%, P =0.011), but showed no statistical difference between those with HCC and non-HCC primary liver cancers (63.9% vs. 70.2%, P=0.455). The sensitivity of FNAC was 85.7% in hyperechoic lesions and 68.9% in non-hyperechoic lesions (P =0.028), respectively. 
Significantly higher sensitivity of FNAC was also observed in the lesions with depth less than 4.5 cm from the site of needle insertion as compared with those with depth equal to or more than 4.5 cm (76.7% vs. 64.0%, P=0.036). Notably, the sensitivity of FNAC reached 100% in the 15 cases with metastatic hyperechoic lesions less than 4.5 cm in depth, in which the CNB provided 14 diagnostic results (CNB sensitivity 93.3%). The number, size and location of the lesions were not associated with the sensitivity of FNAC ( Table 2). The two procedural factors reaching statistical significance in the univariate analysis, including the presence of hyperechogenicity and depth less than 4.5 cm, were included in a multivariate logistic regression model (Table 3), and remained independently associated with higher sensitivity of FNAC (odds ratio [OR], 2.654; 95% confidence interval [CI], 1.056-6.672 and OR, 1.819; 95% CI, 1.014-3.264, respectively). The sensitivity of TIC was higher in metastatic cancers than in primary liver cancers (93.4% vs. 79.9%, P=0.004). Higher yield rate was also disclosed in multiple lesions as compared with single one (91.0% vs. 80.7%, P=0.029). The mean size of the targeted lesions was significantly greater in positive TIC results than that in negative ones, 4.91 cm versus 3.47 cm, respectively (P=0.001). The location, echogenicity and depth of the targeted lesions were not associated with the sensitivity of TIC (Table 4). DISCUSSION In clinical practice, CNB, FNAC and TIC are useful diagnostic tools for the hepatic malignant lesions. The judgement of technique selection is based on a variety of considerations, including the risk, accuracy, cost effectiveness, the institutional and opera- Values are presented as mean±standard deviation or number. FNAC, fine needle aspiration cytology. tors' preference. No consensus has been established and only limited information guiding the selection of each method for better sensitivity in the diagnosis of hepatic malignancies. In this study, the results by directly comparing the sensitivity of the three methods suggest that CNB may be the most sensitive one (93.6%) for both primary liver cancer and metastatic cancers, and the TIC has comparable sensitivity in metastatic lesions (93.4%). In the clinical practice, prior to receiving liver biopsies, those patients usually undergo ultrasonographic and radiologic examinations, revealing certain degree of suspicion for malignancy. 9,10 The results of the current study, including the sensitivity of each method as well as the factors associated with the sensitivity, may help the clinical physicians select the appropriate techniques with satisfactory sensitivity and safety for patients with suspected hepatic malignancies on images. The most important benefit from TIC is the utility of ROSE without additional invasive intervention. The ROSE of TIC provides timely information about the adequacy of obtained tissue, thus avoids unnecessary needle passes. Our study confirmed the satisfactory sensitivity of TIC for metastatic lesions, but also observed the suboptimal sensitivity (79.9%) for primary liver cancers. Additionally, depletion of the malignant cells in the obtained tissue of CNB should be considered as a potential problem of the technique. 11 There was a case of metastatic colon adenocarcinoma with positive TIC but negative CNB result in our study, although both of them were derived from the same initial specimen. One possible explanation is the aforementioned condition. 
5 This is the first study to directly compare the sensitivity of these common techniques simultaneously for the diagnosis of hepatic malignancies, and then determine the relationship between the procedural parameters and the sensitivity of FNAC. We identified three clinical variables associated with higher sensitivity of FNAC, including metastatic cancers, hyperechoic lesions on ultrasonography, and superficial lesions with depth less than 4.5 cm from the site of needle insertion. Consistently, a previous retrospective study enrolling 74 patients with liver masses also reported the better sensitivity of FNAC in metastatic cancers than in HCC. 7 Our study revealed a wider difference between the sensitivity of FNAC in diagnosing metastatic cancers and primary liver cancers (81.3% vs. 66.0%). A plausible explanation is that the fine needle aspiration obtains fragmented tissue without the intact architecture and surrounding stroma, so the atypia of hepatocytes on cytology is not sufficient to make the definite diagnosis of HCC. The limitation of cytology in the diagnosis of HCC is a crucial consideration for choosing optimal technique in hepatitis B virus (HBV) or hepatitis C virus (HCV) endemic area, where the incidence of HCC is much higher. 12,13 A recent study collecting 10-year cases (most were metastatic cancers) of hepatic FNA in a single institution in United States, a non-HBV/HCV endemic area, showed higher sensitivity of FNA up to 93.4%; although the cell blocks used in the study may also strengthen the diagnostic ability. 14 However, the discrepancy between the sensitivity of FNAC and TIC exists in our study, either for primary or metastatic cancers, not explained by the limitation of cytology itself, suggesting the procedure-associated factors may influence the accuracy of sampling. 15 For example, the FNAC is performed with a 22-gauge fine needle, while the CNB is conducted with a 16-gauge core needle. The difference between the equipment is not only the amount of obtained tissue, but also the degree of difficulty in reaching the targeted lesions accurately. Mechanistically, the needle was inserted and proceeded with the guidance of ultrasonography. Since this is a 3-dimensional operation based on a 2-dimensional plane of view from the ultrasound probe, a small deviation of the needle from the plane of view may lead to disappearance of the needle tip on sonography. Compared with the 16-gauge needle, the 22-gauge needle is much finer and tends to be bent or curved while penetrating tissue with greater friction or resistance, such as a thick subcutaneous layer or the lateral compression of the needle to the rib. The deviation may be greater if the needle is inserted deeper. This phenomenon may also explain the better sensitivity of FNAC in diagnosing hepatic tumors that were less deep. The echogenicity of hepatic tumors is associated with the pathological nature of the lesions, including the composition and microscopic architecture, therefore different cancers tend to have distinct features on ultrasonography. HCCs are typically known to be hypoechoic, especially for smaller ones, while the metastatic cancers are prone to be hyperechoic. 16,17 In addition, necrotic tissue (that may be associated with fewer viable malignant cells) is less likely to be hyperechoic on ultrasonography. In the current study, the higher yield rate of FNAC for hyperechoic lesions could be explained by above reasons. 
Our data showed that add-on FNAC to CNB led to additional diagnostic value of 2.1%, but the risk of complications in repeated needle insertion should be taken into consideration. 18,19 In contrast, FNAC alone in selected patients was found to be beneficial. Although the overall sensitivity of FNAC was only 71.9%, we identified three clinical parameters (metastatic, hyperechoic and <4.5 cm in depth) significantly associated with higher sensitivity. Furthermore, the sensitivity of FNAC increased to 100% (15/15) in hepatic tumors that matched all three criteria. In some cases, the FNAC was diagnostic for malignancy but the CNB revealed nondiagnostic result. This point may also be explained by the operating features of the techniques; the FNAC allowed more to-and-fro movements during tissue acquisition, creating possibly wider sampling area of lesion in question than that of CNB. Future larger prospective studies may focus on more precise predictors that help select the sampling technique with maximal accuracy and minimal risk. There were limitations in this study. First, it was a retrospective study conducted in a single tertiary center in Taiwan, and the results need to be validated in other facilities with variable clinical settings. Second, those not receiving these three methods simultaneously were not included in the analysis, therefore selection bias could not be totally avoided. Third, the methods compared in this article were all percutaneous ultrasound-guided procedures, which may be not extrapolated to computed tomography-guided ones. Fourth, only malignant lesions were included in this study, therefore other statistical profiles such as specificity were not presented. Fifth, although the samples were examined by the certified pathologists and cytopathologists for histology and cytology separately, the results were not blinded and may be affected by each other, causing possible bias. Further prospective study with blinding design may be needed to clarify this point in the future. In conclusion, the sensitivity of CNB is superior to that of FNAC or TIC in hepatic malignancies. Nevertheless, FNAC provides excellent diagnostic sensitivity in selected hepatic malignancies that are shallow (<4.5 cm in depth), hyperechoic, and not typical for primary liver cancers.
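To make the headline figures and the modelling strategy above concrete, the sketch below recomputes the reported sensitivities from the stated counts (220, 169 and 200 diagnostic results out of 235 malignant lesions) and outlines a generic univariate-screening-then-multivariate logistic regression in the spirit of the Methods. The study used PASW/SPSS; statsmodels stands in here, and the dataframe column names ('fnac_diagnostic', 'hyperechoic', 'depth_lt_45mm') are hypothetical, not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Sensitivities from the counts reported in the Results.
n_malignant = 235
for name, diagnostic in [("CNB", 220), ("FNAC", 169), ("TIC", 200)]:
    print(f"{name}: {diagnostic / n_malignant:.1%}")

def fnac_model(df: pd.DataFrame):
    """Univariate screening (P < 0.1) followed by multivariate logistic regression.

    df must contain a binary outcome 'fnac_diagnostic' and candidate binary
    predictors such as 'hyperechoic' and 'depth_lt_45mm' (hypothetical names).
    """
    candidates = [c for c in df.columns if c != "fnac_diagnostic"]
    keep = []
    for col in candidates:
        uni = sm.Logit(df["fnac_diagnostic"], sm.add_constant(df[[col]])).fit(disp=0)
        if uni.pvalues[col] < 0.1:
            keep.append(col)
    multi = sm.Logit(df["fnac_diagnostic"], sm.add_constant(df[keep])).fit(disp=0)
    print(np.exp(multi.params))      # odds ratios
    print(np.exp(multi.conf_int()))  # 95% confidence intervals
    return multi
```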
2020-12-15T21:59:31.913Z
2020-12-03T00:00:00.000
{ "year": 2020, "sha1": "62b68bbee457105c5139b75a093a65fcbbe742b4", "oa_license": "CCBYNC", "oa_url": "https://www.e-cmh.org/upload/pdf/cmh-2020-0301.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "78deed3db75917fbff0be780e6470688f7f7347d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119314178
pes2o/s2orc
v3-fos-license
On the meaning of the Vakhitov-Kolokolov stability criterion for the nonlinear Dirac equation We consider the spectral stability of solitary wave solutions \phi(x)e^{-i\omega t} to the nonlinear Dirac equation in any dimension. This equation is well-known to theoretical physicists as the Soler model (or, in one dimension, the Gross-Neveu model), and attracted much attention for many years. We show that, generically, at the values of where the Vakhitov-Kolokolov stability criterion breaks down, a pair of real eigenvalues (one positive, one negative) appears from the origin, leading to the linear instability of corresponding solitary waves. As an auxiliary result, we state the virial identities ("Pohozhaev theorem") for the nonlinear Dirac equation. We also show that \pm 2\omega i are the eigenvalues of the nonlinear Dirac equation linearized at \phi(x)e^{-i\omega t}, which are embedded into the continuous spectrum for |\omega|>m/3. This result holds for the nonlinear Dirac equation with any nonlinearity of the Soler form ("scalar-scalar interaction") and in any dimension. Introduction Field equations with nonlinearities of local type are natural candidates for developing tools which are then used for the analysis of systems of interacting equations. Equations with local nonlinearities have been appearing in the Quantum Field Theory starting perhaps since fifties [Sch51a,Sch51b], in the context of the classical nonlinear meson theory of nuclear forces. The nonlinear version of the Dirac equation is known as the Soler model [Sol70]. The existence of standing waves in this model was proved in [Sol70,CV86]. Existence of localized solutions to the Dirac-Maxwell system was addressed in [Wak66,Lis95] and finally was proved in [EGS96] (for ω ∈ (−m, 0)) and [Abe98] (for ω ∈ (−m, m)). The local well-posedness of the Dirac-Maxwell system was considered in [Bou96]. The local and global well-posedness of the Dirac equation was further addressed in [EV97] (semilinear Dirac equation in n = 3), [Bou00] (Dirac -Klein-Gordon system in n = 1), and in [MNNO05] (nonlinear Dirac equation in n = 3). The question of stability of solitary wave solutions to the nonlinear Dirac equation attracted much attention for many years, but only partial numerical results were obtained; see e.g. [AC81, AKV83, AS83, AS86,Chu07]. The analysis of stability with respect to dilations is performed in [SV86,CKMS10]. Understanding the linear stability is the first step in the study of stability properties of solitary waves. Absence of an eigenvalue with a positive real part will be referred to as the spectral stability, while its absence as the spectral (or linear) instability. After the spectrum of the linearized problem for the nonlinear Schrödinger equation [VK73] was understood, the linearly unstable solitary waves can be proved to be ( "nonlinearly", or "dynamically") unstable [Gri88,GO10], while the linearly stable solitary waves of the nonlinear Schrödinger and Klein-Gordon equations [Sha83,SS85,Wei86] and more general U(1)-invariant systems [GSS87] were proved to be orbitally stable. The tools used to prove orbital stability break down for the Dirac equation since the corresponding energy functional is sign-indefinite. On the other hand, one can hope to use the dispersive estimates for the linearized equation to prove the asymptotic stability of the standing waves, similarly to how it is being done for the nonlinear Schrödinger equation [Wei85], [SW92], [BP93], [SW99], and [Cuc01]. 
The first results on asymptotic stability for the nonlinear Dirac equation are already appearing [PS10,BC11], with the assumptions on the spectrum of the linearized equation playing a crucial role. In this paper, we study the spectrum of the nonlinear Dirac equation linearized at a solitary wave, concentrating on bifurcation of real eigenvalues from λ = 0. Derrick's theorem As a warm-up, let us consider the linear instability of stationary solutions to a nonlinear wave equation, We assume that the nonlinearity g(s) is smooth. Equation (1.1) is a Hamiltonian system, with the Hamiltonian E(ψ, π) = R n π 2 There is a well-known result [Der64] about non-existence of stable localized stationary solutions in dimension n ≥ 3 (known as Derrick's Theorem). If u(x, t) = θ(x) is a localized stationary solution to the Hamiltonian equationṡ π = −δ ψ E,ψ = δ π E, then, considering the family θ λ (x) = θ(λx), one has ∂ λ | λ=1 E(φ λ ) = 0, and then it follows that ∂ 2 λ | λ=1 E(φ λ ) < 0 as long as n ≥ 3. That is, δ 2 E < 0 for a variation corresponding to the uniform stretching, and the solution θ(x) is to be unstable. Let us modify Derrick's argument to show the linear instability of stationary solutions in any dimension. Lemma 1.1 (Derrick's theorem for n ≥ 1). For any n ≥ 1, a smooth finite energy stationary solution θ(x) to the nonlinear wave equation is linearly unstable. Proof. Since θ satisfies −∆θ + g(θ) = 0, we also have −∆∂ x1 θ + g ′ (θ)∂ x1 θ = 0. Due to lim |x|→∞ θ(x) = 0, ∂ x1 θ vanishes somewhere. According to the minimum principle, there is a nowhere vanishing smooth function χ ∈ H ∞ (R n ) (due to ∆ being elliptic) which corresponds to some smaller (hence negative) eigenvalue of L = −∆+g ′ (θ), The matrix in the right-hand side has eigenvectors χ ±cχ , corresponding to the eigenvalues ±c ∈ R; thus, the solution θ is linearly unstable. Let us also mention that Remark 1.2. A more general result on the linear stability and (nonlinear) instability of stationary solutions to (1.1) is in [KS07]. In particular, it is shown there that the linearization at a stationary solution may be spectrally stable when this particular stationary solution is not from H 1 (such examples exist in higher dimensions). Vakhitov-Kolokolov stability criterion for the nonlinear Schrödinger equation To get a hold of stable localized solutions, Derrick suggested that elementary particles might correspond to stable, localized solutions which are periodic in time, rather than time-independent. Let us consider how this works for the (generalized) nonlinear Schrödinger equation in one dimension, where g(s) is a smooth function with m := g(0) > 0. One can easily construct solitary wave solutions φ(x)e −iωt , for some ω ∈ R and φ ∈ H 1 (R): φ(x) satisfies the stationary equation ωφ = − 1 2 φ ′′ + g(φ 2 )φ, and can be chosen strictly positive, even, and monotonically decaying away from x = 0. The value of ω can not exceed m. We consider the Ansatz ψ(x, t) = (φ(x) + ρ(x, t))e −iωt , with ρ(x, t) ∈ C. The linearized equation on ρ is called the linearization at a solitary wave: (1.4) Note that since L − = L + , the action of L on ρ considered as taking values in C is R-linear but not C-linear. Since lim |x|→∞ φ(x) = 0, the essential spectrum of L − and L + is [m − ω, +∞). First, let us note that the spectrum of JL is located on the real and imaginary axes only: σ(JL) ⊂ R ∪ iR. To prove this, we consider (JL) 2 = − L − L + 0 0 L + L − . 
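For readability, the operators just introduced can be written out explicitly. The following is a sketch in a standard convention (the block ordering and sign normalization may differ slightly from the authors' own), consistent with the relations used in the text: L_- φ_ω = 0, essential spectrum [m − ω, +∞), and the stated block form of (JL)².

```latex
% Linearization of the 1D NLS at the solitary wave \phi_\omega(x) e^{-i\omega t},
% writing the perturbation as \rho = \rho_1 + i\rho_2 with \rho_1, \rho_2 real-valued.
\[
L_- = -\tfrac{1}{2}\partial_x^2 + g(\phi_\omega^2) - \omega,
\qquad
L_+ = -\tfrac{1}{2}\partial_x^2 + g(\phi_\omega^2) + 2\,g'(\phi_\omega^2)\,\phi_\omega^2 - \omega,
\]
\[
\partial_t \begin{pmatrix}\rho_1\\ \rho_2\end{pmatrix}
= J L \begin{pmatrix}\rho_1\\ \rho_2\end{pmatrix},
\qquad
J = \begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix},
\quad
L = \begin{pmatrix}L_+ & 0\\ 0 & L_-\end{pmatrix},
\qquad
(JL)^2 = -\begin{pmatrix}L_- L_+ & 0\\ 0 & L_+ L_-\end{pmatrix}.
\]
```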
Since L − is positive-definite (φ ∈ ker L − , being nowhere zero, corresponds to its smallest eigenvalue), we can define the selfadjoint root of L − ; then with the inclusion due to L 1/2 − being selfadjoint. Thus, any eigenvalue λ ∈ σ d (JL) satisfies λ 2 ∈ R. Given the family of solitary waves, φ ω (x)e −iωt , ω ∈ Ω ⊂ R, we would like to know at which ω the eigenvalues of the linearized equation with Re λ > 0 appear. Since λ 2 ∈ R, such eigenvalues can only be located on the real axis, having bifurcated from λ = 0. One can check that λ = 0 belongs to the discrete spectrum of JL, with for all ω which correspond to solitary waves. Thus, if we will restrict our attention to functions which are even in x, the dimension of the generalized null space of JL is at least two. Hence, the bifurcation follows the jump in the dimension of the generalized null space of JL. Such a jump happens at a particular value of ω if one can solve the . This leads to the condition that ∂ ω φ ω 0 is orthogonal to the null space of the adjoint to JL, which contains the vector φ ω 0 ; this results in φ ω , ∂ ω φ ω = ∂ ω φ ω 2 L 2 /2 = 0. A slightly more careful analysis [CP03] based on construction of the moving frame in the generalized eigenspace of λ = 0 shows that there are two real eigenvalues ±λ ∈ R that have emerged from λ = 0 when ω is such that ∂ ω φ ω 2 L 2 becomes positive, leading to a linear instability of the corresponding solitary wave. The opposite condition, is the Vakhitov-Kolokolov stability criterion which guarantees the absence of nonzero real eigenvalues for the nonlinear Schrödinger equation. It appeared in [VK73,Sha83,GSS87] in relation to linear and orbital stability of solitary waves. The above approach fails for the nonlinear Dirac equation since L − is no longer positive-definite. For the completeness, let us present a more precise form of the Vakhitov-Kolokolov stability criterion [VK73]. We follow [VK73]. Assume that there is λ ∈ σ d (JL), λ > 0. The relation (JL − λ)Ξ = 0 implies that λ 2 Ξ 1 = −L − L + Ξ 1 . It follows that Ξ 1 is orthogonal to the kernel of the selfadjoint operator L − (which is spanned by φ ω ): Thus, the inverse to L − can be applied: Since L − is positive-definite and η / ∈ ker L − , it follows that η, L − η > 0. Since λ > 0, Ξ 1 , L + Ξ 1 < 0, therefore the quadratic form ·, L + · is not positive-definite on vectors orthogonal to φ ω . According to Lagrange's principle, the function r corresponding to the minimum of r, L + r under conditions r, φ ω = 0 and r, r = 1 satisfies (1.6) Since r, L + r = α, we need to know whether α could be negative. Since L + ∂ x φ ω = 0, one has λ 1 = 0 ∈ σ p (L + ). Due to ∂ x φ ω vanishing at one point (x = 0), there is exactly one negative eigenvalue of L + , which we denote by λ 0 ∈ σ p (L + ). (This eigenvalue corresponds to some non-vanishing eigenfunction.) Note that β = 0, or else α would have to be equal to λ 0 , with r the corresponding eigenfunction of L + , but then r, having to be nonzero, could not be orthogonal to φ ω . Denote It follows that the quadratic form L 1/2 3. Moreover, there may be point eigenvalues already present in the spectra of linearizations at arbitrarily small solitary waves. Formally, we could say that these eigenvalues bifurcate from the essential spectrum of the free Dirac operator (divided by i), which can be considered as the linearization of the nonlinear Dirac equation at the zero solitary wave. In the present paper we investigate the first scenario. 
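For reference, the condition discussed above can be stated explicitly. In the convention of this section (solitary waves φ_ω(x)e^{-iωt} of the NLS, with a pair of real eigenvalues emerging from λ = 0 when ∂_ω‖φ_ω‖²_{L²} becomes positive), the Vakhitov–Kolokolov stability criterion reads:

```latex
% Vakhitov--Kolokolov condition guaranteeing the absence of nonzero real
% eigenvalues of the linearization JL at \phi_\omega(x) e^{-i\omega t}:
\[
\frac{d}{d\omega}\,\|\phi_\omega\|_{L^2}^2
= \frac{d}{d\omega}\int_{\mathbb{R}} |\phi_\omega(x)|^2\,dx \;<\; 0 .
\]
```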
The main result (Lemma 4.1) states that if the Vakhitov-Kolokolov breaks down at some point ω * , then, generically, the solitary waves with ω from an open one-sided neighborhood of ω * are linearly unstable. We also demonstrate the presence of the eigenvalue ±2ωi in the spectrum of the linearized operator (Corollary 2.8) and obtain Virial identities, or Pohozhaev theorem, for the nonlinear Dirac equation (Lemma 3.2), which we need for the analysis of the zero eigenvalue of the linearized operator. Linearization of the nonlinear Dirac equation The nonlinear Dirac equation in R n has the form where ∂ j = ∂ ∂x j , N is even and g smooth, with m := g(0) > 0. The Dirac matrices α j and β satisfy the relations where I N is an N × N unit matrix. We will always assume that β = I N/2 0 0 −I N/2 . In the case n = 1, we assume α 1 = −σ 2 ; in the case n = 3, one could take α j = 0 σ j σ j 0 , where σ j are the standard Pauli matrices. Equation (2.1), usually with g(s) = 1 − s, is called the Soler model [Sol70], which has been receiving a lot of attention in theoretical physics in relation to classical models of elementary particles. Below, we assume that there are solitary waves for ω from some nonempty set Ω ⊂ R: with φ ω smoothly depending on ω. We will not indicate the dependence on ω explicitly, and will write φ instead of φ ω . The profile φ of a stationary wave satisfies the stationary nonlinear Dirac equation 3) The energy and charge functionals corresponding to the nonlinear Dirac equation (2.1) are given by where G(s) is the antiderivative of g(s) which satisfies G(0) = 0. Q(ψ) is the charge functional which is (formally) conserved for solutions to (2.1) due to the U(1)-invariance. The nonlinear Dirac equation (2.1) can be written in the Hamiltonian form as ∂ t Im ψ = − 1 2 δ Re ψ E, ∂ t Re ψ = 1 2 δ Im ψ E, or simplyψ = −iδ ψ * E. The relation (2.3) satisfied by the profile of the solitary wave φ(x)e −iωt can be written as where the primes denote the Fréchét derivative of the functionals E(ψ), Q(ψ) with respect to (Re ψ, Im ψ). Let us write the solution in the form ψ(x, t) = (φ(x) + ρ(x, t))e −iωt , ρ(x, t) ∈ C N . The linearized equation on ρ is given byρ = J Lρ, (2.5) where J corresponds to a multiplication by 1/i and Note that, because of the presence of Re(φ * βρ), the action of L on ρ ∈ C N is R-linear but not C-linear. Because of this, it is convenient to write it as an operator L acting on vectors from R 2N ; then (2.5) takes the following form: Note that J, A j , and B correspond to multiplication by −i, α j , and β under the C N ↔ R 2N correspondence. Proof. Recall that Since φ(x) ∈ C N satisfies the stationary nonlinear Dirac equation (2.3), we get: Taking the derivative of (2.3) with respect to x k yields Lemma 2.6. Let α 0 be an hermitian matrix anticommuting with α j , 1 ≤ j ≤ n, and with β. Then α 0 φ is an eigenfunction of L − and of L, corresponding to the eigenvalue λ = −2ω. Since σ(JL) is symmetric with respect to R and iR, for any g(s) in (2.1) and in any dimension n ≥ 1, we have: Corollary 2.8. ±2ωi are L 2 eigenvalues of JL. Remark 2.9. For |ω| > m/3, the eigenvalues ±2ωi are embedded in the essential spectrum. This is in contradiction with the Hypothesis (H:6) in [BC11] on the absence of eigenvalues embedded in the essential spectrum, although we hope that this difficulty could be dealt with using a minor change in the proof. Remark 2.10. The result of Corollary 2.8 takes place for any nonlinearity g(ψ * βψ) and in any dimension. 
The spatial dimension n and the number of components of ψ could be such that there is no matrix α 0 which anticommutes with α j , 1 ≤ j ≤ n, and with β; then the eigenvector corresponding to ±2ωi can be constructed using the spatial reflections. Virial identities When studying the bifurcation of eigenvalues from λ = 0, we will need some conclusions about the generalized null space of the linearized operator. We will draw these conclusions from the Virial identities, which are also known as the Pohozhaev theorem [Poh65]. In the context of the nonlinear Dirac equations, similar results were presented in [ES95]. Due to Lemma 2.4, we can choose a small counterclockwise-oriented circle γ centered at λ = 0 such that at ω = ω * the only part of the spectrum σ(JL) inside γ is the eigenvalue λ = 0. Assume that O is an open neighborhood of ω * small enough so that σ(JL) does not intersect γ for ω ∈ O. Define (4.8) For each ω ∈ O, P 0 is a projection onto a finite-dimensional vector space X := Range(P 0 ) ⊂ L 2 (R n ). The operator P 0 JL is bounded (since P 0 is smoothing of order one) and with the finite-dimensional range. Applying the Fredholm alternative to P 0 JL, we conclude that there is E 3 such that P 0 JLE 3 = ∂ ω Φ (hence JLE 3 = ∂ ω Φ) if and only if ∂ ω Q(φ) = 0. Indeed, one can check that it is precisely in this case that ∂ ω Φ is orthogonal to N ((JL) * ) = Span {Φ, J∂ k Φ}: The right-hand side vanishes since it is the kth component of the (zero) momentum of the standing solitary wave. Remark 4.3. Using (2.3), one can explicitly compute Once there is E 3 such that JLE 3 = ∂ ω Φ, there is also E 4 such that JLE 4 = E 3 , since E 3 is orthogonal to the null space N ((JL) * ) = Span {Φ, J∂ k Φ}: To check that the right-hand side of the second line is indeed equal to zero, one needs to take into account the following: (4.10) Above, K(s) is the antiderivative of sg ′ (s) such that K(0) = 0. The integrals in the right-hand sides of (4.9) and (4.10) are equal to zero due to our assumption on the symmetry properties of φ * φ and φ * βφ. Note that L − is C-linear, hence commutes with a multiplication by i, and therefore J and L 0 commute; we used this when deriving (4.10). We will assume that E 3 , LE 3 = 0. (4.11) Then there is no E 5 such that JLE 5 = E 4 , since E 4 is not orthogonal to the null space N ((JL) * ) ∋ Φ: Remark 4.4. If (4.11) is not satisfied, then the dimension of the generalized null space of JL at ω * may jump by more than two; this means that there are more than two eigenvalues colliding at λ = 0 as ω passes through ω * . We expect that generically this scenario does not take place. Near ω = ω * , the eigenvalues of JL which are located inside a small contour around λ = 0 coincide with the eigenvalues of the matrix M , defined in (4.16). Since σ 3 is identically zero in O, these eigenvalues satisfy Remark 4.5. This argument is slightly longer than a similar computation in [CP03] since we do not assume that φ could be chosen purely real (allowing for a common ansatz used in [ES95] in the context of the nonlinear Dirac equation), and consequently we could not take e j to be "imaginary" for j odd and "real" for j even, and enjoy the vanishing of J −1 e j , e k for j + k is even. Concluding remarks In the conclusion, let us make several observations. Remark 5.1. 
In the case of the nonlinear Schrödinger equation, the function µ(ω) is strictly positive (this is due to positive-definiteness of L − in (1.4), which results in positivity of (4.19); see [VK73,CP03]), so that the collisions of eigenvalues at λ = 0 always happen according to the following scenario: when dQ/dω changes from negative to positive (no matter whether this happens as ω increases or decreases), there is a pair of eigenvalues on the imaginary axis colliding at λ = 0 and proceeding along the real axis. Then, further, if dQ/dω changes from positive to negative, this pair of real eigenvalues return to λ = 0 and then retreat onto the imaginary axis. For the Dirac equation, we can not rule out that µ(ω) changes the sign (becoming negative). If this were the case, vanishing of dQ/dω would be accompanied with the reversed bifurcation mechanism: as dQ/dω goes from positive to negative, another pair of imaginary eigenvalues collide at λ = 0 and proceed along the real axis. We do not have examples of particular nonlinearities which lead to such a scenario. Remark 5.2. In one dimension, for the nonlinearity g(s) = 1 − s k , k ∈ N, using the asymptotics as in [Gua08], one can derive that, as ω → 1−, the function µ(ω) defined in (4.18) has the asymptotics There is the same asymptotics in the case of the nonlinear Schrödinger equation (1.2) with the same nonlinearity g(s). In particular, lim ω→1− µ(ω) = +∞, hence µ(ω) remains positive for ω sufficiently close to 1 (precisely as for the nonlinear Schrödinger equation, when µ(ω) > 0 for all ω), suggesting that the bifurcation scenario for the nonlinear Dirac equation near ω = 1 is the same as for the nonlinear Schrödinger equation. Remark 5.4. Even if dQ/dω never vanishes, so that there are no bifurcations of nonzero real eigenvalues from λ = 0, there may be nonzero real eigenvalues present in the spectrum of all solitary waves. We expect that the Vakhitov-Kolokolov criterion could again be useful here when applied in the nonrelativistic limit ω → m, when the properties of the nonlinear Dirac equation are similar to properties of the nonlinear Schrödinger equation. This idea has been mentioned in [CKMS10]. Our preliminary results indicate that in one dimension, for ω sufficiently close to 1, the nonlinearity with g(s) = 1 − s k + o(s k ) with k = 1 and k = 2 does not produce nonzero real eigenvalues, while for k ≥ 3 there are two real eigenvalues, one positive (leading to linear instability) and one negative.
A Novel Formula Comprising Wolfberry, Figs, White Lentils, Raspberries, and Maca (WFWRM) Induced Antifatigue Effects in a Forced Exercise Mouse Model Long-term body fatigue poses a threat to human health. To explore novel sources of antifatigue medicine and food, we developed a novel formula composed of wolfberry, figs, white lentils, raspberries, and maca (WFWRM) according to the theory of traditional Chinese medicine. In this study, we explored whether the administration of the WFWRM relieves fatigue. Thirty male Kunming mice were divided into three groups, which received either intragastric administration of saline, vitamin C (100 mg/kg), or WFWRM (1.00 g/kg) every day. After 30 days of treatment, all mice exhaustively performed weight-bearing swimming. Another ten mice that did not perform swimming were treated with saline for 30 days and used as sedentary control. The antifatigue effect and biochemical oxidation phenomena were assessed in the exercise-exhausted model and sedentary controls. The histopathological changes in the liver and kidney tissues of mice were observed by performing hematoxylin-eosin (HE) staining. After 30 days of oral administration, the liver and kidney tissues of mice were healthy and show no pathological changes. Compared to the fatigue model group, WFWRM significantly increased the rota-rod time of the mice. Also, the concentrations of lactic acid (LA), blood urea nitrogen (BUN), creatine kinase (CK), and lactate dehydrogenase (LDH) in the WFWRM group significantly reduced. On the contrary, the levels of hepatic glycogen (LG), muscle glycogen (MG), and serum glucose (GLU) increased in the WFWRM group. Besides, WFWRM markedly reduced the levels of malondialdehyde (MDA) but increased the levels of glutathione peroxidase (GSH-PX) and superoxide dismutase (SOD). Pearson correlation analysis indicated that the concentrations of the sources of energy (LG, MG, and GLU) significantly correlated with those of metabolites (BLA, BUN, CK, and LDH) and antioxidant levels (SOD, GSH-PX, and MDA). Overall, our results suggested that the supplementation of WFWRM could improve exercise capacity and relieve fatigue probably by normalizing energy metabolism and attenuating oxidation. Introduction Fatigue refers to the inability of the body to maintain its function at a specific level or to achieve the predetermined exercise intensity [1]. Based on the development process of fatigue, it can be divided into acute, chronic, and excessive fatigue [2]. Acute fatigue can be recovered to normal after a long rest, but chronic and excessive fatigue are suboptimal health states that cannot be recovered only by resting and usually lead to aging, metabolic disorder, depression, and cancer [2]. ese symptoms can seriously affect the learning efficiency, quality of life, and work progress of people. Some chemical drugs, such as modafinil, amphetamine, and methylphenidate are available to relieve fatigue, but the side effects of those chemical drugs, including mental disorders, drowsiness, and addiction, were commonly reported and might limit their application [3][4][5]. erefore, additional strategies that alleviate fatigue are required. e pathogenesis of fatigue remains unclear, although increasing evidence suggests that it is mainly related to energy exhaustion and accumulation of metabolites and free radicals [6,7]. Carbohydrates in the body are mainly in the form of hepatic glycogen (LG) and muscle glycogen (MG) in tissue cells and serum glucose (GLU) in the blood [8,9]. 
During exercise, these energy sources are exhausted, leading to fatigue. e concentrations of LG, MG, and GLU are, therefore, used to evaluate fatigue. Additionally, a large amount of urea nitrogen (BUN) is generated when protein metabolizes to produce energy. After exercise, the increase in metabolites, such as BUN and lactic acid (LA), can decrease the vitality of muscle cells, affect enzyme activity (LDH and CK), and reduce energy production, resulting in fatigue [10,11]. Free radicals are also considered the main factor that contributes to fatigue. e accumulation of free radicals can cause lipid peroxidation of the mitochondrial membrane, inhibit cell respiration, induce the oxidation of energy substances and lead to fatigue [12]. Glutathione peroxidase (GSH-PX) and superoxide dismutase (SOD) are important antioxidant enzymes, which can scavenge free radicals and reduce the production of malondialdehyde (MDA) to resist exercise-induced oxidative damage and relieve fatigue [13]. It is reported that several plant extracts or Chinese traditional medicines containing alkaloids, flavonoids, and polyphenols not only reduce fatigue but also have the advantage of having fewer side effects [14]. erefore, based on the theory of Chinese traditional medicine, we developed a novel formula composed of wolfberry, figs, white lentils, raspberries, and maca (WFWRM) and hypothesized that it would have an antifatigue effect. Several studies demonstrated the high antioxidant activity of wolfberry; it could scavenge free radicals in high-fat-fed rats and inhibit the peroxidation of low-density lipoprotein (LDL) [15,16]. Meanwhile, it has been reported that maca contains flavonoids and alkaloids, is neuroprotective, and improves swimming endurance capacity in ICR mice [17]. Previous studies demonstrated that white lentils, figs, and raspberries have antioxidant effects, but there are only a few reports of their antifatigue activity [18][19][20]. In the current study, we set out to explore the efficacy of WFWRM to relieve fatigue, using a well-established weightbearing swimming mouse model. Preparation of WFWRM and Compositional Analysis. e formulation of WFWRM is presented in Table 1. e Chinese medicinal herbs were made into tablets by Shanghai Tianyuan Plant Products Co. Ltd (Shanghai, China). e company mixed the powders of wolfberry, fig, white lentils, raspberry, and maca and boiled them in water (materialliquid ratio � 1 : 10) for 2 hours to obtain the extract. en, the aqueous extract was spray dried and tabletted to obtain WFWRM. In this study, we dissolved WFWRM in normal saline for gavage. Moreover, the phytochemical characterization of WFWRM was performed on an UPLC-Q-Orbitrap High Resolution Mass Spectrometer ( ermo Fisher Scientific Company, USA) using both positive and negative modes. e WFWRM powder (1.09 g) was added to 100 mL methanol and was then extracted for 60 min in a water bath at 60°C. e extract was centrifuged at 4, 000g for 10 min, and the supernatant was filtered with a 0.22 μm nylon filter membrane. e WFWRM extract was separated using a ermo Scientific AccucoreTM C18 column (2.6 μm, 3 mm × 100 mm). e temperature of the column was 40°C, and the sample injection volume was 5 μL. e eluents were 0.1% formic acid (A) and acetonitrile (B). e flow rate was 0.2 mL/min, and the gradient program started at 95% A and held at 95% A for 1 min, decreased to 5% A for 20 min, held at 5% A for 1 min, increased to 95% A for 0.1 min, and held at 95% A for 0.9 min. 
e parameters of mass spectrometry were set as follows: scanning range, 100-1500 m/z; spray voltage, 3 kV; sheath gas volume flow, 35 arb; auxiliary gas volume flow, 10 arb; auxiliary device temperature, 250°C; and the temperature of the ion transfer tube, 300°C. Animals and Experimental Design. Healthy male Kunming mice (specific pathogen free, SPF) used in this study were supplied by the Dashuo Laboratory Animal Co. Ltd. (Chengdu, China). All experiments were performed according to the guidelines by the Animal Research Committee of the Chengdu University of Traditional Chinese Medicine. ey were 5 weeks old and their body weights were 22 ± 3 g, and they were kept in a temperature (22 ± 2°C) and humidity (55 ± 15%) controlled room at a 12-hour light/ dark cycle. All mice were allowed free access to distilled water and a rodent chow diet throughout the experimental period. After adaptive feeding for one week, 40 mice were randomly divided into 4 different groups (n � 10): (1) the sedentary control group (CON) mice were treated with saline daily for 30 days without weight-bearing swimming; (2) the model group (MOD) mice were treated with saline by oral gavage daily for 30 days and then were exhaustively exercised through weight-bearing swimming; (3) the vitamin C group (VC) mice were daily treated with vitamin C (100 mg/kg) by oral gavage daily for 30 days and then exhaustively exercised through weight-bearing swimming; and (4) the WFWRM treatment group (WFWRM) mice were treated with WFWRM (1.00 g/kg) by intragastric administration (0.2 mL/10 g) daily for 30 days and then exhaustively exercised through weight-bearing swimming. In order to evaluate the safety of WFWRM, another 9 mice were also randomly divided into 3 different groups (n � 3): the CON group, the VC group, and the WFWRM treatment group. ey were daily gavaged with saline, vitamin C (100 mg/kg), and WFWRM (1.00 g/kg) for 30 days without the rota-rod test and the swimming exhaustion experiment. e doses were decided based on the recommended human dose [21]. e general food intake was monitored every day, and the body weight was recorded every five days. Rota-Rod Test and Histopathological Analysis. After treatment for 30 min on the 28 th day, mice from each group were trained on a Fatigue Rotary Rod Apparatus ZB-200 (Chengdu Taimeng Science Technology Co. Ltd, Chengdu, China) at 15 rpm [22]. In the formal test, mice were placed on the rota-rod at 15 rpm, until they were exhausted and dropped from the rod, and the total running time was measured. ree mice from each group were not subjected to the rota-rod test or the swimming exhaustion experiment, and the internal organs, such as liver and kidney, and skeletal muscles of the hind legs were collected. e liver and kidney tissues were fixed in 4% paraformaldehyde, embedded in paraffin, and cut into 4 μm thick sections. e sections were stained with HE and observed using a light microscope (Olympus, Tokyo, Japan) to evaluate the pathological changes. Weight-Loaded Forced Swimming and Sample Collection. e weight-loaded forced swimming test (WFST) was carried out in a swimming pool (40 × 46 × 63 cm) with 30 cm deep water, maintained at 25 ± 1°C. On the 30 th day, each mouse was loaded with a lead block (4% of the body weight) and was given a swimming exercise for 30 min. If a mouse was floating during the experiment, it would be forced to swim by stirring the water with a glass rod [22]. After 30 minutes, all mice were removed from the water and dried using a towel. 
Mice were sacrificed, and the serum, liver, and skeletal muscles of the hind legs were collected. All samples were stored at −80°C until use. Determination of Biochemical Parameters in the Serum, Liver, and Muscles. Serum LA, serum BUN, muscle MG, and liver (MDA, SOD, GSH-PX, and LG) parameters were measured using the ELISA kits, in accordance with the manufacturer's instructions. e LDH, GLU, and CK levels were determined using the Mindray BS-200 automatic biochemical analyzer (Mindray Biological Technology Co. Ltd, Shenzhen, China) using the diagnostic kits, in accordance with the manufacturer's instructions. Data Analysis. Statistical analysis was performed using GraphPad Prism 9 (San Diego, CA, USA) and SPSS 22.0 software (IBM, USA). All data were expressed as mean ± SD. One-way ANOVA with a LSD-t or Dunnett multiple comparison test was used for comparison among three groups. e correlation between energy metabolites and antioxidant-related traits was evaluated using Pearson correlation analysis. P < 0.05 was considered statistically significant difference. Chemical Compounds in WFWRM. In this study, we analyzed WFWRM using data-dependent UPLC-Q-TOF-MS, using both positive and negative polarity modes. After data comparison using the PubChem, ChemSpider, mzCloud, and mzVault databases and corresponding literature, twenty-seven constituents of WFWRM were identified, which are presented in Table 2. We found that the components of WFWRM were amino acids and metabolites (DL-arginine, 2-hydroxyphenylalanine, L-phenylalanine, and indole-3-acrylic acid), flavonoids (formononetin, (+)-ar-turmerone, isoliquiritigenin, butein, kaempferol, catechin, trifolin, and rutin), polyphenols (gallic acid and curcumin), alkaloids (DL-stachydrine, caffeine, and trigonelline), fatty acids (9-Oxo-ODE, 12,13-DiHOME, 9S,13R-12-oxophytodienoic acid and 9-HpODE), terpenoids (18-β-glycyrrhetinic acid, zerumbone, (−)-caryophyllene oxide and (±)-abscisic acid), and others (1-linoleoyl glycerol and palmitoylethanolamide). ese chemical compounds may be the material basis of WFWRM to resist fatigue. A previous study indicated that patients with amino acid deficiency may develop dysfunctions of the pain-inhibitory mechanisms together with fatigue [23]. Consistent with this conclusion, Chen et al. recently showed that L-arginine supplementation could minimize skeletal muscle damage and reduce the accumulation of free radicals to decrease the occurrence of fatigue in rats [24]. Moreover, L-phenylalanine is a potential lipophilic antioxidant. Physalis pubescens L contains L-phenylalanine and other metabolites that relieve fatigue in rats by ameliorating the disturbances in amino acids and energy metabolism, alleviating the oxidative stress due to the reactive oxygen species [25]. Besides, 2-hydroxyphenylalanine is an isomeric tyrosine derived from L-phenylalanine and can participate in amino acid metabolism to provide energy to the body [26]. Flavonoids polyphenols, which are promising natural plant antioxidants, resist fatigue by scavenging free radicals and reducing the transfer speed of the auto-oxidation chain reaction [27]. For instance, there are reports suggesting that the catechin of grape seeds, rutin, curcumin, and kaempferol were able to extend the swimming time of weighted mice before exhausting by reducing the accumulation of free radicals, increasing the activity of antioxidant enzymes (GSH-PX, CAT, and SOD), and decreasing the production of metabolites (BUN and BLA) [27,28]. 
e application of alkaloids to relieve fatigue is increasingly being considered. Caffeine and trigonelline are hallmark plant alkaloids in coffee that increase athletic ability [29]. A study has shown that male participants who consumed low caffeine showed better fatigue resistance of the knee flexors, compared to those in the control group [30]. Similarly, trigonelline has been reported to decrease apoptosis and restore the MDA content in unilaterally 6-OHDA-lesioned rats [31]. Moreover, trigonelline could reduce oxidative stress and insulin resistance to maintain normal blood glucose in type-2 diabetes mellitus rats [32]. Fatty acids are also reported to have the effect of resisting cancerrelated fatigue [33]. For instance, 9-oxo-ODE could strongly activate the antioxidant response element to lessen the damage caused by oxidative stress [34]. Moreover, 12,13-DiHOME increased fatty acid uptake and oxidation in skeletal muscles of mice, which was able to reduce the production of free fatty acids and prevent tryptophan in the plasma from entering the brain to resist fatigue [35]. In addition, peripheral inflammation and immune activation Meanwhile, palmitoylethanolamide is a fatty acid derivative that can antifatigue through inhibiting inflammation [36]. Notably, it has been reported that the treatment of Baoyuan Jiedu decoction involving indole-3-acrylic acid, isoliquiritigenin, formononetin, 18-β-glycyrrhetinic acid, and 9S,13R-12-oxophytodienoic acid prevented prominent myotube atrophy and regulated mitochondrial production [37]. Indeed, mitochondria and muscles are damaged with fatigue [38]. erefore, the protection of muscles and mitochondria may relieve fatigue to a greater extent. Although no studies have shown that DL-stachydrine, (+)-ar-turmerone, gallic acid, butein, trifolin, zerumbone, 1-linoleoyl glycerol, (−)-caryophyllene oxide, and (±)-abscisic acid can relieve fatigue. However, studies have shown that they exhibit antioxidant activities towards oxidative stress and antioxidants are essential for relieving fatigue [39][40][41][42][43][44][45][46][47]. Since, WFWRM contains these components, we speculated that it has an antifatigue effect. WFWRM Increased the Rota-Rod Time. Exercise endurance is a direct indicator of fatigue and the rota-rod time has been used to evaluate the antifatigue effect in several studies [48,49]. In this study, we determined the rota-rod time after 30 days of WFWRM supplementation. As shown in Figure 1(a), WFWRM markedly increased the rota-rod time by 147.07% compared to the model group (P < 0.001). We used vitamin C as a positive control drug, which is usually used for resisting fatigue and could reduce oxidative damage due to exercise [50]. e rota-rod time of mice in the vitamin C treatment group was 41.75% higher than that in the model group (P < 0.0001). Surprisingly, WFWRM had a better effect on the rota-rod time compared to vitamin C. erefore, it is reasonable to assume that WFWRM is a promising agent for relieving fatigue. Moreover, the food intake ( Figure 1(b)) and body weight (Figure 1(c)) were not affected by WFWRM (P > 0.05). We also found that there were no significant histological changes in the livers and kidneys of mice across the different groups (Figure 2), suggesting that WFWRM is a safe and promising approach to relieve fatigue. ese results indicated that WFWRM could significantly enhance the exercise capacity of mice without any damage. WFWRM Increased Serum Glucose Levels and Liver and Muscle Glycogen Levels in Exhaustive Mice. 
GLU is an important component of the body and an important source of energy for various tissues and organs [51]. In addition to maintaining optimum GLU levels, the excess sugar ingested by the body is stored in the form of LG and MG [8]. During vigorous exercise, if GLU is not sufficient, LG is broken down into GLU [9]. During prolonged and strenuous exercise, MG can also be converted into LG to supply energy [52]. Increased storage of GLU, LG, and MG could enhance endurance and exercise capacity. erefore, the serum GLU, LG, and MG levels can reflect the degree of fatigue. In this study, we found that the serum GLU (P < 0.0001), LG (P < 0.01), and MG (P < 0.01) levels of WFWRM-treated mice were significantly increased by 75.1%, 43.5%, and 36.1%, respectively (Figure 3). ese results suggested that the improvement in the antifatigue activity of WFWRM may be related to the homeostatic ability of blood glucose. WFWRM Decreased Serum BUN, LA, CK, and LDH Levels. As the intensity of exercise increases, carbohydrates and fats may not meet the energy needs. Proteins, thus, will be consumed and produce a large number of nitrogen-containing compounds and α-keto acids [53]. e former is converted into urea and excreted via urine, while the latter is used as a raw material for the synthesis of glucose [54,55]. erefore, BUN, the final by-product of protein metabolism, can be used to evaluate protein mobilization and fatigue. In the present study, WFWRM reduced the production of BUN by 40.9% compared to the model group (Figure 4(a), P < 0.0001). is suggests that WFWRM reduced the consumption of protein in mice during exercise and showed antifatigue activity. During vigorous exercises in a short period, the oxygen-carrying capacity of the body becomes insufficient, resulting in anaerobic respiration by the muscle cells [56]. Anaerobic glycolysis produces LA while supplying energy, which could damage organs and lead to fatigue by lowering pH [57]. CK is an important enzyme responsible for muscle contraction and ATP regeneration [58]. LDH exists in almost all organs and tissues but its content in the blood is very low. However, when the cells are destroyed, the blood LDH levels rise [59,60]. erefore, the cytosolic enzymes, LA, CK, and LDH can be used to evaluate the extent of muscle damage and the degree of fatigue. e more vigorous and longer the exercise, the higher the levels of LA, CK, and LDH. In this study, the content of LA, CK, and LDH increased significantly in the MOD as compared to the CON (Figure 4(b), P < 0.0001). Compared to the MOD, the levels of LA, CK, and LDH decreased by 34.0%, 43.5%, and 33.5% in WFWRM, respectively (Figures 3(b)-3(d), P < 0.0001). ese findings suggested that WFWRM may reduce the damage to the muscle cells in mice, which may be related to its antifatigue activity. WFWRM Increased Antioxidant Activity and Decreased the MDA Level. It has been reported that oxidative stress is closely related to fatigue even though the mechanism is still unclear. SOD is a catalytic enzyme that converts oxygen free radicals into H 2 O 2 [61]. H 2 O 2 is decomposed into O 2 and H 2 O under the catalysis of GSH-PX and CAT (catalase) [62]. When the content of SOD and GSH-PX in the body is low, free radicals obtain electrons from the cell membrane, damage the mitochondrial membrane, and cause lipid peroxidation [63]. Moreover, MDA is the end product of lipid peroxidation. Previous studies indicated that mice gavaged with polysaccharides from Lepidium meyenii Walp. 
(maca) or Lycium ruthenicum showed increased GSH-PX and SOD levels and decreased MDA levels [64, 65]. [Figure legends: serum GLU, LG, and MG content panels; data are mean ± SD (n = 10 per group), one-way ANOVA with LSD-t or Dunnett multiple comparisons; *P < 0.05, **P < 0.01, ****P < 0.0001 versus MOD; #P < 0.05 versus CON; CON, control group; MOD, model group; VC, vitamin C group; WFWRM, the novel formula group. Figure 6: Correlations between carbohydrate metabolism and metabolite- and oxidative stress-related parameters; Pearson correlation analysis was calculated for all experimentally determined parameters; R values are shown by gradient colors, with red and blue indicating positive and negative correlations, respectively; *, **, and *** indicate P < 0.05, P < 0.01, and P < 0.001.]

In our study, exhaustive exercise did not affect the hepatic concentration of GSH-PX and SOD, but significantly increased the hepatic MDA levels (P < 0.05). The content of GSH-PX and SOD increased by 49.8% and 59.4%, respectively, while that of MDA decreased by 24.4% after WFWRM treatment (P < 0.0001, Figure 5). Our results indicated that WFWRM supplementation attenuated the oxidative stress and might help restore the oxidant-antioxidant balance.

Pearson Correlation Analysis among the Different Indicators. Pearson correlation analysis is used to analyze the direction and degree of linear correlation between two variables [66]. In this study, we used Pearson correlation analysis to evaluate the correlation between the different indicators. As shown in Figure 6, all possible pairs of LDH, BUN, LA, and CK showed a significant positive correlation (P < 0.001). There was no significant correlation between LG, MG, and GLU (P > 0.05). SOD and GSH-PX were positively correlated (P < 0.001). However, SOD and MDA were negatively correlated (P < 0.05). Overall, these results indicated that WFWRM administration could effectively reduce the accumulation of metabolites while increasing the activity of antioxidant enzymes to maintain optimum blood glucose levels even after weight-bearing swimming.

Conclusion
Gavaged with the novel formula comprising wolfberry, figs, white lentils, raspberries, and maca, the mice showed increased exercise ability as assessed by the rota-rod test and accelerated the metabolism of lactic acid and urea nitrogen to relieve fatigue. The novel formula also decreased the activity of creatine kinase and lactate dehydrogenase to reduce the occurrence of fatigue. Moreover, the novel formula resisted fatigue by increasing the activity of antioxidant enzymes and reducing malondialdehyde production to reduce oxidative stress. In addition, we found that the main components of the novel formula were amino acids, alkaloids, and fatty acids. These chemical compounds may contribute to its antifatigue activity. In short, the novel formula showed a significant antifatigue effect and had the potential to be developed into an antifatigue supplement (Figure 7).
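To illustrate how the group comparisons and Pearson correlations reported above can be computed, here is a minimal Python sketch. The group values are invented placeholders rather than the measured data, and generic SciPy tests are used instead of the exact SPSS/GraphPad LSD-t and Dunnett procedures reported by the authors.

```python
import numpy as np
from scipy import stats

# Hypothetical serum lactate values (mmol/L) for the four groups; placeholders only.
groups = {
    "CON":   np.array([3.1, 2.9, 3.4, 3.0, 3.2]),
    "MOD":   np.array([6.8, 7.1, 6.5, 7.4, 6.9]),
    "VC":    np.array([5.2, 5.6, 5.0, 5.4, 5.3]),
    "WFWRM": np.array([4.4, 4.7, 4.2, 4.6, 4.5]),
}

# One-way ANOVA across the four groups (overall group effect).
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pairwise comparison of each group against the model group
# (a plain t-test here; the paper reports LSD-t / Dunnett post hoc tests).
for name in ("CON", "VC", "WFWRM"):
    t, p = stats.ttest_ind(groups[name], groups["MOD"])
    print(f"{name} vs MOD: t = {t:.2f}, p = {p:.4f}")

# Pearson correlation between two indicators measured on the same animals,
# e.g. a metabolite (LA) versus an antioxidant enzyme (SOD); placeholder values.
la  = np.array([6.8, 7.1, 6.5, 7.4, 6.9, 5.2, 5.6, 5.0, 4.4, 4.7])
sod = np.array([88, 85, 90, 82, 86, 110, 105, 112, 125, 121])
r, p_corr = stats.pearsonr(la, sod)
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")
```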
Data Availability
The data used to support the findings of this study are included within the article.

Ethical Approval
The animal protocol was reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) of the Chengdu University of Traditional Chinese Medicine, Chengdu.

Conflicts of Interest
The authors declare that they have no conflicts of interest with regard to the contents of this article.

Authors' Contributions
YCX, LSJ, and TL designed the experiments; YCX, YJY, and TL carried out the laboratory experiments; YCX, TP, PT, and GTH analyzed the data, interpreted the results, prepared figures, and wrote the manuscript; GJL and LSJ contributed to reagents, materials, and analysis platforms and revised the manuscript.
Evidence for Policy Making: Health Services Access and Regional Disparities in Kerman Background and purpose: Health indices, regarding to their role in the development of a society, are one of the most important indices at national level. Success of national development programs is largely dependent on the establishment of appropriate goals at the health sector, among which access to healthcare facilities is an essential requirement. The aim of this study was to examine the disparities in health services access across the Kerman province. Materials and Methods: This was a cross-sectional study. Study sample consisted of the cities in Kerman province, ranked based on 15 health indices. Data was collected from statistical yearbook. The indices were weighted using Shannon entropy, then using the TOPSIS technique and the result were classified into three categories in terms of the level of development across towns. Results: The findings showed distinct regional disparities in health services across Kerman province and the significant difference was observed between the cities in terms of development. Shannon entropy introduced the number of pharmacologist per 10 thousand people as the most important indicator and the number of rural active health center per 1000 people as the less important indicator. According to TOPSIS, Kerman (0.719) and Fahraj (0.1151) ranked the first and last in terms of access to health services respectively. Conclusion: There are significant differences between cities of Kerman province in terms of access to health care facilities and services. Therefore, it is recommended that officials and policy-makers determine resource allocation priorities according to the degree of development for a balanced and equitable distribution of health care facilities. [Anjomshoa M. Mousavi M. *Seyedin H. Ariankhesal A. Sadeghifar J. Shaarbafchi-Zadeh N. Evidence for Policy Making: Health Services Access & Regional Disparities in Kerman. IJHS 2013;1(3):35-42] http://jhs.mazums.ac.ir 1.Introduction Recognizing the existing situation using appropriate indicators and reducing disparities is essential. The aim of socioeconomic development programs after the Iranian revolution is reduction of inequality in order to create social and economic justice in all provinces (1). Regional studies reveals that some areas have better performance than the other ones and enjoy more facilities and development level (2).GDP(Gross domestic product) and GDP per capita were the main indicators to assess development level. These indicators do not consider fairness in the distribution of health services and other social services (3). Health and development are closely linked to each other and can affect interchangeably (4). Health sector as an important social part of any country plays a decisive role in the well-being of people (5).Access to health care is crucial and is a multi-dimensional concept; physical access and financial access. Physical access is defined as geographical access to health facilities which can effect on health services usage (6). This issue has been the concern of community and health policy makers (7,8). Regional studies in many countries reveal that specific areas have better performance and have enjoyed the modern facilities (2).After the Islamic Revolution special attention has been paid to the health sector. Iran's Constitution has defied the provision of basic needs in health care as the responsibility of the government to mobilize its resources to meet the nation's health (9). 
The geographical distribution of health indicators (as one of the most important indicators of development) across the cities of Iran is heterogeneous and disproportionate (10). Iran's geographical conditions have led to diversity and unbalanced development (11). Similar to other developing countries, a small number of areas are responsible for the majority of production and national income. This means their income is at a higher level and, as a result, they enjoy more public services (12). Therefore, it is necessary to define the term access to health services and then develop a comprehensive program to address this problem. This study was conducted using the TOPSIS technique (Technique for Order of Preference by Similarity to Ideal Solution) to assess health services access across Kerman province. Because several criteria are compared at once, the use of other techniques leads to problems in decision-making; these problems do not arise with TOPSIS. As a member of the family of multi-criteria decision-making techniques, TOPSIS is used for ranking in different fields of science owing to its transparent nature and mathematical logic, and it has no major operational issues (13). Kerman, located in southeastern Iran, is considered one of the important and historical provinces. Kerman is regarded as the most important reference for industrial, cultural, political, agricultural, academic and scientific, religious and other activities within the south-east region of Iran. The aim of this study was to provide a clear vision of the status of Kerman cities in terms of access to health services.

Data were collected with a data collection form made by the researcher from the above-mentioned source. The indices were weighted using Shannon entropy; then, using the TOPSIS technique, each town was evaluated in terms of its access to health services, and finally the towns were classified into three categories in terms of their level of development. The Shannon entropy method included the following steps (14). First, the raw data matrix is normalized:
\[
p_{ij} = \frac{r_{ij}}{\sum_{i=1}^{m} r_{ij}},
\]
where \(p_{ij}\) is the normalized value of index \(j\) for option \(i\), \(r_{ij}\) is the initial index value, and \(m\) is the number of options available for the ranking. Then the entropy \(E_j\) of each index is calculated from \(p_{ij}\):
\[
E_j = -\frac{1}{\ln m}\sum_{i=1}^{m} p_{ij}\,\ln p_{ij}, \qquad j = 1, \dots, n,
\]
where \(n\) is the number of variables and \(m\) is the number of places being compared. Accordingly, the degree of uncertainty or deviation \(d_j\) for each index is obtained:
\[
d_j = 1 - E_j .
\]
Finally, the weight of each indicator \(w_j\) is calculated as follows:
\[
w_j = \frac{d_j}{\sum_{j=1}^{n} d_j}.
\]
TOPSIS is done in the following steps (22). First, the maximum and minimum values of each index are identified. Then the decision matrix is normalized:
\[
n_{ij} = \frac{r_{ij}}{\sqrt{\sum_{i=1}^{m} r_{ij}^2}}.
\]
If positive and negative indices are to be combined, the negative (cost-type) indices are first converted into positive (benefit-type) form. The weighted normalized matrix is then obtained:
\[
v_{ij} = w_j\, n_{ij}.
\]
The positive ideal and negative ideal solutions for each index are determined as
\[
v_j^{+} = \max_i v_{ij}, \qquad v_j^{-} = \min_i v_{ij},
\]
that is, the positive ideal of every index equals its maximum value and the negative ideal equals its minimum value.
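To make these steps concrete, the following is a minimal Python sketch of the Shannon-entropy weighting and the TOPSIS ranking, including the distance and relative-closeness computations described next. The matrix contents are invented placeholders, not the study's data, and all indices are treated as benefit-type, as in this study.

```python
import numpy as np

def entropy_weights(R):
    """Shannon entropy weights for a decision matrix R (m options x n indicators)."""
    m = R.shape[0]
    P = R / R.sum(axis=0)                          # p_ij = r_ij / sum_i r_ij
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -(1.0 / np.log(m)) * np.nansum(P * np.log(P), axis=0)  # 0*log(0) treated as 0
    d = 1.0 - E                                    # degree of deviation d_j
    return d / d.sum()                             # weights w_j

def topsis_scores(R, w):
    """Relative closeness C_i to the positive ideal; higher = more developed."""
    N = R / np.sqrt((R ** 2).sum(axis=0))          # vector normalization n_ij
    V = N * w                                      # weighted normalized matrix v_ij
    v_pos, v_neg = V.max(axis=0), V.min(axis=0)    # ideal solutions (benefit-type indices)
    d_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                 # C_i = d_i^- / (d_i^+ + d_i^-)

# Illustrative (made-up) matrix: 4 towns x 3 per-capita indicators.
R = np.array([
    [0.37, 0.14, 0.055],
    [0.10, 0.05, 0.010],
    [0.05, 0.02, 0.020],
    [0.02, 0.01, 0.002],
])
w = entropy_weights(R)
C = topsis_scores(R, w)
order = np.argsort(-C)                             # towns ranked from most to least developed
print("weights:", np.round(w, 3))
print("closeness:", np.round(C, 3), "ranking:", order)
```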
The distance of each option from the positive and negative ideal solutions (\(d_i^{+}\), \(d_i^{-}\)) is computed as
\[
d_i^{+} = \sqrt{\sum_{j=1}^{n}\bigl(v_{ij} - v_j^{+}\bigr)^2},
\qquad
d_i^{-} = \sqrt{\sum_{j=1}^{n}\bigl(v_{ij} - v_j^{-}\bigr)^2}.
\]
For the calculation of the relative closeness of each alternative to the ideal solution, the values of \(d_i^{+}\) and \(d_i^{-}\) are combined:
\[
C_i = \frac{d_i^{-}}{d_i^{+} + d_i^{-}} .
\]
The ranking is based on the decreasing values of \(C_i\): the highest \(C_i\) is considered the most developed and the lowest \(C_i\) the least developed.

Results
Encompassing more than 11 percent of Iran's area, Kerman is the largest province of Iran. The objective of this study was to use the TOPSIS technique for appraising health services access in the cities of Kerman province and to make policy makers aware of the differences between the cities in order to reduce them. First, using the Shannon entropy technique, the weights and ranks of the indices for determining the degree of development of the Kerman cities were derived (Table 1).

Table 1. Weight and rank of each health index
No.  Index                                                          Weight   Rank
3    Number of general practitioners per 1,000 people               0.065    9
4    Number of specialists per 1,000 people                         0.0691   5
5    Number of paramedical staff per 1,000 people                   0.065    10
6    Number of active beds of treatment centers per 1,000 people    0.0688   6
7    Number of rural active health houses per 1,000 people          0.062    14
8    Number of pharmacies per 1,000 people                          0.0634   12
9    Number of pharmacologists per 10,000 people                    0.0756   1
10   Number of dentists per 1,000 people                            0.0697   2
11   Number of rural active health centers per 1,000 people         0.0618   15
12   Number of urban health centers per 1,000 people                0.0621   13
13   Number of radiology centers per 1,000 people                   0.0693   4
14   Number of rehabilitation centers per 1,000 people              0.0693   3
15   Number of active treatment centers per 1,000 people            0.0687   7

As Table 1 shows, among the 15 health indices, the number of pharmacologists per 10 thousand people, with a weight of 0.0756, and the number of rural active health centers per 1,000 people, with a weight of 0.0618, had the highest (1st) and lowest (15th) ranks, respectively. Using the TOPSIS technique, the towns of Kerman were compared in terms of access to health services. In order to define a priority measure and for a better understanding of the status of health services access in Kerman, 16 towns were assessed and ranked into three categories (Table 2).

Discussion
The quality and accessibility of health services is one of the most important indicators for developing countries, since having a healthy life is a right for all and a prerequisite for the realization of sustainable development (15). In this study we tried to examine the disparities in health services resources across Kerman Province and, consequently, to identify priorities for development. Fifteen indicators were selected and analyzed using Shannon entropy and the TOPSIS technique. The Shannon entropy results showed that the number of pharmacologists per 10 thousand people was the most important indicator, and the number of rural active health centers per 1,000 people had the least importance. These results are consistent with the study of TahariMehrjardi et al (16). The results indicate that the number of pharmacologists per 10 thousand people had an uneven distribution across the cities of Kerman: while Kerman city had a score of 0.374, other cities such as Bardsir, Reygan, Sirjan, Shahr-e Babak, Faryab, Narmashir and Fahraj suffered from a lack of pharmacologists. A similar uneven distribution was found for other human resources: while the number of dentists per 1,000 people was 0.145 in Rafsanjan, Arzouyeh, Reygan, Faryab and Narmashir had no dentists.
Also number of rehabilitation centers per 1000 people was 0.055 in the city of Kerman, while Reagan, Faryab, and NarmashirFahraj County had no such facilities (zero). Health services should be accessible for all people in the Farthest and poorest parts of the country; however some services, such as specialist and active bedscan only delivered at larger cities and cover the suburbs. Smaller cities with a population lower than the service for population threshold only can be provided with mobile services or only on special weekdays (15). The results of this study show that access to healthcare facilities was very different among people in the Kerman Province. 18.75 % of the cities (Kerman, Rafsanjan and Ravar) were in the level of development; 43.75 % (including seven cities: Baft, Zarand, Bardsir, Bam, Shahr-e Babak, Sirjan and Jiroft) were in developing level, and 37.5 of cities (including 6 cities: Kahnuj, Faryab, Arzouyeh, Narmashir, Reagan and Fahraj) were under developed in terms of their access to healthcare services. Kerman city took the most and Fahraj city the least access to the health care facilities. In Zangiabadi`s studies (17) in Kurdistan, Nastaran (18) and Zarrabi(19) in Isfahan, Rafi'iyaan(4) at the Metropolis of Mashhad, Mousavi(20) in Kermanshah and Farhadyan(21), Amini(2) and Zarrabi (15) in all provinces were similar in conclusions about differences and disparities in the health services access. Also other studies at national level have shown uneven access to healthcare and other public facilities (22). Therefore, planners and policy-makers should focus their efforts on disparities in access and solving. In order to reduce the uneven access to healthcare services across cities and making it as an equitable distribution the authorities may need to develop first, a comprehensive and coordinated plan as a large-scale centralized and top-down approach, and second, (ii) a local and micro plan in small spatial scale (23). Finally, it is suggested that government's investments and its support from the private sector should be based on local needs and social justice which can result in solving regional disparities and elimination of inequalities (24). In terms of access to facilities we are certain that there are gaps and inequalities among the provinces and cities across the country. One of the possible reasons for this situation is governments' planning approaches through last 50 years. Because the decision IJHS 2013;1(3): 41 making have been according to centerperiphery model; most facilities located in large towns and cities, especially the capitals of the provinces, and very poor situation in other regions (25). In order to reduce the gap between the cities' access to health care and making distribution of services equitable, development of health indicators in poor cities (such as Kahnuj, Faryab, Arzouyeh, Narmashir, Reagan and Fahraj), and then developing ones (Baft, Zarand, Bardsir, Bam, Shahr-e Babak, Sirjan and Jiroft) is recommended. Furthermore, it is suggested that in the first stages of city development, authorities have special notice to development of short term health policies and programs which could result in expansion of critical health services and equity in access. At the next stage, authorities may pay attention to the development of necessary services in developing and deprived cities over a medium-term plan (5 years). Finally all cities spatial development in long term (10 year plan) could make access to health services quite equitable. 
Thus, in order to reduce the differences in access across Kerman Province, a hierarchical order of development based on this spatial hierarchy should be followed.
Photo-sensing and photo-conversion investigation of single walled carbon nanotube-silicon interface: role of acid stimulation Single walled carbon nanotubes (SWCNTs) are emerging as potential candidates in solar applications because of their remarkable structural, electrical and optical properties. In this work, we report a simple and effective approach for the fabrication of a SWCNT/Si interface, which plays an important role in photo-sensing and photo-conversion applications. It is observed that controlled acid treatment of SWCNTs at room temperature proved helpful for the removal of amorphous carbon and significantly enhanced their photo-sensing response from 4% to 40%. In addition, it is found that the open-circuit voltage and current density of the SWCNT/Si interface increase due to the presence of functional groups. Raman analysis also confirms that acid stimulation significantly affects their crystalline structure. These results are important and compatible with existing silicon technology without adopting any complex technique. Introduction Nowadays, carbon materials are emerging as prominent candidates for research due to their outstanding physical, electrical and chemical properties [1][2][3][4]. Among the various carbon materials, such as carbon nanotubes (CNTs), graphene and carbon nanoparticles, CNTs show remarkable optical properties, viz. tuneable and broadband light absorption, which are required in photoconductive applications. However, the photo-sensing response of nanotubes has produced considerable debate, with various studies leading to different interpretations about the cause of photoconductivity [5]. Chen et al [6] have investigated the photo-detecting properties of SWCNTs for the fabrication of a prototype infrared (IR) camera. It is well reported that nanotube-based photodetectors demonstrate excellent IR detection properties with low fabrication cost and extraordinary performance [6][7]. Moreover, the addition of small amounts of nanotubes to an organic photovoltaic cell (OPC) effectively enhances the efficiency of the cell because of their fast charge-transport mobility and good optical absorption near the IR band gap [8]. The well-optimized incorporation of nanotubes, especially SWCNTs, in a polymer/oxide matrix significantly increases the efficiency and stability of an OPC [9][10][11][12]. In addition, the efficiency of CNT/Si interface based solar cells can be controlled easily using acid-stimulated CNTs. The chemical functionalization of the nanotubes with acids, e.g. nitric (HNO 3 ) and sulfuric (H 2 SO 4 ), is very effective for making Si/CNT interface based solar cells [13,14]. Moreover, the efficiency of a Si/CNT solar cell can be effectively improved by controlled etching of the silicon oxide (SiO 2 ) using hydrofluoric acid (HF). The etching time, which corresponds to the thickness of the SiO 2 , plays an important role in the formation of the Si/SWCNT interface. Without the removal of the oxide, device parameters such as Jsc and FF are significantly degraded and inconsistent across SWCNT/Si interfaces [9]. However, the physical behavior of the Si/CNT interface has not been fully explored or discussed by researchers. Generally, when p-type nanotubes are placed in contact with an n-type etched silicon substrate, energy band bending takes place: the bands of the p-type nanotubes bend down toward the Fermi level, while the bands of the n-type Si bend away from the Fermi level [12][13][14].
It is also expected that a thin insulating layer exists due to the native oxide between the Si and the CNTs, which affects the efficiency and properties of the Si/CNT interface. Therefore, experimentally it is very important to control the HF etching parameters at every stage of the Si/CNT interface fabrication. The main aim of the present work is to report simple steps for the fabrication of a SWCNT/Si interface for the study of its photo-sensitivity and power conversion efficiency properties under illumination of AM1.5 (100 mW cm −2 ). It is found that acid stimulation of the SWCNTs significantly affects the porous nanotube network and forms numerous heterojunctions that improve the SWCNT/Si interface properties by decreasing the internal electrical resistance and supporting charge-carrier separation and transport. To the best of our knowledge, a step-by-step study of acid-treated SWCNT/Si interfaces for photocurrent and photo-sensing applications has not been commonly reported in the literature. The study indicates that the SWCNT/Si interface shows prospective applications in optoelectronic devices. Experimental method Acid stimulation of CNTs Pristine SWCNTs were purchased from Global Nanotech, India, with more than 95% purity and are labeled here as sample 1. Nitric acid (HNO 3 ) of 0.5 and 1 M was used for chemical functionalization of the SWCNTs at room temperature (RT). Before acid functionalization, the as-procured SWCNTs were placed in an electric oven for 4 h at 400 °C for the elimination of metal particles/amorphous carbon. The heating of nanotubes in an electric oven for the removal of impurities was well optimized in previous work [15][16]. Afterwards, the heated nanotubes were dispersed in 0.5 M and 1 M HNO 3 and stirred for 2 h at RT. The acid-dispersed solution was further kept for 10 h at RT. To remove the acid from the nanotubes, centrifugation and multiple washes with deionized water were performed, followed by drying in an electric oven at 100 °C. The tubes so obtained were named sample 2 (0.5 M) and sample 3 (1 M) and were finally dispersed in dimethyl formamide (DMF) solvent for further analysis. Acid-stimulated nanotubes have reduced tube-tube van der Waals attraction and are soluble even in water. Fabrication of Si/CNTs interface To fabricate the SWCNT/Si interface, an n-type Si wafer was used as the substrate. An insulating layer of silicon dioxide (SiO 2 ) of 350 nm thickness was deposited on the Si wafer. For the metal contact, the desired area of the back side of the Si substrate was chemically etched with hydrofluoric acid (HF) using a Teflon beaker and tweezers. Afterwards, aluminum (Al) metal was deposited on the etched area using the thermal evaporation technique. For the fabrication of the SWCNT/Si interface, a shadow mask of the desired area was designed and then placed on top of the SiO 2 layer. Similarly, various windows were created by selective chemical etching of the SiO 2 , and each window makes one active device area. Finally, the well-dispersed nanotubes were deposited on the active window by the drop-casting method. For making the top contact on the fabricated SWCNT/Si interface, silver (Ag) metal of 150 nm thickness was deposited using the thermal evaporation technique. The choice of metal contacts in the fabricated device is based on their work function values relative to the electron affinity of the SWCNT/Si interface. The details of the adopted process are shown in figure 1(a).
The schematic of the electrical measurement setup for the fabricated device is shown in figure 1(b). For testing the device, the top contact (Ag) of the CNT film (on oxide) and the bottom of the Si (Al) were wired as the positive and negative electrodes to complete the fabrication process. The I-V and I-t measurements were finally recorded on a PC using a program developed in LabVIEW with GPIB control. Characterizations The structural properties of all samples were evaluated by recording Raman spectra using a Horiba LabRAM HR spectrometer coupled with an Ar + ion laser of wavelength 514.5 nm. To perform the photo-sensing and PV testing, the devices were irradiated under a solar simulator at AM1.5 (∼100 mW cm −2 ), and the data were recorded using a Keithley 2400 source meter. Result and discussions The effect of acid stimulation on the structure of the carbon nanotubes has been investigated by Raman spectroscopy. The Raman spectra of the as-procured SWCNTs (sample 1) and the chemically functionalized SWCNTs (samples 2 and 3) are shown in figure 2(a). The spectra consist of three standard characteristic bands, namely the D band at around 1352 cm −1 , the G band at 1592 cm −1 and the 2D band at 2698 cm −1 , respectively. It is also observed that a D+G band appeared after the acid stimulation of the nanotubes, which confirms that the intrinsic structure of the procured nanotubes is retained after functionalization and that they become less entangled, giving an increased intensity of the Raman signal. It is further observed that after functionalization the D band intensity in samples 2 and 3 is considerably increased, which is mainly due to side-wall sp 2 -sp 3 hybridization [15,16]. It is well known that the D to G band intensity ratio (I D /I G ) is related to the degree of disorder and is inversely proportional to the crystalline quality of the nanotubes [15,16]. As shown in table 1, the I D /I G ratio for the pristine nanotubes is 0.9, which confirms the presence of sp 2 hybridization. Furthermore, the I D /I G ratio for the acid-stimulated nanotubes is found to increase to 1.17 (sample 2) and 1.11 (sample 3) as compared to sample 1, which confirms the attachment of functional groups on the side walls as well as on the ends of the tubes. The presence of these groups on the CNTs is mainly considered as defects in their tubular structure [17,18]. It can also be interpreted as follows: the creation of defect sites is responsible for the weakening of sp 2 hybridization and a comparatively strengthened sp 3 -bonded carbon [17]. In addition, the chemical functionalization of the nanotubes results in the intercalation of acid molecules inside their lattice, so that a pressure is experienced and stress is exerted on them. The experienced pressure is the main cause of the up-shift of the G band wavenumber, as shown in figure 2(b) [19,20]. This stress is responsible for a change in the nanotube inter-atomic distance that results in a shortening of the C-C bonds. Another cause of stress could be the charge transfer between the acid molecules and the nanotubes, through which hole doping occurs in the nanotubes [20,21]. The crystallite size (L a ) of the nanotubes before and after acid stimulation was calculated from the I D /I G ratio using the relation given in [15,16]. Table 1 shows that the crystallite size of the pristine nanotubes (sample 1) is higher compared to the acid-stimulated tubes (samples 2 and 3). In addition, the defect density [(1/L a ) 2 ] follows the inverse trend: as the crystallite size decreases in table 1, the defect density of the acid-stimulated nanotubes increases.
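For illustration, the following minimal Python sketch shows how L a and the defect density can be estimated from the I D /I G ratios reported in table 1. The exact relation used by the authors is not reproduced in the text; the widely used Cancado-type relation for the 514.5 nm excitation used here is assumed purely for demonstration.

```python
# Hypothetical sketch: crystallite size L_a and defect density from the Raman I_D/I_G ratio.
# The relation below (Cancado-type, lambda^4 scaling) is an assumption, not the paper's own formula.

LASER_WAVELENGTH_NM = 514.5  # excitation wavelength used in the Raman measurements

def crystallite_size_nm(id_ig_ratio: float, wavelength_nm: float = LASER_WAVELENGTH_NM) -> float:
    """L_a (nm) ~ 2.4e-10 * lambda^4 / (I_D/I_G), lambda in nm (assumed relation)."""
    return 2.4e-10 * wavelength_nm ** 4 / id_ig_ratio

def defect_density(l_a_nm: float) -> float:
    """Defect density taken as (1/L_a)^2, in nm^-2, as described in the text."""
    return (1.0 / l_a_nm) ** 2

# I_D/I_G values reported in table 1
samples = {"sample 1 (pristine)": 0.90, "sample 2 (0.5 M)": 1.17, "sample 3 (1 M)": 1.11}

for name, ratio in samples.items():
    la = crystallite_size_nm(ratio)
    print(f"{name}: I_D/I_G = {ratio:.2f}, L_a ~ {la:.1f} nm, (1/L_a)^2 ~ {defect_density(la):.5f} nm^-2")
```

As expected, a larger I D /I G gives a smaller L a and a larger defect density, consistent with the trend described above.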
Interestingly, the full width at half-maximum (FWHM) of the G and 2D bands of samples 2 and 3 increases significantly on interaction of the acid molecules with the nanotubes, as can be seen from Table 1. Moreover, second-order bands, namely the G′ and D+G bands, also appeared in samples 2 and 3. The intensity of the G′ band indicates the electronic condition of the nanotubes. As shown in figure 2(a), the intensity of the G′ band in sample 3 is enhanced compared to sample 2, which clearly confirms higher charge doping in sample 3. In addition, the attachment of functional groups leads to an increase in the number of bands close to the Fermi level, which further facilitates charge transfer between the carbon atoms [22,23]. Figures 3(a), (b) show the typical J-V characteristics of the SWCNT/Si interface in the dark and under 100 mW cm −2 illumination. It is observed that parameters such as the open-circuit voltage (V oc ), short-circuit current (I sc ), current density (J) and fill factor (FF) of the three proposed samples are found to increase. This means that functional groups play an important role at the SWCNT/Si interface. It is well known that acid-stimulated nanotubes usually behave as p-type semiconductors and also have an increased surface-to-volume ratio [14,15]. Also, when the nanotubes are fully spread on a planar Si substrate, numerous p-n junctions form due to the close contact between the SWCNTs and the underlying Si wafer. Furthermore, it is found that sample 3 has the highest V oc , J and I sc values compared to all investigated samples. This is because of the large surface-to-volume ratio, which affects the mobility of the nanotubes and hence promotes exciton dissociation and charge-carrier transport [14,24]. In order to investigate the photo-sensing behavior of the SWCNT/Si interface, a bias is applied across the two terminals and the corresponding current is measured under light illumination and in the dark as a function of time. Figures 4(a), (b) show the photo-sensing response of the 1 wt% acid-stimulated nanotube-silicon interface at room temperature. At a fixed bias voltage, the current flowing through the SWCNT/Si interface instantaneously increased on light illumination, stabilized after a few seconds and then quickly recovered to its initial value when the light was switched off. As shown in figures 4(a), (b), in these photo-response transients samples 2 and 3 respond reversibly to light over a number of light ON and OFF cycles, indicating the repeatability of the photo-sensor response. A photo response of 28% and 40% has been observed for sample 2 and sample 3, respectively. It is observed that acid infiltration of the SWCNTs proved helpful for the removal of impurities such as amorphous carbon/disorder and improved the photo response at room temperature. The functionalized SWCNT/Si interface exhibits a high photosensitivity, i.e. 40%, as compared to the pristine CNT/Si interface (5%) at room temperature at 1 V. This significant improvement could be due to the photocurrent generation ability of the SWCNT/Si interface and the effective charge separation at the functional groups, which are considered as defects in the nanotubes, as well as the large number of SWCNT/Si heterojunctions [14]. In addition, the defects in the acid-stimulated nanotubes work as localized potential barriers for the separation of photogenerated carriers [14]. The calculated response time of the pristine SWCNT/Si photosensor was 30 ms, which is quite large compared to the functionalized SWCNT/Si photosensor (10 ms).
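As a small illustration of how such a photo-response figure and response time can be extracted from an I-t transient, the sketch below assumes the common definition response (%) = (I_light − I_dark)/I_dark × 100; the paper does not state its exact formula, and the current values used are illustrative, not measured data.

```python
# Hypothetical sketch: photo-response and rise time from a light ON/OFF current transient.
# The response definition and the current values below are assumptions for illustration only.
import numpy as np

def photo_response_percent(i_light: float, i_dark: float) -> float:
    """Relative current change under illumination, in percent (assumed definition)."""
    return (i_light - i_dark) / i_dark * 100.0

def rise_time(t: np.ndarray, i: np.ndarray) -> float:
    """Time for the current to rise from 10% to 90% of its total change after light ON."""
    i_norm = (i - i.min()) / (i.max() - i.min())
    t10 = t[np.argmax(i_norm >= 0.1)]
    t90 = t[np.argmax(i_norm >= 0.9)]
    return t90 - t10

# Illustrative dark/light current levels (not taken from the paper)
print(photo_response_percent(i_light=1.40e-6, i_dark=1.00e-6))  # -> 40.0 %
```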
Figure 4(d) shows the comparison of the photo response of the three different samples under the same conditions. The proposed SWCNT/Si interface does not require any complex or time-consuming techniques and is fully compatible with existing silicon technology. Conclusions In brief, we observed the role of functional groups at the SWCNT/Si interface for photo-sensor/solar cell applications. We reported a simple and economical approach for the fabrication of the SWCNT/Si interface which is applicable to existing solar technology and can be scaled to large areas. It is observed that acid stimulation significantly reduces the internal electrical resistance and facilitates additional charge-transport paths in the porous carbon material, resulting in considerably enhanced photo-conversion efficiency and photo-sensing response. The fabricated SWCNT/Si interface is widely useful in photo-sensor and solar cell applications.
2020-09-03T09:10:27.171Z
2020-09-10T00:00:00.000
{ "year": 2020, "sha1": "9cfe4f328d370753b23b74349f5b5b74a159388b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/2632-959x/abb362", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "1bcac2871e1f3c49600f44c4ee55d8dd12268ab0", "s2fieldsofstudy": [ "Chemistry", "Engineering", "Environmental Science", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
109595686
pes2o/s2orc
v3-fos-license
PV Generator Performance Evaluation and Load Analysis of the PV Microgrid System in Thailand Normally, the main generators of a microgrid system use controllable energy resources such as fossil fuel, biomass, biogas, hydro, etc., for uncomplicated control. In contrast, it is very challenging to control a microgrid system that uses uncontrollable energy resources such as solar and wind for its main generators, even though these resources have many advantages. From this point, a PV microgrid system was constructed and operated at the School of Renewable Energy Technology (SERT), Naresuan University, for research and development of a microgrid system in which the PV main generator supplies 50% of the total electricity demand. Important parameters such as solar irradiance, PV array voltage, PV array current, and AC electrical power were measured and collected for a year, from November 2008 to October 2009, for use in the evaluation. The PV generator evaluation reveals that the average reference yield (Yr), array yield (YA), and final yield (Yf) are 5.21, 4.32, and 3.84 kWh/kWp day, respectively. The average total loss of the PV generator is 26.27%, which comes from summing the average capture losses (LC) of 17.21% and the average system losses (LS) of 9.06%. The average overall PV plant efficiency (ηtot) is 10.41%, and the average performance ratio (PR) is 73.45%. For the load analysis of the microgrid, the total load is 231673 kWh/year or 635 kWh/day, and the main loads of the microgrid are the real load and the battery storage loss. The real load varies from 9803 to 22506 kWh/month, and the average real load is 15434 kWh/month. However, the battery storage loss is nearly constant at 3888 kWh/month. The load profile shows that the peak load period is 8 A.M. to 7 P.M. and the off-peak load periods are 0 A.M. to 8 A.M. and 7 P.M. to 0 A.M. Moreover, the load in the peak load period on working days is higher than on days off, but the load in the off-peak load period is not different. When the load profile is compared with the PV generator production, the load is 100% supplied by the PV generator during 8 A.M. to 4 P.M. on working days and 8 A.M. to 5 P.M. on days off. Moreover, the PV generator generates surplus energy of 169 kWh/day on working days and 232 kWh/day on days off during these periods. Introduction A microgrid system can be defined as a group of distributed energy resources (DER) and loads functioning as a single controllable system that reacts to central control command signals and supplies both power and heat to its regional area [1]. Moreover, it is also defined as an independent low-voltage distribution system that has a set of DER with energy storage and microsources such as PV, wind, microturbine, CHP, fuel cell, etc. [2]. A microgrid system not only provides power to its local area, but also exchanges power with the national power grid when its power generation is insufficient or exceeds its load. Normally, a microgrid system has 5 main components as follows: 1) distributed generation or microsources, 2) loads, 3) energy storage system, 4) controller, and 5) point of common coupling. There are two common operation modes of a microgrid system: grid-connected mode and island mode. Normally, only controllable energy resources such as fossil fuel, biomass, biogas, hydro, etc.
are used in primary generator because it is uncomplicated controlling with high security and stability of power generation from these energy resources. In the different way, the potential of uncontrollable energy resources is high and scattering in every part of the world. So, uncontrollable energy resources are the alternative energy resources for using as energy resources of main generator in microgrid system. Therefore, it is very interesting to study how to manage these problems in operating of microgrid systems that have uncontrollable energy resources for major generator. This is the origin of the research on of microgrid system that use PV for major generator with the energy fraction over 50 % by the cooperation of School of Renewable Energy Technology, Naresuan university (SERT), New Energy and Industrial Technology Development Organization (NEDO), and Shikoku Electric Power co., inc. (YONDEN) in The International Cooperative Demonstration Project for Stabilized and Advanced Grid-Connection Photovoltaic Systems Demonstrative Research Project on Micro Grid Stabilization. The PV microgrid system is constructed and operated at SERT in Thailand. The main devices and schematic diagram of PV microgrid system are presented in Fig. 1. From the PV microgrid concept, PV generator and load are the vital components that play the serious role to achieve the electricity supply fraction goal. The performance evaluation of the PV generator and load analysis are the important activities that is not only designating the efficiency and performance but also exploring the imperfect and unusual of the generator and load. The evaluation results can be used as the database in adjustment and maintenance PV generator to keep the highest efficiency and performance. The performance evaluation of the PV system is a common action that is usually executed in many part of the world. In Japan, 30 kW p bifacial PV systems that installed in the microgrid system in Aichi airport-site demonstrative research plant for new energy power generation was evaluated. The result shows that the energy generation of bifacial PV in vertical installation is about 90% of singlefacial PV faced to south with 30º tilt [3]. In addition, various configurations of PV roof top systems are evaluated and the result is presented that south-oriented systems have about 22 % higher reference yield [4]. In Thailand, 500 kW p PV power plant at Mea Hong Son province was evaluated. The result was displayed that the final yield is in 2.91 -3.98 range and performance ratio is in 0.7 -0.9 range [5]. Moreover, 5 kW p PV system at Rajamangala University of Technology Suvarnabhumi was evaluated by using data from data monitor system and standard instrument (IV Checker) and the result was showed that the both data give the different of average array efficiency about 1 % [6]. For the objective of this research, PV generator performance and load of PV microgrid system that installed at SERT is evaluated and analyzed for 1 year. Data collection In this evaluation, PV microgrid system is assigned to operate in grid connected mode and every major device is regularly working for highest efficiency and performance of PV generator. 
The data collection system in the PV microgrid controlling system collects important parameters such as solar irradiance, ambient temperature, module temperature, PV output, PV inverter output, exchanged power between the grid and the PV microgrid system, diesel generator output, battery inverter input/output, and load from the sensors installed in the PV microgrid system. The data collection system records these significant parameters every minute during PV microgrid system operation. The collected data are transferred to a graphic operation terminal (GOT) 1000 for display and storage in its compact flash memory. The data stored in the compact flash memory are downloaded to a computer every week for use in the performance evaluation of the PV generator. For the instruments, solar irradiance is measured by an EKO MS-602 ISO second class pyranometer, all temperatures are measured by type T thermocouples, and all electric power is measured by a Toyo Keiki WGM-04A watt/watt-hour transducer set. Performance evaluation procedure The technical analysis processes of the International Energy Agency Photovoltaic Power Systems (IEA PVPS) Task 2 -Performance, Reliability and Analysis of Photovoltaic Systems, which are based on the EU guidelines and the IEC 61724 standard [7,8,9,10], are used to evaluate the efficiency and performance of the PV generator in this evaluation. The important parameters and equations for the analysis are given in [7-10]; in particular, the performance ratio is defined as PR = Y f / Y r (10). Solar radiation and ambient temperature analysis The daily average solar radiation in each month ranges from 4.50 to 5.93 kWh/m² day and can be classified into 2 groups. First, the high solar radiation group, available from November 2008 to May 2009 in the winter and summer of Thailand, has a daily average solar radiation of about 5.50 kWh/m² day. Second, the low solar radiation group, available from June 2009 to October 2009 in the rainy season of Thailand, has a daily average solar radiation of about 4.81 kWh/m² day. The annual daily average solar radiation is approximately 5.21 kWh/m² day, which is a little higher than the annual daily average solar radiation of Thailand, 5.05 kWh/m² day, given by DEDE. The daily average ambient temperature in each month ranges from 29 to 35 °C and can be categorized into 3 groups following the seasons. The daily average ambient temperatures of winter, summer, and the rainy season are 31, 32, and 33 °C, respectively. The annual daily average ambient temperature is 33 °C. The daily average solar radiation and ambient temperature in each month are presented in Fig. 2. Overall PV generator The daily average PV generator output is in the range 394-530 kWh/day in each month and 461 kWh/day for the year. When the overall PV generator is evaluated following the EU guidelines and the IEC 61724 standard, the daily average reference yield, array yield and final yield in each month are as shown in Fig. 3 (a). In winter and summer, these parameter values are higher than in the rainy season. The daily average total energy of the PV generator, which is equivalent to the daily average reference yield, consists of the daily average final yield, capture loss, and system loss in each month, with the ratios presented in Fig. 3 (b). The figure shows that the daily average reference yield and final yield in winter and summer are higher than those in the rainy season. Moreover, the daily average capture loss and system loss in winter and summer are lower than those in the rainy season.
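For reference, the sketch below shows the standard IEC 61724 / IEA PVPS Task 2 yield metrics mentioned above. The paper's own equations (1)-(9) are not reproduced in this excerpt, so the usual textbook definitions are assumed; only PR = Yf/Yr (eq. 10) is stated explicitly in the text, and the 120 kWp array size in the example is a placeholder, not a value taken from the paper.

```python
# Sketch of standard IEC 61724 yield metrics (definitions assumed; only PR = Yf/Yr is in the text).
from dataclasses import dataclass

G_STC = 1.0  # reference irradiance, kW/m^2

@dataclass
class DailyRecord:
    insolation_kwh_m2: float   # in-plane solar radiation H (kWh/m^2 day)
    e_dc_kwh: float            # PV array DC energy (kWh/day)
    e_ac_kwh: float            # energy delivered to the AC side (kWh/day)

def yields(rec: DailyRecord, p0_kwp: float) -> dict:
    yr = rec.insolation_kwh_m2 / G_STC          # reference yield (h/day)
    ya = rec.e_dc_kwh / p0_kwp                  # array yield (h/day)
    yf = rec.e_ac_kwh / p0_kwp                  # final yield (h/day)
    return {
        "Yr": yr, "Ya": ya, "Yf": yf,
        "Lc": yr - ya,                          # capture losses
        "Ls": ya - yf,                          # system losses
        "PR": yf / yr,                          # performance ratio (eq. 10)
    }

# Example using the annual averages reported in the paper and a hypothetical 120 kWp array:
print(yields(DailyRecord(5.21, 4.32 * 120, 3.84 * 120), p0_kwp=120))
```

With the reported annual yields (Yr = 5.21, YA = 4.32, Yf = 3.84 h), this reproduces LC ≈ 0.9 h, LS ≈ 0.5 h and PR ≈ 0.74, consistent with the values quoted in the next paragraph.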
For the annual daily average final yield, capture losses, and system losses, the values are 3.84 h or 73.6%, 0.90 h or 17.3% and 0.47 h or 9.1%, respectively, as displayed in Fig. 4(a). The daily average performance ratio and overall PV plant efficiency in each month are shown in Fig. 4 (b). From the figure, the annual daily average performance ratio and overall PV plant efficiency are 73.6% and 10.41%, and these parameter values in winter and summer are higher than those in the rainy season. Load analysis From the load data collected from the PV microgrid, the total load is 231673 kWh/year or 635 kWh/day, lies between 13970 and 27004 kWh/month, and can be classified into 2 main loads. First, the real load is 185212 kWh/year or 507 kWh/day, ranges from 9803 to 22506 kWh/month, and has an average of 15434 kWh/month. Second, the battery storage loss is 46658 kWh/year, or about 128 kWh/day, and is nearly constant at about 3888 kWh/month. Fig. 5 displays the total load, real load, and battery storage loss in each month. When the PV generator supplying ratio is calculated, it is 73%, which achieves the PV generator supplying target. For the load profile, the peak load period is 8 A.M. to 7 P.M. and the off-peak load periods are 0 A.M. to 8 A.M. and 7 P.M. to 0 A.M. In addition, the load in the peak period on working days is about 35 kW, with the highest peak period from 1 P.M. to 5 P.M., while on days off it is about 25 kW. However, the loads in the off-peak periods are about 15 kW and are not different. When the load profile is compared with the PV generator production, the load is 100% supplied by the PV generator during 8 A.M. to 4 P.M. on working days and 8 A.M. to 5 P.M. on days off. Moreover, the PV generator generates surplus energy of 169 kWh/day on working days and 232 kWh/day on days off during these periods. The surplus energy is used to charge the battery and is exported to the external grid. Fig. 6 shows the working day load profile, the day off load profile, and the PV generator power. Conclusion The performance and efficiency of the PV generator in the PV microgrid system are very good. The annual daily average performance ratio and overall PV plant efficiency are 73.6% and 10.41%, respectively. The annual daily average reference yield, array yield and final yield are 5.21, 4.32 and 3.84 h, respectively. These parameters have higher values in winter and summer than in the rainy season. The total loss in the PV generator is 26.27%, which consists of the capture loss (17.3%) and the system loss (9.1%). Both capture loss and system loss are higher in the rainy season than in winter and summer. The cause of the higher losses in the rainy season is the fluctuation of the solar irradiance in this season, which makes the power generation of the PV array unstable, so that the PV inverter has to change the MPP point very often. For the load analysis of the microgrid, the total load is 231673 kWh/year or 635 kWh/day, and the main loads of the microgrid are the real load and the battery storage loss. The real load varies from 9803 to 22506 kWh/month and the average real load is 15434 kWh/month. However, the battery storage loss is nearly constant at 3888 kWh/month. In addition, the PV generator supplying ratio is 73%, which achieves the PV generator supplying target of the PV microgrid. Considering the load profile, the peak load period is 8 A.M. to 7 P.M. and the off-peak load periods are 0 A.M. to 8 A.M. and 7 P.M. to 0 A.M.
Moreover, the load in the peak load period on working days is about 35 kW, which is about 10 kW higher than the load in the peak load period on days off, while the loads in the off-peak load periods are about 15 kW and are not different. When the load profile is compared with the PV generator production, all of the load is supplied by the PV generator during 8 A.M. to 4 P.M. on working days and 8 A.M. to 5 P.M. on days off. Moreover, the PV generator generates surplus energy of 169 kWh/day on working days and 232 kWh/day on days off during these periods. The surplus energy is used to charge the battery and is exported to the external grid.
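As a quick check of the numbers above, the sketch below reproduces the reported PV supplying ratio and battery storage loss from the annual daily averages. The exact formula for the supplying ratio is not stated in the excerpt; the ratio of average daily PV generation to average daily total load is assumed here.

```python
# Sketch reproducing the reported PV supplying ratio from annual daily averages (formula assumed).
PV_OUTPUT_KWH_DAY = 461.0   # annual daily average PV generator output
TOTAL_LOAD_KWH_DAY = 635.0  # annual daily average total load
REAL_LOAD_KWH_DAY = 507.0   # annual daily average real load

supplying_ratio = PV_OUTPUT_KWH_DAY / TOTAL_LOAD_KWH_DAY
battery_storage_loss = TOTAL_LOAD_KWH_DAY - REAL_LOAD_KWH_DAY

print(f"PV supplying ratio ~ {supplying_ratio:.0%}")                  # ~ 73 %, as reported
print(f"Battery storage loss ~ {battery_storage_loss:.0f} kWh/day")   # ~ 128 kWh/day
```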
2019-04-12T13:57:36.989Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "e07bfaa2a484b985ca1c0d58b8447716e6158de5", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.proeng.2012.01.1283", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "7ae87f4828791eecff33b95175d761cf1468ae0e", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Engineering" ] }
81648383
pes2o/s2orc
v3-fos-license
Migration of a metallic wire foreign body stuck and impacted in the lower part of the hypopharynx to the neck - a rare case report We report the case of a 40-year-old female who presented to our OPD with severe neck pain, fever and swallowing difficulty for the past one week. She had ingested a sharp foreign body eight days earlier, after which she developed pain and difficulty in swallowing. She was advised an X-ray, AP and lateral view of the neck, which showed a radio-opaque foreign body at the C5 and C6 level on the left side (Figure 1). A video laryngoscopic examination was normal. The patient had persistent pain and difficulty in swallowing and developed fever. She was then referred to our centre for further management. On examination, the patient was conscious and coherent. She was febrile (101 °F), with the rest of the vitals stable. The patient had no difficulty in breathing and there was no change of voice. On examination, a diffuse swelling approximately 2.5 x 2.5 cm was seen on the left side of the neck, medial to the sternocleidomastoid muscle. On palpation, the swelling was tender with raised local temperature. The patient was admitted and blood investigations were done. The patient was advised an X-ray of the neck, lateral view, in which we found a radio-opaque foreign body at the same level, with pre-vertebral soft tissue widening and a bamboo stick appearance (straightening of the cervical vertebrae) ( Figure 2). The patient underwent a direct laryngoscopy with oesophagoscopic examination under general anesthesia, which did not reveal any abnormality. The patient was further advised a CT scan, which revealed a 2.7 cm linear, well-defined foreign body of metallic density in the retropharyngeal space at the C5-C6 level ( Figure 3). It was localized to the left paramedian aspect, superior and posterior to the left lobe of the thyroid gland and extending posterior to the carotid sheath at that level. A collection of size 3.5 x 3.2 cm superior to the left lobe of the thyroid gland was seen. The patient was planned for a neck exploration by an external approach. A vertical incision was made along the anterior border of the sternocleidomastoid from the level of the hyoid bone superiorly to the cricoid cartilage inferiorly. The deep fascia was incised along the anterior border of the sternocleidomastoid muscle. The dissection was continued in the paralaryngeal tunnel. Pus collected at the superior pole of the left lobe of the thyroid gland was drained ( Figure 2). A sample of the pus obtained was sent for culture and sensitivity. On further dissection, the foreign body was seen in a horizontal orientation, embedded in the soft tissues medial to the carotid sheath and deep to the superior lobe of the thyroid gland ( Figure 3). The foreign body was retrieved and was found to be a metal wire of length 2.7 cm ( Figure 5). The post-operative course was uneventful and the patient was discharged after five days.
Discussion The incidence of ingested foreign bodies penetrating the esophagus and going extra luminal into the neck and forming a neck abscess is rare. The perpendicular orientation of the foreign body to the esophageal lumen facilitates the migration. The migrating foreign body paves the way for bacteria to enter the soft tissues of the neck causing suppuration. The incidence of abscess in the neck following extra luminal migration of the foreign body is less than 1%. 1,2 X ray, though useful in confirming the presence of foreign body, cannot give a conclusion about the foreign body's precise location. The CT scan is an effective tool that can be used to locate the migrated foreign bodies. 3 It is an accurate and gives precise location of foreign body compared to a plain X-ray of the neck. The relation of the foreign body to the great vessels of the neck and chest should be studied prior to surgical exploration. Al Muhanna et al.,7 reported a case of foreign body, in which location of any foreign body was unsuccessful even after repeated oesophagoscopy. A careful clinical and radiological evaluation is necessary to detect extra luminal migration. The present case is unusual, in a way, that patient had a neck abscess following an extra luminal migration of a foreign body, which was localized and surgically managed, thus avoiding life threatening complications. A cervical abscess due to a migrated foreign body can be managed by draining the abscess and retrieving the foreign body. Conclusion Any suspicion of ingestion of a foreign body should not be taken on a lighter note. Absence of foreign body on clinical examination and rigid endoscopy is not confirmatory. This rare case report warrants a high index of suspicion when a foreign body is missed by a rigid esophagoscopy. The persistence of symptoms is an ominous sign of the developing complications. A CT scan of neck is recommended to know the exact size and location of the foreign body, helps in the treatment planning. Immediate exploration of the neck and removal of the foreign body avoids life threatening complications. We report this case because, an ingested foreign body migrated into the neck with cervical abscess is a rare presentation and we managed it successfully. Consent Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent is available for review by the editor in chief of this journal.
2019-03-18T13:57:56.424Z
2018-05-01T00:00:00.000
{ "year": 2018, "sha1": "18918215050d4f19375c6862cda8eb68ad24a255", "oa_license": "CCBYNC", "oa_url": "https://medcraveonline.com/JOENTR/JOENTR-10-00327.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "48aae5758ce4ebece58c90def9c03a4dc6773399", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256946970
pes2o/s2orc
v3-fos-license
Theoretical prediction and atomic kinetic Monte Carlo simulations of void superlattice self-organization under irradiation Nano-structured superlattices may have novel physical properties and irradiation is a powerful mean to drive their self-organization. However, the formation mechanism of superlattice under irradiation is still open for debate. Here we use atomic kinetic Monte Carlo simulations in conjunction with a theoretical analysis to understand and predict the self-organization of nano-void superlattices under irradiation, which have been observed in various types of materials for more than 40 years but yet to be well understood. The superlattice is found to be a result of spontaneous precipitation of voids from the matrix, a process similar to phase separation in regular solid solution, with the symmetry dictated by anisotropic materials properties such as one-dimensional interstitial atom diffusion. This discovery challenges the widely accepted empirical rule of the coherency between the superlattice and host matrix crystal lattice. The atomic scale perspective has enabled a new theoretical analysis to successfully predict the superlattice parameters, which are in good agreement with independent experiments. The theory developed in this work can provide guidelines for designing target experiments to tailor desired microstructure under irradiation. It may also be generalized for situations beyond irradiation, such as spontaneous phase separation with reaction. an intrinsic instability that leads to the appearance of a periodic inhomogeneous structure with both characteristic symmetry and length (i.e., superlattice symmetry and parameter). In the literature, different theoretical approaches have been proposed to explain these two characteristic properties. The appearance of a characteristic length has been explained by thermodynamic instability 11,12 analogous to the spinodal decomposition in solid and liquid solutions 13 , dynamic instability in reaction-diffusion systems (e.g., Turing instability) [14][15][16][17] , and long-range interactions (e.g., void-void elastic interaction) 18 . In principle, a characteristic length of of void distribution should emerge during phase separation (i.e., between a void phase and the matrix phase), provided the defect dynamics (including production, annihilation and reaction) is considered correctly. In Imada's model, only constant defect production is considered, so that the critical coupling effect of defects through annihilation and reaction was lost 11 . In Veshchunov's work, the phase separation analysis was done by assuming a quasi-stationary state 12 , and void superlattice was regarded as a consequence of spinodal decomposition of solid solutions in binary alloys. Therefore the theory cannot be applied to any unary systems. As a matter of fact, void superlattices have been widely observed in unary systems experimentally. The dynamic instability analysis in reaction-diffusion systems involves defect production, annihiliation and reactions, which captures the dynamic nature of defects, including SIAs, vacancies, their clusters and loops. However, it overlooks the thermodynamic origin of the void formation. A void is formed through the uphill diffusion and local accumulation of vacancies, which requires the description of the chemical potential gradient rather than the concentration gradient in conventional Fick's law. 
Without an appropriate thermodynamic consideration, a complete understanding of the selection mechanism of the defect microstructure cannot be achieved. In particular, the pattern selection by dynamic instability is very sensitive to the dynamic parameters, especially near the post-bifurcation regime, implying that distinctively different patterns may form in the same material system, which is inconsistent with experimental observations that the superlattice structure is unique in a given material. For voids in an elastically anisotropic matrix, the elastic interaction between voids could suggest a void distribution minimizing the total elastic strain energy at a given ratio of void radius R over superlattice parameter a L 18 . Theories along this line have been appealing as they can predict both the superlattice parameter and the symmetry. Recent 2D phase field simulations also demonstrate that indeed a bubble superlattice can form in an elastically anisotropic matrix 19 . However, it has difficulties in explaining the long-range ordering at the early, nucleation stage 9 . Also it cannot explain the formation of void superlattices in body-centered cubic tungsten, which is elastically isotropic 8 . As of today, a theory is yet to be developed that can couple thermodynamics and defect dynamics to successfully predict the experimentally observed superlattice parameters. In addition to anisotropic elasticity, another mechanism proposed to understand the superlattice symmetry is anisotropic defect diffusion, such as 1D [20][21][22] and 2D 23 diffusion of self-interstitial atoms (SIAs) and/or SIA clusters/loops. These mechanisms, especially the 1D SIA and SIA cluster diffusion, seem consistent with many experimental observations, with support from recent 2D phase field 24,25 and 3D objective kinetic Monte Carlo (KMC) simulations 26,27 . Also, recent atomistic calculations have shown that 1D diffusion is indeed the case for SIAs in many body-centered-cubic (bcc) metals 28 , for SIA clusters in bcc iron 29 and in face-centered-cubic (fcc) Ni 30 . However, atomic scale perspectives on how SIA diffusion affects superlattice nucleation are yet to be discerned. This work focuses on the above two open issues. For the first time, the nucleation process of a void superlattice is observed via atomistic simulations. Void superlattices form via spontaneous separation of a void phase from the matrix, analogous to phase separation in an immiscible regular solid solution, with the superlattice symmetry dictated by kinetic anisotropy such as 1D SIA diffusion. The phase separation is driven by thermodynamics and influenced by defect dynamics. The corresponding theoretical analysis leads to a quantitative prediction of the superlattice parameter based on materials properties and irradiation conditions, without any fitting parameters. The unprecedented predictivity is demonstrated using independent experiments in bcc Molybdenum (Mo) and tungsten (W). The theory is also capable of guiding new experiments in various materials and under different irradiation conditions. Methodology Thermodynamic and kinetic formulations. Our theory couples the rate theory for defect accumulation 31 and the Cahn-Hilliard approach for phase separation 13 . The evolution of the time- and spatially-dependent concentrations, c v and c i for vacancy and SIA, respectively, are given by ∂c v /∂t = ∇·[M v ∇(δF/δc v )] + P(1 − c v ) − k iv c i c v − k vs D v c v (1) and ∂c i /∂t = ∇·(D i ∇c i ) + P(1 − c v ) − k iv c i c v − k is D i c i (2); here the subscripts i, v, s denote SIA, vacancy and sink, respectively. P is the production rate (or dose rate). The term (1 − c v ) ensures mass conservation considering volumetric swelling.
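For orientation, a minimal mean-field sketch of the defect balance in eqs (1)-(2) is given below: the spatial (free-energy-driven) term is dropped, leaving only production, recombination and sink absorption, whose parameters are defined in the following paragraph. All numerical values are placeholders, not the Mo/W properties used in the paper.

```python
# Minimal mean-field sketch of eqs (1)-(2) without the spatial term; parameter values are placeholders.
from scipy.integrate import solve_ivp

P    = 1e-6    # dose rate (dpa/s)
k_iv = 1e2     # recombination rate coefficient (1/s per unit concentration)
k_vs = 1e13    # vacancy sink strength (1/m^2)
k_is = 1e13    # SIA sink strength (1/m^2)
D_v  = 1e-18   # vacancy diffusivity (m^2/s)
D_i  = 1e-12   # SIA diffusivity (m^2/s)

def rhs(t, y):
    c_v, c_i = y
    prod   = P * (1.0 - c_v)       # production, suppressed as swelling grows
    recomb = k_iv * c_i * c_v      # vacancy-SIA recombination
    dc_v = prod - recomb - k_vs * D_v * c_v   # vacancy balance
    dc_i = prod - recomb - k_is * D_i * c_i   # SIA balance
    return [dc_v, dc_i]

sol = solve_ivp(rhs, (0.0, 1e6), [0.0, 0.0], method="LSODA", rtol=1e-8)
print("c_v, c_i at end of run:", sol.y[:, -1])
```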
M and D denote the atomic mobility and diffusivity, with the subscripts i and v for SIA and vacancy, respectively; M = D/K B T, with K B being the Boltzmann constant. F is the total free energy of the system. k iv is the reaction rate for recombination, while k vs and k is are those for sink absorption, and k iv = 4πR iv (D i + D v )/Ω; here R iv is the instantaneous recombination radius and Ω is the atomic volume. Using Q = k iv c i + k vs D v + P, the vacancy evolution equation in eq. 1 can be reduced to ∂c v /∂t = ∇·[M v ∇(δF/δc v )] + P − Qc v (3). With this mathematical form the theory is now generalized to phase separation with the source (P) and the reaction (Qc v ) terms. The use of a free energy description in the spatially dependent diffusion term allows for the formation and migration of voids driven by the free energy. Effectively, voids are represented by a local concentration c v = 1, and they can precipitate out as a void phase from the matrix phase, in a way similar to phase separation in an immiscible regular solid solution. Such an approach has been widely used for void formation under irradiation using the phase field method 24 . Note that the reaction term can be nonlinear since both c i and k vs in Q are coupled with c v . The theoretical formulation provides a phenomenological description of the defect evolution during void superlattice formation at the continuum level. It will be utilized later to predict the superlattice parameter. The above equation includes three indispensable pieces: (i) thermodynamics driving the vacancy evolution, (ii) defect dynamics including production and annihilation, and (iii) non-linear coupling between the opposite types of defects through recombination. Note that in existing theories usually only one or two of the three critical pieces were considered. For example, (ii) and (iii) are considered in the dynamic instability analysis 8,9 , while (i) and (ii) are considered in the spinodal decomposition analysis 11,19 . In analogy to the binary regular solution formulation, the total free energy of the system can be written as a function of c v as F = ∫ [f(c v ) + (κ/2)|∇c v |²] dV (4); here f is the bulk free energy density of the binary mixture of vacancies and matrix metal atoms, given by the regular solution form f(c v ) = E mix c v (1 − c v ) + K B T[c v ln c v + (1 − c v )ln(1 − c v )]. E mix is the heat of mixing (the vacancy formation energy E v f here). κ is the coefficient of the gradient energy and is associated with the interfacial energy γ (of the interface between void and matrix) approximately through E mix . Atomic kinetic Monte Carlo modeling and simulations. In accordance with the above theoretical framework, a rigid-lattice AKMC model for a regular solution that incorporates 1D SIA diffusion for kinetic anisotropy is developed to explore the nucleation of the void superlattice. Here, vacancies and SIAs are denoted as types of elements occupying and diffusing on a prescribed lattice. Vacancies diffuse isotropically via first nearest neighbor (1NN) hopping, i.e., switching with a matrix atom. To represent 1D SIA diffusion, multiple types of SIAs are used, each diffusing along a prescribed direction. Taking 〈111〉 1D SIA diffusion in bcc metals as an example, four types of SIAs are used, each diffusing along one of the four 〈111〉 directions by performing 1NN hopping. The 1D SIA diffusion can be turned off for 3D isotropic diffusion. Moreover, preferential 1D diffusion can be simulated by allowing one type of SIA to transform into another type with a given barrier. In this work, simulations are mostly carried out using pure 1D diffusion for efficiency.
The major conclusion holds as long as 1D diffusion is dominant, in agreement with previous work 26 . Following the residence-time algorithm 33 , in each AKMC step a list of diffusing events is built based on the jumping rate of each event i, Γ i = ν 0 exp(−E a i /K B T); ν 0 is the attempt rate and E a i the activation barrier. A random number is drawn to select one event from the list to proceed in each KMC step. The time advancement is given by the inverse of the summation of all jump rates. A constant ν 0 of 1.0 × 10 12 /s is used to scale the AKMC time to physical time. The activation barrier for vacancy diffusion is calculated by E a = E 0 + (E f − E i )/2, and is updated once the local environment changes. Here E 0 is the diffusion barrier in the dilute concentration regime, and E f − E i is the energy difference between the final and the initial states, describing the dependence on the local environment. A constant activation barrier E 0 is used for the SIA because of its low concentration under the conditions for superlattice formation and its low migration barrier in the materials considered here. The total energy of the system is calculated by a pairwise model, E = (1/2) Σ i Σ j ε α e i e j , where ε α e i e j represents the bond energy between atom i (with the element type e i ) and atom j (e j ) within the α th nearest neighbor shell, with α being 1 or 2 here. To sufficiently represent the free energy model in eq. 4, two terms in ε α e i e j need to be non-zero. In the current model, only ε α 12 are defined, with e i /e j equaling 1 for the matrix and 2 for vacancy, respectively. For a bcc lattice, the bond energies can be derived from E mix and κ; here a 0 is the lattice parameter. After each KMC step, vacancies and SIAs located within a given distance (R iv ) from each other recombine (i.e., both are changed to matrix atoms) instantaneously. To capture sink absorption, a mean free number of jumps N s is used. Vacancies or SIAs that have jumped more than N s times are eliminated (changed to matrix atoms), corresponding to a sink strength of k s 2 = 2·dim/(N s r 0 2 ) 34 , with r 0 being the distance of each atomic jump and dim being the dimension of diffusion. The same N s applies to vacancies and SIAs, assuming neutral sinks. To describe defect production, Frenkel pairs are introduced randomly by assigning two randomly selected atoms to be a vacancy and an SIA, respectively. One Frenkel pair is introduced per time span t fp , corresponding to a dose rate of P = (t fp N) −1 , with N being the total number of atoms in the system. This way of introducing defects corresponds to the electron irradiation condition. The AKMC method is implemented in the SPPARKS code 35 . For visualization the Ovito software is used 36 . Periodic boundary condition (PBC) is used for all AKMC simulations in this work. Rigorous examination of the finite size effect has been carried out by reproducing the simulation results using cells of various sizes. Results Superlattice symmetry selection from AKMC simulations. To explore how 1D SIA diffusion affects the superlattice symmetry, the AKMC model is parameterized using the material properties of bcc Mo in Table 1. R iv is set to be the 3rd nearest neighbor distance, and N s to be 1000. Both 2D and 3D simulations are carried out. For 2D, 1D SIA diffusion along 〈10〉 is considered for a square (sq) matrix, and along 〈10〉 and 〈11〉 for a hexagonal (hex) matrix. The simulation cell size is 200 a 0 by 200 a 0 with 40000 atoms for the square matrix, and 200 a 0 by 120 a 0 with 48000 atoms for the hexagonal matrix, respectively. For 3D, bcc and fcc matrices are used.
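As an aside, the residence-time selection step described above can be sketched in a few lines of Python: build the event list with rates ν 0 exp(−E a /K B T), pick one event with probability proportional to its rate, and advance time by the inverse of the total rate. The barriers and labels used here are placeholders for illustration, not the calibrated Mo values of Table 1.

```python
# Minimal sketch of the residence-time (BKL) KMC step; barriers are illustrative placeholders.
import math
import random

NU0 = 1.0e12          # attempt frequency (1/s), as in the text
KB  = 8.617333e-5     # Boltzmann constant (eV/K)

def rate(ea_ev: float, temperature_k: float) -> float:
    return NU0 * math.exp(-ea_ev / (KB * temperature_k))

def kmc_step(events, temperature_k, rng=random):
    """events: list of (label, activation_barrier_eV). Returns (chosen_label, dt)."""
    rates = [rate(ea, temperature_k) for _, ea in events]
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    chosen = events[-1][0]
    for (label, _), w in zip(events, rates):
        acc += w
        if r <= acc:
            chosen = label
            break
    dt = 1.0 / total   # time advancement = inverse of the summed jump rates
    return chosen, dt

# toy event list: one vacancy jump and one SIA jump along its 1D direction
events = [("vacancy_jump", 1.30), ("sia_jump_111", 0.05)]
print(kmc_step(events, temperature_k=1173.0))
```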
For the bcc matrix, 1D SIA diffusion along the 〈100〉/〈110〉/〈111〉 directions is considered in three separate simulations, respectively. We note that for 〈100〉 and 〈110〉 1D diffusion in bcc, 2nd and 3rd nearest neighbor hopping needs to be involved. For the fcc matrix, 1D SIA diffusion along 〈110〉 is considered. The simulation cell size is 40 a 0 by 40 a 0 by 40 a 0 for both the bcc and fcc matrices, with 128000 and 256000 atoms, respectively. For all 2D and 3D simulations, void superlattices have been obtained with proper choices of irradiation conditions. The superlattices obtained from the simulations are summarized in Table 2, consistent with all previous experimental observations. The void alignment, or the most closely packed direction of voids, is found to follow the direction of 1D SIA diffusion once a superlattice forms. For instance, the most closely packed direction of voids is 〈111〉 for 〈111〉 1D SIA diffusion in a bcc matrix, giving a bcc void superlattice in 3D. Similarly, fcc and simple cubic (sc) void superlattices are observed in 3D simulations with 1D SIA diffusion along the 〈110〉 and 〈100〉 directions (see Fig. 1), and square and hexagonal superlattices in 2D simulations with square and hexagonal matrices, respectively. The above results cover a wide range of matrices including fcc, bcc, and 2D square and hexagonal. They confirm that 1D SIA diffusion can cause void alignment along the SIA diffusion direction. In 3D, such alignment can take place along several symmetric crystal orientations, e.g., 〈111〉 in bcc, resulting in void superlattice formation. This finding is consistent with previous theories 21 and previous simulations 24,26 . The superlattice symmetry is dictated by the direction of 1D SIA diffusion, against the widely accepted empirical rule of the coherency between the superlattice and the host matrix crystal lattice. The matrix lattice structure does not directly determine the structure of the void lattice, although it has an indirect effect by affecting the SIA diffusion direction. Experimentally, bcc void/bubble superlattices have been observed in various bcc metals including Mo, W, Nb, Fe and Ta 6,8,9 . In all these metals, the SIA has been predicted to perform 1D diffusion along 〈111〉 28 except for Fe, in which the SIA diffuses in 3D but SIA clusters primarily perform 1D diffusion along 〈111〉 29 . In bcc U-7Mo fuel, where 〈110〉 1D SIA diffusion was suggested 25 , fcc void superlattices have been reported 10 . In addition to experiments, the current AKMC results are also consistent with previous 3D objective KMC 26 and 2D phase field 24,25 simulations. Rate theory based instability analysis. The above AKMC simulations adopt production rates orders of magnitude higher than those in the previous experiments. The observed superlattice parameters are usually a few nanometers in the simulations, about one order of magnitude lower than those reported experimentally in bcc Mo 9 . This discrepancy can be resolved by a theoretical analysis of the effect of the irradiation conditions. The analysis starts with the Fourier form of eq. 3. In Fourier space, the mean field concentration is described by the k = 0 mode, and spatial variations by non-zero k (k is the wave number). The production term is non-zero only for k = 0. Considering a small perturbation with a wave number k, its growth rate is given by R(k) = −M v k 2 (f″ + κk 2 ) − Q. For Q = 0, this reduces to spinodal decomposition in immiscible alloys 13 ; when f″ < 0, there always exists a non-zero k with positive growth rate R(k).
When Q > 0, as in the case of irradiation or reaction, the onset of spontaneous phase separation requires R(k) = 0 and dR(k)/dk = 0, as shown by the green curve in Fig. 2. Accordingly, the critical concentration c v can be calculated using eq. 5. The critical wave length is given by λ = 2π(M v κ/Q) 1/4 (10). The appearance of Q in the denominator shows the strong coupling between phase separation and diffusion-reaction (defect dynamics) in selecting the void superlattice parameter. Therefore, it is critical to include both thermodynamics and defect dynamics in predicting the superlattice parameter a L . Once the critical concentration is reached, a slight increase in c v leads to a substantial increase in k and R(k), as shown in Fig. 2. The quick, exponential growth of the first k with positive R(k) will stabilize a characteristic length given approximately by eq. 10. This wave length corresponds to the inter-plane spacing of the most closely packed planes of voids, i.e., {110} for bcc crystals with 〈111〉 1D SIA diffusion, and thus λ = a L /√2. AKMC demonstration of instability. The theory predicts a spontaneous phase separation, i.e., separation of a void phase from the matrix, which determines the superlattice parameter depending on the defect dynamics. This is consistent with the conclusion in Woo et al. that "From the view of thermodynamics, void-lattice formation is a non-equilibrium phase transition in an open system" 21 . As a support to the theory, AKMC simulations are performed to directly observe the superlattice nucleation and formation process and to investigate the dependence of the superlattice parameter on radiation conditions such as temperature and dose rate. For these purposes, AKMC simulations are parameterized using the materials properties for both Mo and W as listed in Table 1. 1D SIA diffusion along the 〈111〉 directions in a bcc lattice is considered. The simulation cells are 80 a 0 by 80 a 0 by 80 a 0 in size with 1,024,000 atoms. Selected simulations have been repeated using 40 a 0 by 40 a 0 by 40 a 0 and 120 a 0 by 120 a 0 by 120 a 0 cells, with essentially the same results obtained for the superlattice parameter and structure, to exclude possible artificial effects from the PBC. The dose rates are varied by two orders of magnitude, being 0.98 and 98 dpa/s, respectively, to elucidate the dose rate effect. Because the AKMC simulations directly consider atomic hopping in both the time and spatial scales, realistic dose rates as in real experiments are not achievable for reasons of computational efficiency. The simulation temperature varies from 873 to 1473 K, with one simulation every 100 K. To demonstrate the spontaneous separation of a void phase from the matrix, the atomic configurations at various doses from an AKMC simulation are plotted in Fig. 3, along with the radial distribution function of vacancies, g(r). The simulation is done at 1173 K with a dose rate of 98 dpa/s using the properties of Mo. Here g(r) is the number density of vacancies at a distance r from a vacancy, averaged over all vacancies in the system. As shown in Fig. 3(a,b), before the superlattice nucleates, only one peak in g(r) exists at short range, denoting the formation of individual voids. Once the critical condition for spontaneous phase separation is reached, extra, periodic peaks emerge at long range, indicating the appearance of a wave length (see Fig. 3(a)). The nucleation of a superlattice is clear in the corresponding atomic configuration in Fig. 3(c). Once a wave length is selected, its peaks grow quickly in amplitude without evolving in wave length, as shown in Fig.
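The sketch below illustrates this instability analysis numerically, assuming the growth-rate form R(k) = −M k 2 (f″ + κk 2 ) − Q reconstructed above; the critical wavelength then follows from R(k) = 0 and dR/dk = 0. All parameter values are placeholders, not the Mo/W properties of Table 1.

```python
# Numerical sketch of the instability criterion; the growth-rate form and all values are assumptions.
import numpy as np

M     = 1.0e-22   # vacancy mobility (placeholder, consistent units assumed)
kappa = 1.0e-19   # gradient energy coefficient (placeholder)
Q     = 1.0e-3    # effective loss rate: recombination + sinks + production (placeholder)

def growth_rate(k, f2):
    """R(k) for a perturbation of wave number k, given f'' = f2 < 0."""
    return -M * k**2 * (f2 + kappa * k**2) - Q

# Critical condition: f''^2 = 4*kappa*Q/M, i.e. f''_c = -2*sqrt(kappa*Q/M)
f2_crit = -2.0 * np.sqrt(kappa * Q / M)
k_crit = (Q / (M * kappa)) ** 0.25
lam_crit = 2.0 * np.pi / k_crit          # eq. (10): lambda = 2*pi*(M*kappa/Q)^(1/4)
a_L = np.sqrt(2.0) * lam_crit            # bcc superlattice: lambda = a_L / sqrt(2)

print(f"R(k_crit) at criticality ~ {growth_rate(k_crit, f2_crit):.2e} (should be ~0)")
print(f"critical wavelength = {lam_crit:.3e}, superlattice parameter a_L = {a_L:.3e}")
```

Because Q enters eq. (10) with a negative power, any increase in the dose rate or sink strength shrinks the predicted superlattice parameter, which is the dose-rate trend examined in the following sections.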
3, which is typical for spontaneous phase separation. This indicates that voids are growing by absorbing mobile vacancies without coarsening, which is suppressed by the formation of a superlattice. Consequently, a L is independent of dose in the AKMC simulation. Such a process has been observed in Tantalum (Ta)+ irradiated Mo at 900 °C 37 , where a static void superlattice parameter of 46.0 nm was observed from 3.0 to 150 dpa, while the void size has kept increasing. We note that coarsening can still occur when the void lattices contain imperfections. The nucleation of void superlattice depends strongly on irradiation conditions such as temperature and dose rate. The effect of temperature can be observed in the simulation results at 1373 K and 98 dpa/s in bcc Mo. Compared to the case of 1173 K, at 1373 K there is not a clear nucleation stage, as shown in Fig. 4. In this case, individual voids form with weak alignment, as shown in Fig. 4(b,c). The alignment of voids improves as they grow larger, particularly after the critical condition for spontaneous phase separation has been reached, until a superlattice can be identified, Fig. 4(d,e). Accordingly, the g(r) curves ( Fig. 4(a)) do not clearly reflect the selection and stabilization of a wave length, different from the case of 1173 K. The change in the formation mechanism with increasing temperature is due to the increased importance of individual void nucleation and growth, which is stochastic and leads to less ordering of the superlattice. As a result, the void superlattices usually contain imperfections, such as vacant, sites and dislocations, as shown in Fig. 4(e). These defects have been widely observed in previous experiments (See pictures in ref. 9 ). It is expected that at even higher temperatures, individual void nucleation and growth become so dominant that no superlattice forms. Another notable effect is that at 1373 K, the dose needed for superlattice to form is substantially lower than that at 1173 K, due to the much stronger recombination at lower temperature. The reduction in dose rate has a similar effect as that exhibited by increasing temperature. Effectively, both of them drive the system closer towards equilibrium. Temperature and dose rate effects on wave length selection. The instability analysis above predicts strong dependence of void superlattice parameter on radiation conditions including temperature and dose rate. To validate that, eq. 10 is applied to bcc Mo and W, with the results compared to our AKMC simulations and previous experimental observations. To obtain Q it needs the transient SIA concentration and sink strength at the critical condition. Ideally, these can be obtained by solving the spatially dependent rate theory equations as used in the instability analysis 17 . A simpler estimate can be done by linearization of Q, i.e., solving eq. 1 assuming steady state and constant sink 31 for an analytical solution of c i (See Section I in the Supplemental Materials). The sink strength can be estimated based on the initial dislocation density and grain size in the samples. For comparison with experiments, the materials properties listed in Table 1 are used for Mo and W. Most of the parameters are from experiments except for k vs and R iv . Here a value of 1.42 × 10 13 /m 2 is used for k vs , corresponding to a dislocation density of 1.0 × 10 13 /m 2 with a capture radius of 5 nm given by the Wiedersich model 38 . A dose rate of 10 −6 /s typical for fission neutron irradiation condition 9 is used. 
The recombination radius R_iv has been found to be about 2.0 a_0 for bcc Mo 39 . The same value is used for W. The same parameters are used for comparison with the AKMC simulations except for the dose rates, recombination radius and sink strength. Two dose rates, 0.98 and 98 dpa/s, are used in the AKMC simulations. The recombination radius is set to the third nearest neighbor distance, and the sink strength is given by N_s = 1.0 × 10^5. The predicted superlattice parameters are plotted in Fig. 5 alongside the results from previous experiments and our AKMC simulations. At a given dose rate, for both W and Mo it is predicted that a_L initially increases with temperature due to increasing mobility, and then saturates due to increasing sink absorption. a_L is systematically larger in Mo than in W due to the higher vacancy mobility. These trends are consistent with experimental results from fission neutron irradiated Mo and W 40 . Notably, without using any fitting parameter in eq. 10, the predicted lattice parameters also agree well with experiments, considering the uncertainties in the irradiation conditions and the materials properties. Such good agreement indicates that the theory captures the nature of superlattice formation and is capable of quantitative prediction. The remaining discrepancies could be caused by several factors, including the uncertainty in materials properties and irradiation conditions. The initial dislocation densities are unknown in the experiments. The actual vacancy mobility can differ due to the presence of impurities and the effect of irradiation enhanced diffusion. In fact, at low temperatures, irradiation enhanced diffusion in displacement cascades can be dominant over thermal diffusion. In such a case, a_L will display a weak dependence on T, as observed in the experiments for W. The theoretical predictions are also in good agreement with the AKMC results, both qualitatively and quantitatively. a_L is predicted to increase with increasing T and decreasing P, as observed from the AKMC simulations. At the same temperature, a_L observed from AKMC is larger with a dose rate of 0.98 dpa/s than with 98 dpa/s. As shown in eq. 4, an increase in P enhances defect recombination, resulting in an increased Q and thus a smaller a_L. The AKMC results are systematically below the theoretical curves. Two primary reasons are responsible for this minor discrepancy. First, a mean-field distribution of individual vacancies is assumed in the theory, whereas in the simulation (and in reality) small vacancy clusters appear prior to superlattice formation, as seen in Fig. 3. Thus the effective vacancy mobility in the AKMC simulations is lower than that for individual vacancies, which is used in the theory, resulting in a smaller a_L from the AKMC simulations than from the theoretical prediction. This effect becomes stronger with increasing temperature or decreasing dose rate, when phase separation via void nucleation and growth becomes more important. The second is due to the periodic boundary condition, which allows for only discrete wave lengths. If the wave length predicted by eq. 10 is not compatible with the PBC, phase separation will be delayed until a compatible a_L smaller than the theoretical one emerges. For this reason, the lattice parameters from periodic AKMC simulations should always be smaller than the theoretical prediction. Irradiation conditions to form superlattice.
The theoretical analysis also predicts a low temperature boundary in the P-T diagram, which has been suggested previously by experimental data 9 . Following eqs 9 and 10, the low temperature boundary can be analytically solved for from the condition that no solution of c_v exists satisfying R(k) > 0 for any k. More rigorously, the predicted distance between nearest voids, √(3/2) λ_c, cannot be smaller than a limiting length; at that limit, 1D SIA diffusion gives the same recombination as 3D diffusion, and therefore there is no biased growth for aligned voids. This condition gives an analytical expression for the boundary (see Section II in the Supplemental Materials), in which D_v0 is the prefactor for vacancy diffusion and T_m the melting point. The low-T boundary thus established is shown in Fig. 6, with all experimental and simulation conditions located on the higher temperature side. It also gives a nearly linear dependence of ln(P) on T_m/T, with a slope of -E_v^m/(k_B T_m), as suggested in the literature 9 . The current analytical prediction does not suggest an exact high T boundary. In fact, what we see from the AKMC simulations is that with increasing temperature, voids gradually become less ordered due to void nucleation and growth (see Fig. 4), and coarsening becomes more active. In such cases, a superlattice may not be stabilized and identified. Another factor not considered here is the rotation of SIAs, which breaks 1D diffusion. It is expected that, with increasing temperature, SIA diffusion will undergo a transition from 1D to 3D 29 , so that there is no long-range ordering of voids at high temperatures. For both of the above reasons, superlattice formation gradually gives way to stochastic void nucleation and growth, without a clear temperature boundary. These two effects could also play a role at temperatures where vacancy emission from voids occurs 9 . Discussion Applicability of the theoretical model. The theory developed here contains no fitting parameters and is thus capable of quantitative prediction. The unprecedented predictivity is demonstrated by comparison to independent experiments and the present AKMC simulations. Its analytical form makes it convenient to apply to a wide range of materials and irradiation conditions, including temperature and dose rate, as shown in Fig. 5. Three important trends regarding the superlattice parameter are predicted: i) a_L increases with increasing temperature, ii) a_L decreases with increasing dose rate, iii) under the same irradiation condition, a_L is larger in materials with higher vacancy diffusivities. The first and the third predictions are validated by the measurements in neutron irradiated Mo and W 40 , and the second one is validated by the AKMC simulations. Indirect experimental support for the second prediction is that, in general, at the same temperature the void superlattice parameters produced by ion irradiation are usually smaller than those produced by neutron irradiation 9 , since the former is usually associated with much higher dose rates. Given the uncertainties in the experiments, the quantitative comparison with experiments can be regarded as very good as well. The consistency between theory and experiments, alongside the direct support from the AKMC simulations, indicates that the spontaneous phase separation based theory captures the nature of void superlattice formation. It can thus be utilized to tailor desired superlattices in experiments, e.g., by adjusting irradiation conditions and materials properties 30 . The proposed theory is for void superlattices.
It may be extended to explain gas bubble superlattice formation. One important effect of gas incorporation is that gas atoms occupy vacant sites, effectively reducing the vacancy diffusivity. This will lead to a much smaller superlattice parameter a_L according to eq. 10, consistent with previous experimental observations in which the bubble superlattice parameters are about one order of magnitude smaller than those for voids 9 . It is expected that bubble lattices will exhibit similar trends with respect to temperature and dose rate. Unfortunately, sufficient experimental data do not yet exist to establish the dependence of a_L on P and T. In our recent experiments on He-implanted bcc Mo at 573 K, the bubble superlattice parameter was measured to be 4.0 nm at a dose rate of 1 × 10^-3 dpa/s, and 4.8 nm at 1.2 × 10^-4 dpa/s. Still, more data need to be collected to establish a trend, given the uncertainties in the experiments. Because of the high activation barriers for substitutional gas atoms, irradiation enhanced diffusion becomes important for gas atom diffusion. Consequently, a weaker dependence of a_L on T is expected for bubble superlattices. These predictions are subject to future validation. Limitation. Without considering materials anisotropy, the thermodynamic instability itself does not predict a superlattice symmetry, because in eq. 10 only a wave length rather than a wave vector is predicted. In this work we rely on AKMC observations to predict the superlattice symmetry. The results seem consistent with the "shadow effect" proposed in the literature 20 . It has been shown that voids aligned along the 1D SIA diffusion directions receive a lower annihilating SIA flux than unaligned ones 21 , resulting in superlattices in 3D. However, some AKMC simulations showed that such alignment may not be necessary during superlattice nucleation but may appear after superlattice formation. This calls for further investigation into superlattice symmetry development. Actually, the absence of anisotropy in the matrix material makes the theory general for many materials, although it is demonstrated primarily using bcc metals in this work. In isotropic matrices, the voids will be randomly distributed, with a first nearest neighbor distance of a_L given by eq. 10; in such a case coarsening is expected to be active and no identifiable void superlattice forms. In anisotropic matrices, ordering of voids can appear. In the case of 1D SIA diffusion, when voids are aligned along the SIA diffusion directions, the stabilization of a characteristic length during spontaneous phase separation leads to superlattice formation, as shown in Fig. 3. The ordering will be weakened when phase separation via void nucleation and growth becomes dominant, as in the AKMC simulations shown in Fig. 4. We note that 1D SIA diffusion may not be the only factor for void ordering. For instance, the void ordering in bcc Fe and fcc metals is attributed to 1D diffusion of SIA clusters (such as loops) 29,41 . 1D loop motion has been employed to explain void superlattices in bcc and fcc metals in general 22 . Other factors, like 2D SIA/SIA cluster diffusion 23 and elastic anisotropy 18 , can also cause void ordering in certain ways. When elasticity is of concern, the present theory needs to be modified to include void elastic interactions in the free energy formulation 19 . The present theoretical analysis assumes a mean-field distribution of vacancies and SIAs before the critical point for spontaneous phase separation.
In the rate theory description, vacancy- and SIA-type defects are described by concentration fields. We note that, with spatial dependence and the free energy description, clusters of defects can also be represented as local variations in the concentrations. For instance, a void can be represented by a region with c_v = 1, taking advantage of the fact that vacancies and voids are usually coherent with the matrix. Such regions can be mobile, driven by the total free energy as in eq. 3, similar to the classic phase field description 19,24 . Before the instability occurs, these clusters are unstable, i.e., they correspond to waves with negative growth factors, and they may be annihilated by mutual recombination. Indeed, in the AKMC simulations, small vacancy clusters or voids are constantly observed (as shown in Fig. 4(b,c)) to form and vanish before superlattices nucleate. However, such a description may not be accurate for SIA clusters. Depending on their size, SIA clusters may take various shapes and configurations. An accurate description requires distinguishing between them, as done in previous numerical approaches 17 or in the more complicated cluster dynamics description. This work focuses on the instability phenomenon in the vacancy concentration field. At the critical condition for instability, the SIA concentration is usually extremely low, e.g., below 10^-6. Under this condition, a simplification is made here by using the concentration variable c_i to describe SIA-type defects. Given their role in recombining with vacancies and absorbing SIAs, plus their possible anisotropic migration, the effects of SIA clusters on the superlattice parameter and symmetry 22 warrant further investigation. Despite these simplifications, we expect the present theoretical analysis to hold for various irradiation conditions, as indicated by the good agreement between the theoretical predictions and previous experiments. The strong temperature dependence of the superlattice parameter is due to its direct correlation with the vacancy mobility (diffusivity). There are other factors that may affect the temperature dependence which are not included in the current theoretical analysis. In reality, both the recombination radius and the surface energy are temperature dependent. Irradiation enhanced diffusion in displacement cascades may weaken the temperature dependence, particularly under ion irradiation at low temperatures. In view of these factors, the theory may not accurately reflect the temperature dependence of the superlattice parameter observed in the experiments. Better agreement has been achieved between theory and AKMC, in which both of these factors are absent. The last piece of discussion centers on the AKMC method used in this work. To demonstrate the instability phenomenon predicted by the rate theory, an AKMC method consistent with the rate theory description and capable of reaching high radiation doses is needed. Moreover, prescribed SIA diffusion properties are desired to show the correlation between SIA diffusion and superlattice structure selection. For these purposes, we followed the AKMC method described in ref. 33 , adding a description of anisotropic SIA diffusion. The AKMC simulations concern Frenkel pair production, corresponding to electron irradiation. With this method, vacancy clusters and voids can form and migrate automatically via diffusion and clustering of individual vacancies. The clustering of SIAs is ignored because of the extremely low SIA concentration at the condition for superlattice formation.
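To make the simulation ingredients just described more concrete, the following is a minimal sketch of a residence-time (KMC) step that combines Frenkel-pair production at a prescribed rate with thermally activated vacancy hops on a bcc lattice. The attempt frequency, barrier callable, production rate and lattice bookkeeping are all placeholder choices for illustration; this is not the parameterization of ref. 33 or of this work, and the barrier function stands in for the broken-bond evaluation described next.

import math
import random

KB = 8.617e-5           # Boltzmann constant, eV/K
NU0 = 1.0e13            # attempt frequency, 1/s (placeholder value)
# First-nearest-neighbour hop vectors of a bcc lattice, in units of a0/2.
BCC_NN = [(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

def kmc_step(vacancies, barrier, production_rate, temperature, rng=random):
    """One residence-time step: pick a Frenkel-pair insertion or a single vacancy hop.

    vacancies       -- list of (x, y, z) lattice sites currently holding a vacancy
    barrier         -- callable (site, hop_vector) -> migration barrier in eV,
                       e.g. obtained from a broken-bond count of the local environment
    production_rate -- Frenkel-pair production rate for the whole cell, 1/s
    """
    events = [("produce", None, production_rate)]
    for i, site in enumerate(vacancies):
        for d in BCC_NN:
            rate = NU0 * math.exp(-barrier(site, d) / (KB * temperature))
            events.append(("hop", (i, d), rate))

    total = sum(rate for _, _, rate in events)
    dt = -math.log(rng.random()) / total      # exponentially distributed residence time

    threshold = rng.random() * total          # choose one event proportionally to its rate
    acc = 0.0
    for kind, payload, rate in events:
        acc += rate
        if acc >= threshold:
            return kind, payload, dt
    return events[-1][0], events[-1][1], dt

# Example call with a single vacancy and a constant 1.3 eV barrier (placeholder value):
print(kmc_step([(0, 0, 0)], lambda site, d: 1.3, production_rate=1.0e4, temperature=1173.0))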
The diffusion barrier of a vacancy is calculated from the local environment using a broken-bond model within the 2nd nearest neighbor distance. This model is sufficient to represent the two critical materials properties, the surface energy and the vacancy formation energy, consistently with the free energy description in the rate theory equations. Moreover, it provides the computational efficiency needed to reach high radiation doses with a large number of defects in the simulation domain. Therefore, it serves well the purpose of demonstrating the instability phenomenon predicted by the rate theory model. However, it may not be an ideal choice as a standalone tool for studying defect evolution, because of the simplified description of interatomic interactions and atomic diffusion. A more realistic description can be achieved by using an empirical potential for the interatomic interaction and advanced barrier searching methods for atomic diffusion 42,43 , in situations where computational efficiency is not a major concern. Conclusions To conclude, atomic scale AKMC simulations confirm that void superlattices can form in various crystals with 1D SIA diffusion. The superlattice forms via spontaneous phase separation, with the characteristic length dictated by both vacancy thermodynamics and defect dynamics, and the lattice symmetry by 1D SIA diffusion. Assisted by the atomistic simulations, a new theory is developed to predict the superlattice parameter. The excellent agreement in both trends and magnitude between theory and independent experiments demonstrates that the theory is capable of interpreting the mechanisms and making quantitative predictions, without using any fitting parameter. Further guidance on the experimental conditions for superlattice formation is also suggested by our theory. The mathematical form of the theory implies that it may have general application in cases involving spontaneous phase transitions and anisotropic diffusion-reaction processes.
Hematological and biochemical reference intervals of wild-caught and inhouse adult Indian rhesus macaques (Macaca mulatta) Background Nonhuman primates are used for research purposes such as studying diseases and drug discovery and development programs. Various clinical pathology parameters are used as biomarkers of disease conditions in biomedical research. Detailed reports of these parameters are not available for Indian-origin rhesus macaques. To meet the increasing need for information, we conducted this study on 121 adult Indian rhesus macaques (57 wild-sourced and 64 inhouse animals, aged 3–7 years). A total of 18 hematology and 18 biochemistry parameters were evaluated and reported in this study. Data from these parameters were statistically evaluated for significance amongst inhouse and wild-born animals and for differences amongst sexes. The reference range was calculated according to C28-A3 guidelines for reporting reference intervals of clinical laboratory parameters. Results Source of the animals and sex appeared to have statistically significant effects on reference values and range. Wild-born animals reported higher WBC, platelets, neutrophils, RBC, hemoglobin, HCT, MCV, and total protein values in comparison to inhouse monkeys. Sex-based differences were observed for parameters such as RBCs, hemoglobin, HCT, creatinine, calcium, phosphorus, albumin, and total protein amongst others. Conclusions Through this study, we have established a comprehensive data set of reference values and intervals for certain hematological and biochemical parameters which will help researchers in planning, conducting, and interpreting various aspects of biomedical research employing Indian-origin rhesus monkeys. Background Non-human primates (NHPs) and humans share similarities in physiology and behavior. NHPs are an important animal model in the drug development process of pharmaceuticals and biologics. [1]. Various species of primates have been used in scientific studies including various apes, New World, and Old-World monkeys [2]. Out of all NHP species employed in research, macaques are the most widely used primates in basic and translational biomedical research. Macaques and humans share about 92% of genetic makeup as compared to the 64% similarity between humans and rodents, the latter being the most commonly used species in drug discovery and development [3]. Two species groups of the genus Macaca, the M. fascicularis or the cynomolgus monkeys and the M. mulatta or the rhesus monkey find extensive use in preclinical testing of various vaccines, monoclonal antibodies, and pharmaceuticals [3,4]. The Indian-origin rhesus monkeys (Macaca mulatta) are a principal animal model for vaccine development and the study of HIV/AIDS pathogenesis [5]. They have been used as standard animal models for studying aging, neuroscience, and immunology. Benefits such as the discovery of the rhesus factor, development of certain life-saving vaccines like vaccines for rabies, polio, and smallpox, and understanding of embryonic stem cell propagation would not have been possible without the use of rhesus monkeys [5,6]. Pharmaceutical and academic research centers in India rely heavily on the use of Indian-origin rhesus monkeys for toxicological studies as the primary nonrodent species of regulatory importance for the development of biologics. Due to their increased use in biomedical research, it is necessary to understand the effect of captivity on the hematological and biochemical parameters of the rhesus monkeys. 
Historical reference values of clinical pathological parameters of Indian rhesus monkeys are scarcely available. Particularities such as methods of data collection, age, source of animals, methods of restraint, or environmental conditions are usually unclear or unavailable in most reports. Reference clinical pathology values are essential in the health monitoring of colonies, screening for healthy animals before employment in safety pharmacology and toxicology studies, and interpretation of laboratory data. Previous studies have reported variance in normal hematological and biochemical parameters due to differences in environmental conditions, confinement, and chemical restraint. Anaesthetization through ketamine hydrochloride is a common practice in the restraining of nonhuman primates. Published reports have described the effects of ketamine anesthesia on the hematological and biochemical parameters of rhesus monkeys [4]. In this study, clinical laboratory data has been obtained under a definite experimental set-up such as indoor housing, standard environmental and husbandry conditions, and acclimatized physical restraint procedures without anesthesia. Despite their widespread use in biomedical research, few reports are available that compare and present differences in wild-caught and inhouse Indian rhesus monkeys. Studies have recommended that investigators consider the origin and history of the rhesus monkeys before they are evaluated for experimental purposes [6]. In the present study, we have aimed to prepare and present, accurate and statistically evaluated hematological and biochemical values of acclimatized and non-anesthetized individually-housed wild-caught and inhouse Indian rhesus monkeys. Reference values for hematological parameters Different reference values for hematological parameters are reported in Table 1. Statistical analysis of reference values revealed varying degrees of statistical significance between inhouse and wild-sourced animals. Parameters such as WBC, RBC, HGB, HCT, MCH, MCHC, PLT, NEU, N%, L%, MONO, M%, EOS, and E% in males and WBC, HCT, MCV, MCH, MCHC, PLT, NEU, N%, L%. M%, E%, and EOS in females of wild-sourced animals were found to have a statistically significant difference in reference values when compared to inhouse animals. LYMP, BASO, and B% were the only comparable parameters among these animals. Markedly higher values of total leukocyte count along with increased NEU, N%, EOS, and platelet values and decreased L% and M% values were noticed in wild-sourced animals when compared to inhouse animals. The effect of sex on the hematological parameters of both inhouse and wild-sourced animals was minimal and limited to a few parameters. A statistically significant decrease was noted in the percent hematocrit of inhouse females and the mean corpuscular hemoglobin concentration of wild-sourced females when compared to inhouse and wild-sourced males respectively. Marginally decreased hemoglobin concentration in females of both inhouse and wild-sourced animals was observed when compared to respective male animals. Other parameters did not display any statistically significant difference between the sexes. Reference values for biochemical parameters Different reference values for biochemical parameters are reported in Table 2. Similar to hematology, statistical analysis of biochemical parameters reference values revealed varying degrees of statistical significance between inhouse and wild-sourced animals. 
Parameters such as AST, ALT, GGT, TP, ALB, GLB, CREA, K+, and Ca+ in males and TCHO, GGT, TP, ALB, GLB, CREA, and K+ in females of wild-sourced animals were found to have a statistically significant difference in reference values when compared to inhouse animals. Important changes like a decrease in serum GGT, albumin, and creatinine and an increase in total protein, globulin, and potassium values were noted in wild-sourced animals when compared to inhouse animals. Another notable deviation found was a significant decrease in serum AST and calcium values of wild-sourced male animals when compared to inhouse males. The effect of sex was observed in a few biochemistry parameters in both inhouse and wild-sourced animals. A marked decrease was observed in serum creatinine and calcium levels of inhouse females when compared to inhouse males.
(Table 1: Reference values for hematological parameters of Indian rhesus monkeys aged 3-7 years. Data are presented as mean ± SD; n = total number of samples; p < 0.05 is considered statistically significant; * indicates varying degrees of statistical significance between inhouse and wild-sourced animals; $ indicates varying degrees of statistical significance between male and female animals.)
Total protein and albumin were also found to have marginally lower values for inhouse females when compared to inhouse males. Wild-sourced female animals showed statistically significant changes such as an increase in serum triglyceride levels and a decrease in serum phosphorus levels when compared to wild-sourced male animals. The remaining parameters displayed comparable results between sexes. Range of reference intervals The range of reference intervals for hematological and biochemical parameters was calculated following the C28-A3 guideline. Reference intervals were compiled by first applying Grubbs' test (alpha = 0.05) to identify outliers, followed by estimation of the 2.5th percentile (lower limit) and 97.5th percentile (upper limit). The calculated range of reference intervals, with the number of samples, for hematology and biochemistry parameters is reported in Tables 3 and 4 respectively. Discussion In the present study, we have reported standard reference values and ranges for various hematological and biochemical parameters of inhouse and wild-sourced adult (3-7 years old) Indian rhesus monkeys. Previously, different studies have reported reference values for hematology and biochemistry parameters for rhesus monkeys [3,4,7,8]. It is known that these reference values are affected by various factors such as age, sex, fasting, sedation, or methods of restraint [3]. Some studies have reported data for either purpose-bred or wild-caught animals of different species [9,10]. However, no reports are available that compare and analyze reference values for hematological and biochemical parameters in inhouse and wild-sourced Indian rhesus monkeys. Due to the increased use of rhesus monkeys in biomedical research, establishing thorough reference ranges of essential hematology and biochemistry parameters becomes essential for understanding health, biological variations, effects of drugs, and data interpretation. Hence, in this study, we have prepared, analyzed, and reported reference values and ranges for different hematological and biochemical parameters for adult (3-7 years of age) inhouse and wild-sourced Indian rhesus monkeys.
Comparison between inhouse and wild-sourced animals Stark differences were noticed in certain hematological parameters between inhouse and wild-sourced animals in the present study. Parameters such as total leukocyte counts, platelets, and certain differential leukocyte count parameters like absolute and percent neutrophils and eosinophils were noticed to be statistically higher in wild-sourced animals than in inhouse animals. An altered neutrophil-lymphocyte ratio was observed in wild-sourced animals.
(Table 3: Range of reference intervals for hematological parameters of Indian rhesus monkeys aged 3-7 years. Numbers in parentheses indicate the sample size employed in calculating the reference range after omission of outliers at 5% significance; n = total number of animals.)
Neutrophils and lymphocytes form a large part of the total number of leukocytes in the blood [11]. It has been reported that sudden episodes of excitement or fright can provoke physiological leukocytosis in macaques, wherein the leukocytes shift from the marginal pool to the circulating pool within a short time, causing a marked increase in total leukocyte counts [11,12]. This reaction is common for animals that are untrained and/or unanesthetized, such as the wild-caught animals in the present study. Higher levels of leukocytes are also a result of higher levels of circulating cortisol, a characteristic feature of capture-induced stress [12]. Lower or normalized levels of total leukocyte counts in inhouse animals can hence be attributed to increased adaptation to handling and captivity and, subsequently, to decreased stress and cortisol levels after handling or restraint. Additionally, notably higher absolute and percent leukocyte counts can be a result of exposure to various microbes in their natural environment. The increased leukocyte and platelet counts observed in the present study are similar to those observed in a study conducted on captive vervet monkeys [12]. The authors suggest that the higher platelet counts in wild-caught animals in the present study could be attributed to capture-induced stress and subclinical infections. Acute mental stress has been observed to cause a significant increase in platelet counts in humans [13]. Higher values were observed for RBC, hemoglobin, and HCT parameters in this study compared to a different study conducted on the same species [14]. Wild-sourced male animals had the highest RBC counts, while inhouse male animals had the highest hemoglobin concentrations among all animals in the present study. Additionally, increased HCT and MCV and decreased MCH and MCHC values were found in both sexes of wild-sourced animals. These parameters remained comparatively stable in inhouse animals. A few biochemistry parameters showed marked differences in reference values between inhouse and wild-sourced animals. Liver health biomarkers such as AST, ALT, GGT, and albumin were statistically lower in wild-sourced than in inhouse animals. Serum creatinine, and serum calcium as a marker of bone health, were found at lower levels in wild-sourced animals when compared to inhouse animals. Lower serum creatinine is indicative of good kidney function and hepatoprotection. Slightly elevated levels of serum potassium were found in wild-sourced animals when compared to inhouse animals. Significantly increased total protein values were also noticed for wild-sourced animals.
These significant differences in hematological and biochemical parameters of wild-sourced and inhouse animals can be attributed to several different factors such as type of food and feeding behavior of the animals, availability of food resources in the surroundings, physical attributes, inherent pathogenic infections, social status amongst large groups, environmental and living conditions in the wild and geographical location among others [6,[15][16][17]. Although marked differences exist between wildsourced and inhouse animals, these animals could be used for experimental purposes if the hematology and biochemistry results fall under the normal reference range established for respective sources. Nonclinical safety evaluation studies warrant background data of animals before they are employed in specific studies. In such cases, animals with a mixed source of origins can be used for safety evaluation if source-specific historical data are available as presented in this study. Sex-based differences Statistically significant differences amongst sexes were observed randomly in this study. Marginally lower hemoglobin was observed in inhouse and wild-caught females and lower HCT concentration was observed in inhouse females when compared to male animals of respective sourced animals. This difference can be owed to menstrual blood loss in females. Previously reported data from Matsuzawa 1993 showed marginally lower RBC counts in captive female rhesus monkeys, while hemoglobin concentration and hematocrit were comparable amongst sexes. Significant sex-based differences in erythrocyte count, hemoglobin concentration, and HCT have been reported for cynomolgus monkeys [18,19], squirrel monkeys [9], and Chinese-origin rhesus macaques [20]. These findings can be correlated to gender-based differences in hematological parameters observed in humans [9,21]. It is reported that lower RBC counts in female animals are a result of the inhibitory influence of estrogen on erythropoiesis. Statistically significant higher values of hemoglobin, HCT, and MCHC parameters in male animals in this study can be attributed to the production of male sex hormones and bigger muscle mass that require greater amounts of oxygen [12,22]. Sex-based differences of statistical importance were observed for creatinine, calcium, and inorganic phosphorus parameters. Lower levels of serum calcium and creatinine in inhouse females and lower serum phosphorus levels in wild-sourced females were of significance when compared to male animals of respective sources. Other studies have previously reported minor differences in mean serum calcium and phosphorus levels amongst rhesus monkeys [9], while others have reported nonsignificance amongst sexes [20]. Similar to our study, a study conducted on Chinese-origin rhesus monkeys has also reported a statistically significant difference in serum total protein, albumin, and creatinine values when compared amongst sexes. Sex-based differences in parameters of wild-caught macaques could be a result of diet differences among males and females of varying societal hierarchies in the wild. Varying amounts of protein intake can additionally affect biological markers concentration in the blood. Inhouse animals are fed with standard monkey feed that is nutrient-balanced, and thus, fewer variations are observed in serum markers on inhouse animals. Conclusions In conclusion, we have established baseline values of hematological and biochemical parameters with definite experimental conditions. 
The reference values presented in the present study might be representative of adult Indian rhesus monkeys housed under conditions identical to those in safety evaluation studies, and would therefore serve as the basis for animal selection and safety/toxicology data interpretation. Further comprehensive evaluations are required that employ a large number of animals of varying age groups to prepare a thorough historical data range that will prove extremely useful in safety evaluation. Animals Results in the present study were obtained from 57 wild-sourced animals (42 males and 15 females) and 64 inhouse animals (42 males and 22 females) aged 3-7 years, housed at the Primate Research Facility of Zydus Lifesciences Limited at Zydus Research Centre in Ahmedabad, India. Wild-caught monkeys screened for the absence of Mycobacterium tuberculosis were obtained from a CPCSEA-known vendor under an official permit from the Ministry of Environment and Forest, Government of India (GoI). These animals were quarantined for a period of 6 weeks upon arrival at the Primate Research Facility, Zydus Research Centre, Ahmedabad. All animals were considered adults based on the age classification standards mapping humans to macaques (dental scale method) [23]. Animals were housed individually in stainless-steel apartment-type cages with enrichment items. The environment was controlled to maintain a temperature of 18-29 °C, a 12:12 h light-dark cycle, and 15 air changes per hour. Animals were fed once daily with seasonal fruits and/or vegetables and a commercial NHP maintenance diet (6029-extrudate, Maintenance diet for nonhuman primates, Altromin Spezialfutter GmbH & Co. KG, Germany). Potable drinking water (reverse osmosis followed by UV treatment) was provided ad libitum. Facility veterinarians regularly examined all animals and monitored them for the occurrence of diseases and changes in normal behavior. Blood sampling and analysis Animals were fasted overnight. Blood was withdrawn from the cephalic or saphenous veins of non-anesthetized monkeys restrained in squeeze-back mechanism cages by trained personnel under the supervision of facility veterinarians. For hematology analysis, blood was collected in ready-to-use vacutainers containing K2-EDTA. Hematology parameters were analyzed using an Advia 2120i hematology analyzer (Siemens Healthineers, USA). For serum biochemistry analysis, blood was collected in gel + clot activator tubes. Blood was allowed to clot for at least 30 min at room temperature before centrifugation at 4000 rpm for 10 min at 24 °C to obtain serum. Serum biochemistry parameters were analyzed using a Cobas c311 analyzer (Roche Diagnostics, Switzerland). Blood samples were analyzed at the Clinical Pathology Laboratory (CPL-II); the parameters analyzed are listed in Table 5. Ethics statement and accreditations The animal facility is registered with The Committee for the Purpose of Control and Supervision of Experiments on Animals (CPCSEA), a statutory committee under the Ministry of Fisheries, Animal Husbandry and Dairying (MoFAH&D), GoI (Facility Registration Number: 77/PO/RcBi/SL/99/CPCSEA). Additionally, the test facility is accredited with a GLP certificate from the National GLP Compliance Monitoring Authority (NGCMA), GoI, for the conduct of toxicity studies, and by AAALAC International for animal ethics. CPL-II is accredited by the National Accreditation Board for Testing and Calibration Laboratories (NABL), GoI, and NGCMA, GoI.
The laboratory has an established inhouse quality control program as well as an external quality assessment program with the College of American Pathologists (CAP). Compilation of reference intervals and statistical analysis Reference intervals for hematological and biochemical parameters have been developed according to the C28-A3 guideline for reporting reference intervals of clinical laboratory parameters [24,25]. Reference intervals have been calculated by detecting outliers, performing normality tests, and employing parametric and nonparametric methods. The distribution of results of the reference population has been estimated at the 2.5th percentile (lower limit) and 97.5th percentile (upper limit). The results of hematology and biochemistry parameters were analyzed using a two-tailed Student's t-test or Mann-Whitney U test for calculating reference values (GraphPad Prism Software, Version 9.1.1(225)). Statistical analysis was performed to detect significance between individual parameters of inhouse and wild-caught animals and to analyze differences between sexes. Data are presented as mean ± SD. p-value less than 0.05 indicated statistical significance.
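As a rough illustration of the reference-interval procedure described above (outlier screening followed by nonparametric 2.5th/97.5th percentile limits), the sketch below uses NumPy/SciPy. The iterative Grubbs screen, the alpha level, and the example hemoglobin values are placeholder choices for illustration only and do not reproduce the study's full pipeline, which also involved normality testing and parametric estimation where appropriate.

import numpy as np
from scipy import stats

def grubbs_prune(values, alpha=0.05):
    # Iteratively remove single extreme outliers using the two-sided Grubbs test.
    x = np.asarray(values, dtype=float)
    while x.size > 2:
        mean, sd = x.mean(), x.std(ddof=1)
        idx = int(np.argmax(np.abs(x - mean)))
        g = abs(x[idx] - mean) / sd
        n = x.size
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
        if g > g_crit:
            x = np.delete(x, idx)
        else:
            break
    return x

def reference_interval(values, alpha=0.05):
    # 2.5th / 97.5th percentile reference limits after outlier pruning.
    clean = grubbs_prune(values, alpha)
    lower, upper = np.percentile(clean, [2.5, 97.5])
    return lower, upper, clean.size

# Example with made-up hemoglobin values (g/dL), the last one an obvious outlier:
hgb = [11.8, 12.4, 13.1, 12.9, 12.2, 13.6, 12.7, 11.9, 13.0, 18.9]
print(reference_interval(hgb))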
Gravastars in $f(R,\mathcal{T})$ gravity We propose a unique stellar model under $f(R,\mathcal{T})$ gravity by using the conjecture of Mazur-Mottola [P. Mazur and E. Mottola, Report number: LA-UR-01-5067., P. Mazur and E. Mottola, Proc. Natl. Acad. Sci. USA 101, 9545 (2004)], which is known as a gravastar and is a viable alternative to the black hole as available in the literature. This gravastar is described by three different regions, viz., (I) the interior core region, (II) the intermediate thin shell, and (III) the exterior spherical region. The pressure within the interior region is equal to the constant negative matter density, which provides a repulsive force over the thin spherical shell. This thin shell is assumed to be formed by a fluid of ultrarelativistic plasma, and its pressure, which is directly proportional to the matter-energy density according to Zel'dovich's conjecture of stiff fluid [Y.B. Zel'dovich, Mon. Not. R. Astron. Soc. 160, 1 (1972)], counterbalances the repulsive force exerted by the interior core region. The exterior spherical region is completely vacuum and assumed to be de Sitter spacetime which can be described by the Schwarzschild solution. Under this specification we find a set of exact and singularity-free solutions of the collapsing star which present several other physically valid features within the framework of alternative gravity. I. INTRODUCTION Mazur and Mottola [1,2] first proposed a model considering the gravitationally vacuum star (gravastar) as an alternative to the end state of gravitational collapse, i.e., the black hole. They generated a new type of solution by extending the idea of Bose-Einstein condensation to the construction of the gravastar as a cold, dark, and compact object with an interior de Sitter condensate phase. The scenario of this gravastar can be envisaged as follows: the interior is surrounded by a thin shell of ultrarelativistic matter, whereas the exterior region is completely vacuum, and hence the Schwarzschild spacetime can be taken to describe the outside of the system. The shell is assumed to be very thin, with a finite width in the range r_1 < r < r_2, where r_1 ≡ D and r_2 ≡ D + ǫ are respectively the interior and exterior radii of the gravastar under consideration. Therefore, we can divide the entire gravastar system into three specific segments based on the equation of state (EOS) as follows: (I) Interior (0 ≤ r < r_1): p = −ρ, (II) Shell (r_1 ≤ r ≤ r_2): p = +ρ, and (III) Exterior (r_2 < r): p = ρ = 0. We note that, in connection with the gravastar, many works are available in the literature addressing different mathematical as well as physical issues. However, these works have mainly been carried out in the framework of Einstein's general relativity [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Though it is well known that Einstein's general relativity is a unique tool for uncovering many hidden mysteries of Nature, observational evidence of the accelerating universe along with the existence of dark matter has imposed a theoretical challenge to this theory [20][21][22][23][24][25][26].
Therefore, several alternative theories have been proposed successively amongst which f (R) gravity, f (T) gravity, and f (R, T ) gravity have received more attention. In the present project our motivation is to study the gravastar under one of the alternative gravity theories, namely f (R, T ) gravity [27] and to observe different physical features of the object -their nontriviality as well as triviality. Actually our previously performed successful works on the initial phases of compact stars under alternative gravity [28,29] motivate us to exploit the alternative formalism to the case of the gravastar, a viable alternative to the ultimate stellar phase of a black hole. It has been argued that among all other modified grav-ity theories the f (R, T ) theory of gravity can be considered as a useful formulation which is based on the nonminimally curvature matter coupling. In the f (R, T ) theory of gravity [27] the gravitational Lagrangian of the standard Einstein-Hilbert action is defined by an arbitrary function of the Ricci scalar R and the trace of the energy-momentum tensor T . One can note that such a dependence on T may come from the presence of an imperfect fluid or from the consideration of quantum effects. The application of f (R, T ) gravity theory to different cosmological [30][31][32][33][34][35][36][37][38][39][40][41][42] realm can be noted in the literature. Among several astrophysical applications it is worthy of mentioning the Refs. [43][44][45][46][47][48][49][50][51][52]. In their work [43] Sharif et al. have studied the stability of collapsing spherical body of an isotropic fluid distribution considering the nonstatic spherically symmetric line element. A perturbation scheme has been used to find the collapse equation and the condition on the adiabatic index has been constructed for Newtonian and post-Newtonian eras for addressing instability problem by Noureen et al. [44] whereas in another work [45] Noureen et al. have investigated the range of instability under the f (R, T ) theory for an anisotropic background constrained by zero expansion. Also, by applying a perturbation scheme on the f (R, T ) field equations the evolution of a spherical star has been studied by Noureen et al. [46]. Zubair et al. [47] have analyzed the dynamics of gravitating sources along with axial symmetry under the f (R, T ) gravity. Some other relevant studies on the f (R, T ) theory of gravity can be observed in the following works [48][49][50] under different physical motivations. Yousaf et al. [51] have explored the evolutionary behaviors of compact objects in the framework of f (R, T ) gravity theory with the help of structure scalars whereas they [52] have investigated the irregularity factors for a self-gravitating spherical star evolving in the presence of imperfect fluid. The outline of the present study is therefore as follows: In Sec. II the basic mathematical formalism of the f (R, T ) theory has been provided as the background of the study. Thereafter in Sec. III we discuss the field equations and their solutions in f (R, T ) gravity considering the interior spacetime, exterior spacetime, and thin shell cases of the gravastars with their respective solutions. We provide the junction conditions, which are essential in connection to the three regions of the gravastar, in Sec. IV. Several physical properties of the models, viz. proper length, energy content, entropy and equation of state, are discussed in Sec. V. Some concluding remarks are provided in Sec. VI. 
The action of the f(R,T) theory [27] reads S = (1/16π) ∫ f(R,T) √(−g) d⁴x + ∫ L_m √(−g) d⁴x, (1) where f(R,T) is a function of the Ricci scalar R and the trace of the energy-momentum tensor T, L_m is the matter Lagrangian density, and g is the determinant of the metric g_µν. Throughout the paper we assume the geometrical units G = c = 1. Varying the action (1) with respect to the metric g_µν, one can obtain the field equations of f(R,T) gravity (Eq. (2)), where R_µν is the Ricci tensor, ∇_µ denotes the covariant derivative with respect to the symmetric connection associated with g_µν, Θ_µν = g^{αβ} δT_{αβ}/δg^{µν}, and the stress-energy tensor is defined as usual. The covariant divergence of (2) is given by Eq. (3) [53]. It is evident from Eq. (3) that the energy-momentum tensor is not conserved in the f(R,T) theory of gravity, unlike in the general relativistic case. In the present paper we assume the energy-momentum tensor to be that of a perfect fluid, i.e., T_µν = (ρ + p) u_µ u_ν − p g_µν, with u^µ u_µ = 1 and u^µ ∇_ν u_µ = 0. Besides these conditions we also have L_m = −p and Θ_µν = −2T_µν − p g_µν. Following the proposition of Harko et al. [27], we take the functional form of f(R,T) as f(R,T) = R + 2χT, with χ being a constant. One can note that this form has been extensively used to obtain many cosmological solutions in f(R,T) gravity [30-32, 39-41, 54]. By substituting the above form of f(R,T) in (2), we get [30,31] G_µν = 8πT_µν + χT g_µν + 2χ(T_µν + p g_µν), (5) where G_µν is the Einstein tensor. One can easily recover the result of general relativity just by setting χ = 0 in the above Eq. (5). Moreover, the nonconservation of the energy-momentum tensor in this theory takes the explicit form given in Eq. (6). Curiously, by substituting χ = 0 in Eq. (6), one can verify that the energy-momentum tensor is conserved as in the case of general relativity. III. THE FIELD EQUATIONS AND THEIR SOLUTIONS IN f(R,T) GRAVITY For the spherically symmetric metric, the nonzero components of the Einstein tensor are given in Eqs. (8)-(10), where primes stand for derivatives with respect to the radial coordinate r. Substituting Eqs. (4), (8), (9), and (10) in Eq. (5), one can obtain the field equations (11)-(13). Now, from the equation for the nonconservation of the energy-momentum tensor in f(R,T) theory (6), one can obtain an expression for dp/dr (Eq. (14)). If we denote by m the gravitational mass within a sphere of radius r, then from Eq. (11) we can write it in terms of the density (Eq. (15)). Again, from Eqs. (12), (14), and (15) one can get the equation of hydrostatic equilibrium in f(R,T) theory (Eq. (16)), considering the fact that the energy density ρ depends on the pressure p, i.e., ρ = ρ(p). Also, by letting χ = 0, the standard form of the Tolman-Oppenheimer-Volkoff (TOV) equation applicable in the general theory of relativity is retrieved. A. Interior spacetime Following the proposition of Mazur-Mottola [1,2], let us assume the equation of state (EOS) for the interior region as p = −ρ. This EOS is a special form of p = ωρ, with the EOS parameter ω = −1, and is known as the dark energy equation of state. Again, using the above EOS in Eq. (14), one obtains a constant matter density, ρ = ρ_c (say), and the pressure turns out to be p = −ρ_c. Now, using Eqs. (11) and (19), one gets the metric potential λ (Eq. (20)), in which A is an integration constant that is set to zero so that the solution is regular at the center (r = 0); this yields Eq. (21). Again, using Eqs. (11), (12), (18) and (19), one can get a relation between the metric potentials ν and λ (Eq. (22)), where B is an integration constant. Here the spacetime metric is free from any central singularity. Also, the gravitational mass M(D) can be found from the density (Eq. (23)). B. Shell Let us consider that the shell consists of ultrarelativistic fluid obeying the EOS p = ρ.
Zel'dovich [55] conceived the idea of this fluid, known as the stiff fluid, in connection with a cold baryonic universe. In the present context it may arise from thermal excitations with negligible chemical potential or from a conserved number density of gravitational quanta at zero temperature [1,2]. This type of fluid has been extensively used by several authors to study various cosmological [56,57] as well as astrophysical [58][59][60] phenomena. One can note that within the nonvacuum region, i.e., the shell, it is very difficult to find a solution of the field equations. However, it is possible to obtain an analytical solution within the framework of the thin shell limit, i.e., 0 < e^{−λ} ≪ 1. Physically this means that when two spacetimes join together at a place (in our case the vacuum interior and the Schwarzschild exterior), the intermediate region must be a thin shell (see Ref. [61]). Now, in the thin shell, as r → 0, any parameter which is a function of r is, in general, ≪ 1. Under this approximation, along with the above EOS as well as Eqs. (11), (12) and (13), one can find Eqs. (24) and (25). Integrating Eq. (24) we get e^{−λ} = 2 ln r + C, where C is an integration constant and the range of r is D ≤ r ≤ D + ǫ. Under the condition ǫ ≪ 1, we get C ≪ 1, since e^{−λ} ≪ 1. Also, from Eqs. (24) and (25), one can obtain an expression for the other metric potential, in which F is an integration constant. Also, Eq. (14), along with the EOS p = ρ, yields the matter density within the shell, with H being a constant. As ρ ∝ r^4, we can infer that the ultrarelativistic fluid within the shell is denser at the outer boundary than at the inner boundary. C. Exterior spacetime The exterior region, obeying the EOS p = ρ = 0, is described by the well-known static exterior Schwarzschild solution, ds² = (1 − 2M/r) dt² − (1 − 2M/r)^{−1} dr² − r²(dθ² + sin²θ dφ²), where M is the total mass of the gravitating system. IV. JUNCTION CONDITION It is already mentioned that the gravastar consists of three regions, i.e., the interior region (I), the shell (II), and the exterior region (III). The interior region (I) is connected with the exterior region at the junction interface, i.e., at the shell. According to the Darmois-Israel formalism [61,62] there should be smooth matching between regions I and III of the gravastar. The metric coefficients are continuous at the junction surface (Σ), i.e., at r = D, though their derivatives may not be continuous. However, one can determine the surface stress-energy tensor S_ij by using the above mentioned formalism. Now, the intrinsic surface stress-energy tensor S_ij is given by the Lanczos equation [61][62][63][64][65][66], in which κ_ij = K_ij^+ − K_ij^− provides the discontinuity in the second fundamental forms, or extrinsic curvatures. Here the signs "+" and "−" correspond to the interior and the exterior regions, respectively. Now, the second fundamental forms [67][68][69][70][71][72] associated with the two sides of the shell are given by the standard expressions, where ξ^i are the intrinsic coordinates on the shell and n_ν^± are the unit normals to the surface Σ; for the spherically symmetric static metric, n_ν^± satisfy n^µ n_µ = 1. Using the Lanczos equation we can get the surface stress-energy tensor as S_ij = diag[σ, −υ, −υ, −υ], where σ is the surface energy density and υ is the surface pressure. The surface energy density σ and the surface pressure υ can be respectively expressed in terms of the metric functions (Eqs. (36) and (37)). Using these two expressions, the mass of the thin shell can be written as in Eq. (38). Here M is the total mass of the gravastar, and it can be expressed in terms of the shell mass and the radius D. V. PHYSICAL FEATURES OF THE MODEL A. Proper length of the shell
Let us consider that the stiff fluid shell is situated at the surface r = D, defining the phase boundary of region I. The proper thickness of the shell is assumed to be very small, i.e., ǫ ≪ 1. Thus region III starts from the interface at r = D + ǫ. So, the proper thickness between the two interfaces, i.e., of the shell, is determined by the integral ℓ = ∫_D^{D+ǫ} √(e^λ) dr, which can be evaluated using the shell solution for e^λ. B. Energy content In the interior region we consider the EOS in the form p = −ρ, which indicates the negative energy region confirming the repulsive nature of the interior region. However, the energy within the shell can be computed from the density profile. Taking into account the thin shell approximation, one may write the energy E up to first order in ǫ (≪ 1); the resulting relation indicates that the energy of the shell is directly proportional to ǫ, i.e., to the thickness of the shell. C. Entropy According to the prescription of Mazur and Mottola [1,2], in the interior region I the entropy density is zero, which is consistent with a single condensate state. However, within the shell the entropy is given by S = ∫_D^{D+ǫ} 4πr² s(r) √(e^λ) dr, where s(r) is the entropy density at local temperature T(r) and may be written in terms of T(r) [1,2], with α being a dimensionless constant. We note that in the present work we assume geometrical units, i.e., G = c = 1, and also Planckian units, k_B = ℏ = 1. So, the entropy density within the shell follows, and integrating it over the shell gives the total entropy S. D. Equation of state The EOS at r = D can, as usual, be expressed in the form υ = ω(D)σ. Hence, by virtue of Eqs. (36) and (37), the equation of state parameter ω(D) can be written explicitly. For ω(D) to be real it requires 2M/D < 1, together with a further constraint on the surface quantities. Now, if one examines the above expression for ω(D), two possibilities may emerge. VI. CONCLUSION In the present work we have proposed a unique stellar model under f(R,T) gravity as originally conjectured by Mazur-Mottola [1,2] in the framework of general relativity. The stellar model, which they termed a gravastar, may be considered a viable alternative to the black hole. To fulfill the criteria of a gravastar they described the spherically symmetric stellar system by three different regions: the interior core region, the intermediate thin shell, and the exterior spherical region, with a specific EOS for each region. Under this type of specification we have found a set of exact and singularity-free solutions of the gravitationally collapsing system, which present several interesting and physically viable properties within the framework of alternative gravity of the form f(R,T). In studying the above mentioned structural form of a gravastar we have noted several salient aspects of the solution set, as described below: (1) Pressure-density profile: The pressure and density relationship (p = ρ) of the ultrarelativistic fluid in the shell is shown with respect to the radial coordinate r in Fig. 1, and it maintains a constant variation throughout the shell. (2) Proper length: The proper length ℓ of the shell, plotted with respect to the thickness of the shell ǫ (in Fig. 2), shows a gradually increasing profile. (3) Energy content: The energy of the shell is directly proportional to the thickness of the shell ǫ (in Fig. 3). (4) Entropy: The entropy S within the shell has been plotted with respect to the thickness of the shell ǫ (in Fig. 4).
This plot shows a physically valid feature: the entropy increases gradually with the thickness of the shell ǫ, suggesting a maximum value on the surface of the gravastar. Besides these important general features, we have an overall observation regarding the model in f (R, T ) gravity: unlike Einstein's general relativity, there is an extra term involving χ in the present model which plays a definite role and makes the fundamental difference between the expressions of the two theories; vanishing of this coupling constant χ provides a limiting case in which the results of general relativity are recovered (see, e.g., Ref. [19]). This aspect can be verified through a comparative case study between the present work and that of Ghosh et al. [73] in a 4-dimensional background. In this sense f (R, T ) gravity generates more general solutions for the gravastar than general relativity does. One final comment: as a possible astrophysical implication of our results, and as a test to detect gravastars under f (R, T ) gravity, one may study their gravitational lensing effects, as suggested by several authors both for gravastars [74] and for f (R, T ) gravity [49]. Following the methodology of Kubo and Sakai, one may adopt the spherical thin-shell model of a gravastar developed by Visser and Wiltshire [3], which connects an interior de Sitter geometry to an exterior Schwarzschild geometry. Assuming that its surface is optically transparent, they calculate the image of a companion rotating around the gravastar and find that characteristic images appear, depending on whether the gravastar possesses unstable circular photon orbits (Model 1) or not (Model 2). For Model 2, Kubo and Sakai calculate the total luminosity change, i.e., the microlensing effect, and find that the maximal luminosity can be considerably larger than that of a black hole with the same mass. If similar effects are studied under f (R, T ) gravity in the future, one can compare the effects of modified gravity in the above-mentioned tests with the results based on the general theory of relativity.
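As a simple numerical illustration of the thin-shell quantities used in Secs. IV and V, the sketch below (Python; not part of the original analysis) evaluates the shell metric function e^{-λ} = 2 ln r + C across D ≤ r ≤ D + ǫ and the defining integrals for the proper thickness ℓ, the shell energy E and the shell entropy S. All numerical values are illustrative placeholders in geometrized/Planck units; the density is taken as ρ = p = H r⁴ following the stated proportionality ρ ∝ r⁴, and a Mazur–Mottola-type entropy density s(r) = α√(p/2π) is assumed, since the explicit expression is not reproduced above.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative placeholders (geometrized/Planck units), chosen so that the
# thin-shell condition e^{-lambda} = 2 ln r + C << 1 holds for a shell near r ~ 1.
D, eps, C = 1.0, 1.0e-3, 1.0e-4   # inner radius, shell thickness, integration constant (assumed)
H, alpha = 1.0, 1.0               # rho = p = H r^4 (rho proportional to r^4 as stated); alpha dimensionless

e_ml = lambda r: 2.0 * np.log(r) + C        # thin-shell metric function, Eq. (24)
rho  = lambda r: H * r**4                   # stiff fluid p = rho inside the shell

r = np.linspace(D, D + eps, 200)
vals = 2.0 * np.log(r) + C
assert vals.min() > 0.0 and vals.max() < 1e-2, "thin-shell limit e^{-lambda} << 1"

# proper thickness  l = int e^{lambda/2} dr
ell, _ = quad(lambda r: 1.0 / np.sqrt(e_ml(r)), D, D + eps)
# shell energy      E = int 4 pi r^2 rho dr
E, _ = quad(lambda r: 4.0 * np.pi * r**2 * rho(r), D, D + eps)
# shell entropy     S = int 4 pi r^2 s(r) e^{lambda/2} dr, with s(r) = alpha*sqrt(p/2pi) (assumed form)
S, _ = quad(lambda r: 4.0 * np.pi * r**2 * alpha * np.sqrt(rho(r) / (2.0 * np.pi)) / np.sqrt(e_ml(r)),
            D, D + eps)

print(f"rho(D+eps)/rho(D) = {rho(D + eps) / rho(D):.5f}  (>1: denser at the outer boundary)")
print(f"proper thickness ~ {ell:.3e}, shell energy ~ {E:.3e}, shell entropy ~ {S:.3e}")
```

Because the integrands are regular for C > 0, the quadrature converges directly, and the printed density ratio confirms that the ultrarelativistic fluid is denser at the outer boundary of the shell.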
2017-06-07T06:03:28.000Z
2017-02-26T00:00:00.000
{ "year": 2017, "sha1": "f1da58ddfd39b338a2be03cc54b0e7b3f4e60a9a", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1702.08873", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f1da58ddfd39b338a2be03cc54b0e7b3f4e60a9a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
119085573
pes2o/s2orc
v3-fos-license
Investigating Dark Matter and MOND Models with Galactic Rotation Curve Data We study geometries of galactic rotation curves from Dark Matter (DM) and Modified Newtonian Dynamics (MOND) models in $(g_{\rm bar},g_{\rm tot})$-space ($g2$-space) where $g_{\rm tot}$ is the total centripetal acceleration of matter in the galaxies and $g_{\rm bar}$ is that due to the baryonic (visible) matter assuming Newtonian gravity. The $g2$-space geometries of the models and data from the SPARC database are classified and compared in a rescaled $\hat{g}2$-space that reduces systematic uncertainties on galaxy distance, inclination angle and variations in mass to light ratios. We find that MOND modified inertia models, frequently used to fit rotation curve data, are disfavoured at more than 5$\sigma$ independent of model details. The Bekenstein-Milgrom formulation of MOND modified gravity compares better with data in the analytic approximation we use. However a quantitative comparison with data is beyond the scope of the paper due to this approximation. NFW DM profiles only agree with a minority of galactic rotation curves. Improved measurements of rotation curves, in particular at radii below the maximum of the total and the baryonic accelerations of the curves are very important in discriminating models aiming to explain the missing mass problem on galactic scales. I. INTRODUCTION The fact that gravitational potentials on a range of astrophysical scales are deeper than predicted in Newtonian gravity is well established based on a variety of astronomical observations. These include measurements of the rotation curves of baryonic matter in galaxies [1][2][3], the velocity dispersion of galaxies in clusters [4], lensing of merging clusters [5] and measurements of the cosmic microwave background [6]. This fact is also referred to as the "missing mass problem" and observations on all the aforemetioned scales have been argued to be in overall agreement with the presence of particle dark matter as the solution. Challenges for DM models in e.g. accounting for structure on small scales, such as the cusp-core problem [7], the missing sattelites problem [8] and the too-big-to-fail problem [9] remain. The observed rotation curves of baryonic matter in galaxies also motivates modified Newtonian dynamics (MOND) as an explanation for the problem [10]. In MOND the acceleration of test particles is modified, with respect to the Newtonian prediction, below a characteristic acceleration scale a 0 ∼ cH 0 , where c is the speed of light and H 0 the Hubble constant today. This modification accounts for the approximately flat asymptotic velocities of the galactic rotation curves at large radii [11][12][13][14][15] and the correlation of this asymptotic velocity with the total baryonic mass in the galaxy, i.e. the baryonic Tully-Fisher relation [16,17]. On larger scales it has been found that MOND cannot account for the entire missing mass in galaxy clusters [18] or the dynamics of cluster mergers [19,20]. Nor is it obvious if MOND can account for cosmological observations [21][22][23]. For a recent review of the observational status of MOND see [24]. Here we study galactic rotation curve data and the predicted curves in (g bar , g tot )-space (g2space) from MOND and DM models with g tot (r) being the total observed centripetal acceleration of matter in a rotationally supported galaxy as function of radial distance r from the center. 
Similarly g bar (r) is the centripetal acceleration arising from the baryonic (visible) matter distribution assuming Newtonian gravity. We consider the predictions from two variants of MOND known as MOND modified inertia (MI) models [10,25] which have been extensively employed to fit rotation curves [11][12][13][14][15]26] and MOND modified gravity (MG) models in the Bekenstein-Milgrom formulation [27]. In the latter case we employ an analytic approximation for the predicted rotation curves [28]. For DM we consider the Navarro-Frenk-White [29] and the quasi-isothermal density profiles. In this study we find that MOND modified inertia, independent of the specific model used, is disfavoured by the data at more than 5σ. More generally this holds for any model yielding a monotonically increasing function in g2-space. This paper is organized as follows: In section II we illustrate different g2-space geometries using a simple exponential disk model of the baryonic content of galaxies in Fig 1. We give a global classification of geometries using the relative locations of r bar and r tot -the radii of maximum baryonic and total accelerations respectively -summarized in table I. We then consider ratios of accelerations,ĝ bar (r) ≡ g bar (r)/g bar (r bar ) andĝ tot (r) ≡ g tot (r)/g tot (r bar ) and illustrate theĝ2-space geometries in Fig. 2. In section III we present our analysis of the SPARC rotation curve data [42] using the full inferred baryonic matter distribution, including disk, bulge and gas components. The data is shown in g2space andĝ2-space in Fig. 3. The latter eliminates systematic uncertainties on inclination angles and galaxy distances and reduces systematic uncertainties on mass-to-light ratios in the data. We first show that the prediction r bar = r tot from MOND modified inertia models, and consequently thatĝ bar,tot (r bar ) =ĝ bar,tot (r tot ), is in disagreement with data at more than 5σ. This is summarized in table II. We next group the galaxies in SPARC according to the relative locations of r bar and r tot , summarized in table III and show the distribution of data inĝ2-space at radii above and below r bar for the full SPARC data set and for each of these groups in Fig. 4. The averageĝ2-space values of the full data set displays the characteristic geometry of DM with an isothermal density profile. This geometry is shared by the Bekenstein-Milgrom formulation of MOND modified gravity in the approximation used here. However the spread in data is significant. A minority of galaxies -which by selection have data only at large radii -display the characteristic geometry of MOND modified inertia on average while another minority displays that of DM with an NFW profile. In section IV we summarize results and briefly discuss the limitations of our data analysis with respect to MOND modified gravity models and the relevance of improved measurements of rotation curves at small and moderate radii to probe the solution to the missing mass problem. II. MODEL GEOMETRIES IN g2-SPACE We begin by illustrating the geometry of MOND and DM models in g2-space in a simplified setting with the baryonic matter modelled purely as an infinitely thin disk with an exponential surface mass density where Σ 0 is the central surface mass density and r d is the scale length. For all quantitative results later we instead use the inferred baryonic accelerations from the SPARC database [42]. 
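For the razor-thin exponential disk of Eq. (1), the Newtonian midplane acceleration has the well-known closed form g_bar(r) = 2πGΣ0 y [I0(y)K0(y) − I1(y)K1(y)] with y = r/(2 r_d) (Freeman 1970). The sketch below uses this standard result, not the authors' code, to locate the radius r_bar of maximum baryonic acceleration; Σ0 and r_d are placeholder values.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def g_bar_exp_disk(r, sigma0, r_d):
    """Newtonian midplane acceleration of a razor-thin exponential disk (Freeman 1970)."""
    y = r / (2.0 * r_d)
    return 2.0 * np.pi * G * sigma0 * y * (i0(y) * k0(y) - i1(y) * k1(y))

sigma0, r_d = 5.0e8, 2.0             # Msun/kpc^2 and kpc; illustrative values only
r = np.linspace(0.01, 20.0, 2000)    # kpc
g = g_bar_exp_disk(r, sigma0, r_d)

r_bar = r[np.argmax(g)]
print(f"r_bar ~ {r_bar:.2f} kpc, max g_bar ~ {g.max():.3e} (km/s)^2/kpc")
```

Rescaling Σ0 only stretches the curve in g2-space, while r_d fixes where r_bar sits in physical units, which is exactly the behaviour described above.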
We distinguish between two classes of MOND models that yield distinct geometries in g2-space, namely MOND modified inertia models (MI) [10,25] -in which the Newtonian equation of motion is modified but Newtonian gravity is not -and MOND modified gravity models (MG) in the formulation of Bekenstein-Milgrom [27] in which the law of gravity itself is modified. Below we will refer to the total centripetal acceleration of a test mass in the midplane of a disk galaxy, of an unspecified model, as g tot . The acceleration stemming from the visible matter assuming Newtonian gravity is termed g bar . Finally when discussing specific models we will refer to the total acceleration with subscripts corresponding to that model, like g MI for the total acceleration in a MOND modified inertia model. MOND Models: In MOND modified inertia models the total centripetal acceleration, g MI , on a test mass in the galactic plane is related to the Newtonian one, g bar , via the relations where g 0 ∼ 10 −10 m s 2 is the characteristic acceleration scale of MOND. The interpolation function µ(x) smoothly interpolates between the deep Mondian regime µ(x) x for x 1 and the Newtonian regime µ(x) 1 for x 1, but is otherwise undetermined at this level where a complete model of MOND modified inertia is not specified. The inverse interpolation function is ν(y) ≡ I −1 (y)/y with In the Bekenstein-Milgrom formulation of MOND modified gravity models [27] the total centripetal acceleration is determined via a modified Poisson equation for the MOND potential field where the properties of the undetermined interpolation function is as above for MOND modified inertia. By noting that 4πGρ = ∇ · g bar , solutions to this equation are of the form where h is a generic vector field. An approximate expression for the resulting acceleration g MG in MOND modified gravity, analogous to that in Eq. 2, for an exponential disk galaxy is derived in [28]: g + MG = I −1 (g + bar ), g + bar (g bar , r) = g 2 bar + (2πGΣ(r)) 2 . Due to the radial dependence of the fiducial quantities g + bar,M G the MOND modified gravity acceleration g MG (g bar , r) is not a single valued function of the baryonic acceleration g bar . A number of interpolation functions µ(x) and inverse interpolation functions ν(y) have been considered in the literature, e.g. [43,44]. For our analysis the details of the interpolation function are not central and we therefore focus on the inverse interpolation function from [24,45] which was used to fit the SPARC galaxy data in [13,14]: In order to classify g2-space geometries and rotation curve data we define two reference radii, r bar and r tot as the radii at which g bar and g tot are maximum respectively, g bar (r bar ) = max{g bar (r)}, g tot (r tot ) = max{g tot (r)} . We also define the curve segments C ± above and below r bar (similarly we could use r tot as reference radius) of a given model in g2-space as In the left panel of Fig. 1 we show the MOND modified inertia curve from Eq. (2) (solid line) and the approximate modified gravity curve from Eq. (6) (dotted and dashed curves). The reference radii r bar,obs are indicated with dots while the grey curve segments correspond to C + and the black curve segments to C − . In MOND modified inertia models r bar = r tot and the two curve segments coincide, ie. C − = C + , as consequences of the MOND modified inertia function g MI (g bar ) being single valued. Equivalently, the area enclosed by the MOND modified inertia curve C MI is zero, A(C MI ) = 0 as discussed in [46]. 
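A common choice for the inverse interpolation function, and the one used in the SPARC radial-acceleration-relation fits, is ν(y) = 1/(1 − e^{−√y}); the sketch below assumes this form for the Eq. (7) function and maps a Newtonian baryonic acceleration onto the MOND modified-inertia prediction g_MI = ν(g_bar/g0) g_bar.

```python
import numpy as np

g0 = 1.2e-10  # m/s^2, characteristic MOND acceleration scale

def nu(y):
    """Assumed inverse interpolation function: nu(y) = 1 / (1 - exp(-sqrt(y)))."""
    return 1.0 / (1.0 - np.exp(-np.sqrt(y)))

def g_mond_mi(g_bar):
    """MOND modified-inertia prediction: g_MI = nu(g_bar/g0) * g_bar (single valued in g_bar)."""
    return nu(g_bar / g0) * g_bar

g_bar = np.logspace(-12, -9, 7)  # m/s^2
for gb, gm in zip(g_bar, g_mond_mi(g_bar)):
    print(f"g_bar = {gb:.2e} -> g_MI = {gm:.2e}  (deep-MOND scale sqrt(g0*g_bar) = {np.sqrt(g0 * gb):.2e})")
```

In the deep-MOND regime the output approaches √(g0 g_bar), and at high accelerations it approaches g_bar, which is why g_MI(g_bar) traces a single curve in g2-space regardless of the galaxy.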
In the MOND modified gravity approximation of Eq. (6) it follows that r bar < r tot and the curve segment C + is above the curve segment C − in g2-space. Equivalently, the enclosed area of the MOND modified gravity curve is non-zero A(C MG ) > 0. We summarize these properties in the first two rows of table I. In Fig. 1 we have used the exponential disk in Eq. (1) for the baryonic matter distribution, for which r bar 0.41r d and the interpolation function corresponding to Eq. (7). Since g bar (r = 0) = g bar (r = ∞) = 0 the curves shown are closed with curve parameters 0 ≤ r ≤ ∞. The scale length r d of the exponential disk does not influence the geometry of the curves but only how much of the curve is traced up to a given radius r. The central surface density Σ 0 scales the maximum values of g bar , and g tot and therefore stretches or shrinks the curves. For MOND modified inertia, curves with smaller Σ 0 coincide with a part of those with a larger Σ 0 . For MOND modified gravity we illustrate the shrinking and stretching by plotting two different values of Σ 0 1 . In both cases, for a given interpolation function and acceleration scale g 0 , the g2-space curves are completely determined for all galaxies by the baryonic matter distribution. . Dark Matter: In DM models the total centripetal acceleration g DM (r) = g bar (r) + g halo (r) is a sum of the contributions from the baryonic and DM density distributions -here assumed to be a spherical halo for simplicity. To illustrate the g2-space geometry of the considered dark matter models we again employ the exponential disk in Eq.(1) for the baryonic matter and two different DM density profiles where ρ 0,NFW , ρ 0,ISO are mass densities and r s , r c are scale lenghts respectively. The Navarro-Frenk-White profile ρ NFW (r) is motivated by fits to the density of halos in simulations of cold collisioness DM [47] and leads to a cuspy central DM density profile at small radii scaling as ρ NFW (r) ∼ r −1 . The quasi-isothermal DM density profile ρ ISO (r) may be physically realized (at small radii) in models with sizeable DM self interactions and leads to a cored DM density profile at small radii scaling as ρ ISO (r) ∼ r 0 . It has recently been proposed that the diversity of galactic rotation curves [48] can be accomodated in a model of self interacting DM where the resulting DM density profile is approximately quasi-isothermal profile at small radii, set by the DM density and self-interaction cross-section, while following the NFW profile at large radii [49,50]. For both density profiles the 1 If the galactic mass is kept fixed r d and Σ 0 cannot be varied independently centripetal accelerations in the midplane of a disk galaxy g N F W (g bar , r), g ISO (g bar , r) are not single valued functions of g bar . We show examples of DM model curves in g2-space for the quasi-isothermal and NFW profiles respectively in the middle and right panels of Fig. 1. The curve segments C + are shown in orange and cyan respectively while the curve segments C − are shown in red and blue respectively. The full curves in the quasi-isothermal case are closed curves, since also g ISO (r = 0) = g ISO (∞) = 0 while the area of the curve is non-zero A(C ISO ) > 0 as discussed in [46]. The width of the curve is controlled by ρ 0 , as seen by comparing the solid thick and solid thin curves, while the steepness of the curve near r = 0 is controlled by r c as seen by comparing the dashed and dotted curves. . 
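The halo contributions for the two density profiles in Eq. (9) follow from the enclosed mass, g_halo(r) = G M(<r)/r², with M_NFW(<r) = 4πρ0 r_s³[ln(1+x) − x/(1+x)], x = r/r_s, and M_ISO(<r) = 4πρ0 r_c²[r − r_c arctan(r/r_c)]. The sketch below combines these standard expressions with the exponential-disk g_bar to compare r_tot and r_bar; all profile parameters are placeholders.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def g_disk(r, sigma0, r_d):                      # razor-thin exponential disk (Freeman 1970)
    y = r / (2.0 * r_d)
    return 2.0 * np.pi * G * sigma0 * y * (i0(y) * k0(y) - i1(y) * k1(y))

def g_nfw(r, rho0, rs):                          # NFW halo: cuspy, rho ~ 1/r at small r
    x = r / rs
    return 4.0 * np.pi * G * rho0 * rs**3 * (np.log1p(x) - x / (1.0 + x)) / r**2

def g_iso(r, rho0, rc):                          # quasi-isothermal halo: cored at small r
    return 4.0 * np.pi * G * rho0 * rc**2 * (r - rc * np.arctan(r / rc)) / r**2

r = np.linspace(0.05, 30.0, 3000)                # kpc
gb = g_disk(r, 5.0e8, 2.0)                       # placeholder Sigma0 [Msun/kpc^2], r_d [kpc]
for name, gh in [("NFW", g_nfw(r, 1.0e7, 10.0)), ("ISO", g_iso(r, 2.0e8, 2.0))]:
    gt = gb + gh
    r_bar, r_tot = r[np.argmax(gb)], r[np.argmax(gt)]
    print(f"{name}: r_bar = {r_bar:.2f} kpc, r_tot = {r_tot:.2f} kpc -> "
          f"{'r_tot > r_bar' if r_tot > r_bar else 'r_tot <= r_bar'}")
```

With a cored halo the total acceleration peaks outside r_bar, while a cuspy NFW halo (whose acceleration is largest at the centre) pulls the total peak slightly inward, reproducing the qualitative classification discussed here.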
The NFW curves are distinct by not being closed due to the divergence of the profile at small r -the cuspyness of the NFW profile translates into g tot,DM (r = 0) > 0 -and by the fact that the curve segments C + lie below the curve segments C − . The width of the NFW curve is controlled by Models Reference radii Curve segments Curve Area a MOND-MI The curves are closed and the areas spanned by the curves, A(C), are defined for the first three models provided the baryonic accelerations satisfy g bar (r = 0) = g bar (r = ∞) = 0 as is the case for an exponential disk. the concentration parameter c = r vir rs , where r vir is the virial radius, as seen by comparing the solid and dashed curves. We summarize the characteristics of the model geometries in Table I. For MOND modified inertia models r tot = r bar and C − = C + and consequently A(C MI ) = 0. For MOND modified gravity and quasi-isothermal DM models r tot > r bar and the curve segments C + lie above C − in g tot values and consequently A(C ISO ) > 0. Finally for NFW DM models the curve segments C − lie above C + in g tot (with r tot < r bar barely visible) and the area is undefined. The degeneracy of the MOND modified gravity approximation and DM-ISO geometries with respect to these basic characteristics, does not imply the geometry is identical as is evident from Fig. 1. In particular the shape of the DM-ISO curves is controlled by the scale length of the DM density an additional free parameter as compared to the MOND modified gravity approximation. Normalizedĝ2-space: In order to display the average geometry of several galactic rotation curves and to reduce systematic uncertainties it is relevant to consider ratios of accelerations in a normalizedĝ2-space by definingĝ bar,tot (r) ≡ g bar,tot (r)/g bar,tot (r bar ). Another possibility here would be to use r tot as a reference radii in the denominator above. We replot the MOND and DM model geometries from Fig. 1 in the rescaledĝ2-space in Fig (2). III. DATA ANALYSIS We study rotation curve data from the 175 galaxies in the SPARC database [42]. The database provides the observed total rotational velocities v obs (r j ), as a function of observed radii points r j . The database also provides the inferred rotational velocities v disk (r j ), v bul (r j ), v gas (r j ) from the baryonic matter components of the galaxies, divided into stellar disks, bulges and gas components. From this we compute the inferred baryonic acceleration g bar (r j ), and the total observed acceleration g obs (r j ) at each radii r j as We adopt as central values for the mass to light ratios Υ disk = 0.5 M L and Υ bulge = 0.7 M L . The SPARC data base also provides the corresponding (random) uncertainties δv obs (r j ), as well as the uncertainties δi and δD on the galaxy inclination angle i and distance D. Following [42] we further adopt a 10 percent uncertainty on v gas and 25 percent uncertainties on Υ disk,bulge , i.e. δv gas = 0.1v gas and δΥ disk,bulge = 0.25Υ disk,bulge . With this input we compute the δg bar , δg obs uncertainties δg bar (r j ) = (2v gas (r j )) 2 δv 2 gas + v 4 disk (r j )δΥ 2 disk + v 4 bulge (r j )δΥ 2 bulge r j . where we note that the inferred g bar (r j ) are independent of distance D and inclination angle i [15]. remaining uncertainties, δi, δD, δΥ disk,bulge are systematic errors, rescaling all data points within a galaxy in the same direction. To reduce these systematic uncertainties we will analyze ratios of accelerations as in Eq. 
(11), defining: g(r j ) bar,obs = g(r j ) bar,obs g(r bar ) bar,obs ,ĝ(∆r) bar,obs = g(∆r) bar,obs g(∆r bar ) bar,obs ; g(∆r) bar,obs ≡ 1 N ∆ j∈∆r g(r j ) bar,obs where ∆r denotes an interval centered on r that we average g over within a galaxy, ∆r bar is an equivalent interval around r bar and N ∆ denote the number of points in the interval. The ratiosĝ obs (r j ) andĝ obs (∆r) eliminate the systematic uncertanties δi, δD in galaxy inclination angle i and galaxy distance D, up to any significant variation of inclination angle with radius within a single galaxy [15], whileĝ obs (∆r), reduces the systematic error introduced by the single normalization point g obs (r bar ) inĝ obs (r j ) when averaging over several galaxies. As we show explicitly in the appendixĝ bar (r j ) andĝ bar (∆r) reduce the systematic uncertainties in δΥ i significantly, especially near r bar by construction, where we are particularly interested in the geometry. These three sources of systematic uncertainties were found to be the dominant sources of scatter in previous analysis [15]. With the above construction there is only a small remaining systematic error onĝ bar (r j ) from mass to light ratios contained in the small quantity ∆Υ, in Eq. (A5). This means we can to a good approximation take the errors ofĝ obs,bar (r j ) andĝ obs,bar (∆r) from different galaxies to be uncorrelated, even if the error on the mass to light ratios should be correlated for different galaxies. There is also a possible systematic uncertainty onĝ obs (∆r) from data points which may be included in both numerator and denominator when ∆r and ∆r bar overlap. This part of the error budget for g obs (∆r) is however completely uncorrelated between different galaxies under the assumption that v obs values are uncorrelated. The details of the errors are discussed in the appendix A. A. Data Selection We begin with the 175 galaxies in the SPARC database and discard 22 galaxies based on the same quality criteria applied in [13,14]. Ten of these are face-on galaxies with inclination angle i < 30 0 that are rejected to minimize corrections to the observed velocities and twelve are galaxies with asymmetric rotation curves that do not trace the equilibrium gravitational potential. We discard one more galaxy, UGC01281, with large negative inferred speeds v gas for the gas component leaving [13,14]. We only include this additional requirement when explicitly stated, e.g in the data sample N G 2 discussed below, and otherwise keep all the 3143 data points. We show (a part of) the collection of SPARC data in g2-space from these 152 galaxies in the top left panel of Fig. 3 (gray dots) across 3 orders of magnitude in g bar . Also shown in the figure panel are the curves of individual galaxies with error bars that were highlighted in [46]. These error bars include both random and systematic errors from Eq. 14. The blue line is the MOND modified inertia function in Eq. (7) with g 0 1.2 × 10 −10 m s 2 . This value of g 0 is the best fit value to the entire data set found in [13,14] with the additional data requirement of δv obs /v obs < 0.1. The top right panel shows the same figure with this requirement δv obs /v obs < 0.1 imposed. Finally the bottom panels show the same data in the normalizedĝ2-space. While the entire collection of data traces the MOND modified inertia curve, as observed and quantified in [13,14], it also appears that individual galaxies deviate significantly from this curve. 
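The quantities used in Fig. 3 and in the tests below can be assembled directly from SPARC-style velocity columns. A minimal sketch, assuming the standard conventions g_obs = v_obs²/r and g_bar = (v_gas² + Υ_disk v_disk² + Υ_bul v_bul²)/r together with the mass-to-light ratios and fractional errors quoted above, and applying the normalisation at r_bar:

```python
import numpy as np

KPC, KMS = 3.0857e19, 1.0e3   # metres per kpc, m/s per km/s

def sparc_accelerations(r_kpc, v_obs, dv_obs, v_gas, v_disk, v_bul,
                        ud=0.5, ub=0.7, f_gas=0.10, f_ml=0.25):
    """g_obs, g_bar (SI) and their errors from SPARC-style columns in kpc and km/s.

    Assumed conventions: g_obs = v_obs^2/r and g_bar = (v_gas^2 + ud*v_disk^2 + ub*v_bul^2)/r,
    with dv_gas = f_gas*v_gas and dUpsilon = f_ml*Upsilon as stated in the text.
    """
    r = np.asarray(r_kpc) * KPC
    vo, vg, vd, vb = (np.asarray(v) * KMS for v in (v_obs, v_gas, v_disk, v_bul))
    g_obs = vo**2 / r
    g_bar = (vg**2 + ud * vd**2 + ub * vb**2) / r
    dg_obs = 2.0 * vo * np.asarray(dv_obs) * KMS / r
    dg_bar = np.sqrt((2.0 * vg * f_gas * vg)**2 + vd**4 * (f_ml * ud)**2 + vb**4 * (f_ml * ub)**2) / r
    return g_obs, dg_obs, g_bar, dg_bar

def normalise_at_rbar(g_bar, g_obs):
    """g-hat ratios: divide both accelerations by their values at r_bar (argmax of g_bar)."""
    i_bar = int(np.argmax(g_bar))
    return i_bar, g_bar / g_bar[i_bar], g_obs / g_obs[i_bar]

# toy galaxy
r = np.array([1.0, 2.0, 4.0, 8.0, 12.0])
g_obs, dg_obs, g_bar, dg_bar = sparc_accelerations(
    r, v_obs=[70, 105, 120, 118, 115], dv_obs=[4, 3, 3, 4, 6],
    v_gas=[10, 20, 35, 45, 50], v_disk=[65, 95, 100, 85, 70], v_bul=[0, 0, 0, 0, 0])
i_bar, gb_hat, go_hat = normalise_at_rbar(g_bar, g_obs)
print(f"r_bar = {r[i_bar]} kpc, g_bar-hat = {np.round(gb_hat, 2)}, g_obs-hat = {np.round(go_hat, 2)}")
```

Because distance and inclination rescale all points of a galaxy by a common factor, the hatted ratios computed this way are insensitive to δD and δi, as stated above.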
In order to test the geometry of the data we therefore first consider 3 subsets of data points from the 152 galaxies, N 1,2,3 . We denote the set of 152 data points with radii r j = r obs by N 1 and the remaining 146 data points after first requiring δv obs /v obs < 0.1 as in [13,14] by N 2 . Computing the averages ĝ obs,bar on these sets we find that the N 1 and N 2 data sets yield 3σ and more than 5σ discrepancy respectively with the MOND modified inertia prediction ĝ obs (r obs ) MI =1. The discrepancies with the prediction ĝ bar (r obs ) MI =1 are larger as summarized in the significance we consider a larger data set N 3 with a range r j ∈ ∆r obs,bar of points around r obs,bar defined here via r obs,bar + 1 ≥ r j ≥ r obs,bar − 1 . We computeĝ obs (∆r obs ) as defined above using these points for each galaxy and finally the galaxy averages ĝ obs,bar (∆r obs ) over all galaxies with this data. Here we find more than 5σ discrepancy from the MOND modified inertia prediction of unity with both theĝ obs,bar observables. The results are summarized in the last row in Table II. The numbers summarized in Table II imply that MOND modified inertia does not correctly describe the SPARC data, even if the overall scatter around the fitting function (7) was found to be small in [13,14]. To study the geometry of the SPARC data further we group the entire data set into points N + G at r ≥ r bar and points N − G at r < r bar . We further divide the galaxies into 3 groups G 1,2,3 , motivated by the theoretical characterization in Table I. Galaxies in G 1 satisfy r bar = r tot , galaxies in G 2 satisfy r bar < r tot , and galaxies in G 3 satisfy r bar > r tot . The set of data points in G 1 is N G 1 while we divide each set of data points within G 1,2 into subsets N + G 2 , N + G 3 with r j > r bar and N − G 2 and N − G 3 with r j < r bar . We summarize the datasets in galaxy data groups N ± G , N G 1 , N ± G 2,3 we bin the normalized baryonic accelerationsĝ bar (r j ) in 4 bins of widthĝ bar,k −ĝ bar,k−1 = ∆ĝ bar = 0.25 with k = 1, ..., 4 and compute the average values ĝ bar,obs N ± G i,k and associated errors δ ĝ bar,obs N ± G i,k discussed in the appendix. We show the data groups N ± G , N G 1 , N ± G 2,3 together with the binned averages of each corresponding data set in Fig. 4. On all 4 panels the solid black line is the MOND modified inertia prediction while the solid and dashed gray lines are the predictions from the Bekenstein-Milgrom MOND modified gravity approximation at radii above and below r bar . We keep the discussion below qualitative as we have already presented the quantitative discrepancy with MOND modified inertia and because our treatment of MOND modified gravity relies on the approximation for purely disk galaxies in [28]. The top left panel shows data from the full group of SPARC galaxies, equivalent to Fig 3, but with data divided into the two groups N ± G . The data (light purple and purple dots) is seen to display the geometry characterized by r tot > r bar in table I on average. MOND modified inertia (black line) is a good description of the average values of N + G (light purple points with errors) but not in N − G (purple points with errors) at large accelerations. Also the panel shows a large overall spread in data inĝ2-space compared to the data errors on the averages. MOND modified gravity (solid gray for r ≥ r bar and dashed gray line for r < r bar ) is a better description of data except for points at r < r bar with small accelerations. 
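The G1/G2/G3 grouping and the binned averages in ĝ_bar can be written compactly; the bin edges follow the stated choice of 4 bins of width 0.25, while the tolerance used to decide r_tot = r_bar and the toy curves are assumptions of the sketch.

```python
import numpy as np

def classify_galaxy(r, g_bar, g_tot, tol=0.0):
    """Return 'G1' (r_tot == r_bar), 'G2' (r_tot > r_bar) or 'G3' (r_tot < r_bar)."""
    r_bar, r_tot = r[np.argmax(g_bar)], r[np.argmax(g_tot)]
    if abs(r_tot - r_bar) <= tol:
        return "G1"
    return "G2" if r_tot > r_bar else "G3"

def binned_average(gb_hat, go_hat, edges=np.arange(0.0, 1.01, 0.25)):
    """Mean g_obs-hat in bins of g_bar-hat (4 bins of width 0.25)."""
    idx = np.clip(np.digitize(gb_hat, edges) - 1, 0, len(edges) - 2)
    return [go_hat[idx == k].mean() if np.any(idx == k) else np.nan
            for k in range(len(edges) - 1)]

# toy rotation curve
r = np.linspace(0.5, 15.0, 30)
g_bar, g_obs = r * np.exp(-r / 3.0), 1.3 * r * np.exp(-r / 4.0)
gb_hat = g_bar / g_bar.max()
go_hat = g_obs / g_obs[np.argmax(g_bar)]
print(classify_galaxy(r, g_bar, g_obs), binned_average(gb_hat, go_hat))
```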
The panel also shows a large overall spread in data in g2-space compared to the data errors on the averages. The top right panel displays the same quantities but for the data set N G 1 where galaxies have r tot = r bar . Here MOND modified inertia is a very good description of the averaged data -which by selection only samples radii r ≥ r bar . Both the average measurement error and the spread in data is smaller than for the full data set (both at r < r bar and at r ≥ r bar ) on the left panel. The bottom left panel is for the N + G 2 and N − G 2 data sets with r tot > r bar which is true for most of the galaxies (86) and these galaxies are driving the overall geometry and and data spread seen in the top left panel. Despite this spread and the greater average errors there is a clear difference between the two data sets N + G 2 and N − G 2 with MOND modified inertia a poor description of N − G 2 data. Again the MOND modified gravity prediction is clearly a better match to the data, but as opposed to the full data set in the top left panel, it is now the data at r < r bar and large accelerations that yields the biggest deviations. Finally the right hand panel shows the results for the N + G 3 and N − G 3 data sets with r tot < r bar . Here only the spread and errors of the N − G 3 set is big as compared to the N 2 set, with MOND modified inertia model match to the average values of both data sets. Inevitably the MOND modified gravity approximation is also a poor match to the average values of the N − G 3 as the MOND modified gravity approximation always leads to r tot > r bar Again we do not here quantify the deviations of the MOND modfied gravity approximation, as this approximation was developed for an infinitely thin disk galaxy geometry [28] and also does not take into account the external field effect [51] in MOND modified gravity, which might be important for some non-isolated galaxies, see e.g. the recent discussion of the 'dark matter less' dwarf galaxy NGC-1052-DF2 and MOND [52,53]. The analysis does show that the disagreement is driven by the majority of galaxies exhibiting geometries with r tot > r bar but it is offset by a minority of galaxies exhibiting r tot < r bar . This, together with the fact that most data is measured at r > r bar , means that as a whole the SPARC rotation curve data exhibits moderate and Gaussian residuals around the function (7) as found in [13]. This however does not reflect the average geometry of the rotation curves. Our analysis therefore highlights the need to further study MOND Modified gravity models, beyond the MOND modified inertia models most often used in the literature, in order to establish if MOND can account for rotation curve data. Top left panel: The full SPARC data set (shown without errors) divided into points in N + G with r > r bar (light purple) and those in N − G with r < r bar (purple). Also shown are the average data values and their errors computed within the 4ĝ bar bins in N ± G (light purple and purple error bars) as discussed in the text. Finally we show the averaged prediction from MOND MI (black curve) and MOND MG for r > r bar (gray solid) and for r > r bar (gray dashed ). Top right panel: The same as top left but for all data in N G 1 (galaxies where r obs = r bar ) without distinguishing between r > r bar or r < r bar . Bottom left panel: The same as top left but for data in N G 2 (galaxies where r obs > r bar ). Bottom right panel: The same as top left but for data in N G 3 (galaxies where r obs < r bar ). 
We have shown the g2-space geometry of selected MOND and DM models for disk galaxies with exponential mass densities for the visible baryonic mass distribution in Fig. 1 -these are MOND modified inertia and an approximate description of Bekenstein-Milgrom MOND modified gravity models as well as DM models with NFW and quasi-isothermal DM density profiles. We have classified the g2-space geometry of these models in Figs. 1 and 2 using global characteristics: The location of the maximum acceleration due to the baryonic matter and the maximum of the total predicted acceleration, r bar and r tot , whether the curve is closed or open and the area of the closed cuves A(C). MOND modified inertia models, DM models with NFW profiles and DM models with quasi-isothermal profiles can be organized in distinct categories according to these global characteristics, while MOND modified gravity models in the approximation used is degenerate with DM models with quasi-isothermal profiles as summarized in table I. Rotation curve data may also be organized according to this classification. Applying this classification to rotation curve data from the SPARC data base we find that MOND modified inertia, independent of the specific interpolation function used, is in disagreement with the data at more than 5σ. A previous analysis finding disagreement between MOND modfied inertia and SPARC data was presented in [46]. In the current analysis we have considered ratios of accelerationŝ g bar,obs (r) ≡ g bar,obs (r)/g bar,obs (r bar ) with respect to some reference acceleration, here chosen as g(r bar ) in order to reduce the systematic uncertainties in data stemming from galaxy inclination angles i and distances D on g obs as well as mass to light ratios Υ disk,bulge on g bar . If there is a strong radial dependence of these quantities within individual galaxies, and/or between galaxies this can still affect our results. However, changing the conclusion that MOND modified inertia models do not fit the data would require significant radius variations from r bar to r tot . A detailed study of this is beyond the scope of this paper, but e.g. a monotonically decreasing dependence of mass to light ratios with radius [54,55] will not change our result that MOND modified inertia is not in agreement with data. We have presented the rotation curve data from the SPARC data base organized according to the relative location of r bar,tot inĝ2-space in table III and Fig. 4. In addition to the quantitative results on MOND modified inertia, these figures establish qualitatively that subsets of galaxies display different geometric characteristics and neither MOND modified inertia nor MOND modified gravity describe all data subsets. If all data is joined together a fit to MOND modified inertia with gaussian errors and moderate scatter can be obtained [13] since the average data from the data sets N − G 2 and N − G 3 deviate in opposite directions from the MOND modified inertia prediction and since most data points are measured at r > r bar where deviations from MOND modified inertia are not as significant. Since the global geometrical characteristics of the other considered models, both MOND modified gravity (in the approximation employed), DM with isothermal density profile and DM with NFW density profile, differ from MOND modified inertia exactly for data points at r < r bar it is important to investigate these separately. 
In summary we find that MOND modified inertia models, frequently used to fit rotation curve data, are not in agreement with data, while further study of MOND modified gravity models would be required to establish those as a viable explanation of data. Further we find that the detailed geometry in g2-space is useful to probe different DM density distributions, with e.g. only a minority of galaxies exhibiting the global characteristics of NFW profiles. This latter conclusion is well known in the guise of the cusp-core problem. However the g2-space analysis makes it apparent how in particular future improvements in rotation curve data at small radii is extremely useful in probing the DM density profile. This may yield new insights on the required particle physics characteristics of DM, e.g. DM self interactions. More generally the g2-space analysis offers a very useful and striking characterization of models for the missing mass problem. Therefore δg bar is independent of distance D and inclination angle i as discussed in e.g. [15], with the resulting scalings of g obs,bar (r j ) being g bar → g bar = g bar , g obs → g obs = D D sin(i ) 2 sin(i) 2 g obs (A2) Once we form the ratiosĝ bar,obs then alsoĝ obs is independent of distance D and inclination angle i such that under a change of distance D and angle i we havê g bar =ĝ bar ,ĝ obs →ĝ obs =ĝ obs (A3) We include the systematic uncertainty inĝ bar from the mass to light ratios Υ disk,bulge via propagation of errors including covariance, such that where Cov(x a , x a ) = δx 2 a is the error of x a , Cov(x a , x b ) = 0 for uncorrelated errors x a,b , Cov(x a , x b ) = δx a δx b for fully correlated errors x a,b and similar for the functions f k,l . The functions f k,l are the entire set of accelerations g bar,obs (r j ) and from this covariance matrix we find the errors onĝ bar,obs (r j ) and errors on averages ĝ bar,obs (r j ) which we discuss explicitly below. First the errors δĝ bar,obs (r j ) following from Eq. (A4) are δĝ obs (r j ) =ĝ obs (r j ) 2δv obs (r j ) v obs (r j ) 2 + 2δv obs (r bar ) v obs (r bar ) 2 for r j = r bar δĝ bar (r j ) =ĝ bar (r j ) 2v gas (r j )δv gas (r j ) v bar (r j ) 2 2 + 2v gas (r bar )δv gas (r bar ) v bar (r bar ) 2 2 + (∆Υ(r j )) 2 , for r j = r bar , while δĝ obs (r bar ) = δĝ bar (r bar ) = 0. It follows from Eq. (A5) thatĝ bar (r j ) is insensitive to the systematic uncertainties in δΥ k near r bar by construction, where we are particularly interested in the geometry. In summary the ratiosĝ bar,obs eliminate the systematic uncertainties in galaxy distance and disk inclination and significantly reduce that from mass to light ratios. These three sources of systematic uncertainties were found to be the dominant sources of scatter in previous analysis of SPARC data [15]. We have checked explicitly that the error ∆Υ onĝ bar (r j ) is indeed small and while we keep it in all error calculations this means we can takeĝ obs,bar (r j ) values from different galaxies to be uncorrelated even if δΥ k are correlated between different galaxies -of course if mass to light ratios between different galaxies vary randomly then so doĝ obs,bar (r j ) regardless of this residual error being small. 5while values within a galaxy are still correlated via the same normalization pointŝ g obs,bar (r bar ). 
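The per-point error on the normalised acceleration quoted in Eq. (A5) can be coded directly; the sketch below propagates the velocity errors into δĝ_obs(r_j), with the normalisation point r_bar carrying zero error by construction, as stated in the text.

```python
import numpy as np

def dghat_obs(r, v_obs, dv_obs, i_bar):
    """delta g-hat_obs(r_j): fractional v_obs errors at r_j and at r_bar added in quadrature (Eq. A5)."""
    g_obs = np.asarray(v_obs)**2 / np.asarray(r)
    g_hat = g_obs / g_obs[i_bar]
    err = g_hat * np.sqrt((2.0 * np.asarray(dv_obs) / v_obs)**2
                          + (2.0 * dv_obs[i_bar] / v_obs[i_bar])**2)
    err[i_bar] = 0.0          # the normalisation point has zero error by construction
    return g_hat, err

r     = np.array([1.0, 2.0, 4.0, 8.0])
v_obs = np.array([80.0, 110.0, 120.0, 118.0])
dv    = np.array([4.0, 3.0, 3.0, 5.0])
print(dghat_obs(r, v_obs, dv, i_bar=2))
```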
The averages and errors on the average ofĝ obs (and similarly withĝ bar ) for points within a galaxy G can be written as ĝ obs G = 1 N G j∈Gĝ obs (r j ), δĝ obs G = 1 N G r j =r bar ĝ(r j ) 2δV obs (r j ) V obs (r j ) 2 +   r j =r barĝ (r j ) 2δV obs (r bar ) V obs (r bar ) where the error will typically be dominated by the last term, which is O(1) in the number of points N G while the first term is O(1/ √ N G ) due to the single normalization point in the denominator. To improve on this we also employ the average g bar,obs (∆r bar ) for the last results with data set N 3 in where again N G is the number of galaxies used in the average ∆r bar,obs are the intervals around r bar,obs and N ∆ obs,G are included to correct for cases when either ∆r bar or ∆r obs contain less than 3 points. Finally the errors on the binned averages over the points N ± i,k in Fig. 4 are computed by first computing the error on the points G ∩ N ± i,k in N ± i,k from a given galaxy G as in Eq. (A7) which then are uncorrelated between galaxies such that the weighted errors are: bar,obs (r j ) (A10) δ ĝ bar,obs N ±
2018-05-27T22:53:54.000Z
2018-05-27T00:00:00.000
{ "year": 2018, "sha1": "cd7e27ec16c883b56f8481b7324618076f3d2863", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a566a536c140bae50efa4c910ed3cbb0cdf1535d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
252471801
pes2o/s2orc
v3-fos-license
Optimized Preparation of Methyl Salicylate Hydrogel and Its Inhibition Effect on Potato Tuber Sprouting : Potato tuber sprout results in nutrient loss and solanine production. Essential oils have been mentioned to reduce sprouting; however, they can easily evaporate and decompose, thus restricting their application. In this paper, the inhibition effect of methyl salicylate (MeSA) as the main component of wintergreen essential oil on tuber sprouting was evaluated, and MeSA hydrogel was prepared by using the ionic gel method to improve the sprout inhibition efficiency. Based on SEM, FTIR, XRD, and DSC images, MeSA was encapsulated successfully in calcium alginate hydrogel, and the thermal stability of hydrogel was improved. MeSA direct fumigation released sharply on the first day, while MeSA in hydrogel released slowly and steadily; the release of MeSA content was 0.0085 mg mL − 1 on the 7th day. The optimized formulations of MeSA hydrogel were as follows: 1.9% of sodium alginate, 2.2% of CaCl 2 , 1.9:1 of core–wall ratio, and 0.15% of Tween-80. The inhibition effect of MeSA hydrogel was better than that of pure MeSA at 18 days, the sprouting rates of the MeSA and MeSA hydrogel were 42.50% and 13.33%, and the corresponding sprouting indexes were 8.57% and 2.86%, respectively. MeSA was found to inhibit potato tuber sprouting for the first time in this paper; MeSA hydrogel can enhance the inhibitory effect of MeSA on potato sprouting. Introduction Potato (Solanum tuberosum L.) is an annual herb of Solanaceae, which is recognized as one of the most ubiquitous crops in the world, after rice, maize, and wheat. The global production of potatoes is about 368 million tons, with more than 5000 known varieties [1]. Sprouting is the main cause of loss during post-harvest storage and logistics, since it damages tuber nutrients and increases water loss of the tuber surface [2] and the production of the toxic substance solanine; therefore, the development of effective sprouting-inhibition methods is urgently needed in order to reduce tuber sprouting. Many methods have been explored for inhibiting potato sprouting, such as reducing storage temperature [3], using chemical reagents [4] and irradiation technology [5]. During low-temperature storage (about 0 • C), a large amount of reducing sugar is accumulated in tuber, and this depends on the acid invertase activity [6], resulting in browning of the potato and the formation of substances such as acrylamide, which seriously reduces the commercial value of potatoes and endangers the human health. Isopropyl N-(3-chlorophenyl) carbamate (CIPC) is widely used in potato storage due to its low cost and good inhibition effect on potato sprouting, but its degradation products may produce harmful substances to human body and pollute the environment [7]. The minimum residual amount of CIPC is limited to 30 mg kg −1 by the Food and Drug Administration (FDA) and US Environmental Protection Agency (EPA), and the national food safety standard in China also stipulates the same; however, CIPC is no longer approved for use in the European Union [8]. The effects of radiation treatment are irreversible, and gamma irradiation has harmful effects on potato quality and is currently prohibited in the European Union [9]. In recent years, essential oils have been used to inhibit potato sprouting, such as garlic essential oil [10], citronella essential oil [11], and Rosmarinus officinalis essential oil [12]. 
Methyl salicylate (MeSA) is the main component of wintergreen essential oil, which is classified as safe in the US (FDA) and China (GB28355-2012). Studies have shown that MeSA can enhance cold tolerance of apricots [13], control aphids [14], and enhance the resistance of rice to Xanthomonas oryzae pv. Oryzae [15]. However, the inhibitory effect of MeSA on potato sprouting has not been reported. Additionally, essential oils are expensive, and they can easily evaporate and decompose in response to air, temperature, and light; thus, they need to be replaced every few weeks during commercial storage. Therefore, the development of transport systems that can prolong the release of essential oils will effectively suppress potato sprouting during storage and logistics. At present, essential oils are commonly encapsulated in microcapsules [16], microspheres [17], and hydrogels [18] to achieve slow release. Hydrogel is a hydrophilic polymer formed by physical or chemical crosslinking, which is widely used in drug delivery [19], tissue engineering [20], and the food industry [21]. Polysaccharide hydrogels have become a research hotspot due to their structural diversity, good biocompatibility, and biodegradability [22]. Therefore, polysaccharides, including cellulose [23], sodium alginate [24], chitosan [25], and hyaluronic acid [26], are widely used in hydrogels. Sodium alginate is a natural anionic polysaccharide compound extracted from brown algae and seaweed, consisting of β-d-mannuronic acid (M) and α-l-guluronic acid (G) [27]. Sodium alginate can be crosslinked with divalent cations to form hydrogels, such as Ba 2+ , Sr 2+ , Ca 2+ , and Zn 2+ [28]. In addition, sodium alginate has been used as wall materials to encapsulate essential oil or bioactive components because of its good biocompatibility and biodegradability. In recent years, many researchers have used calcium alginate hydrogels to embed essential oils and easily oxidized components. Shin et al. [29] encapsulated volatile and insoluble thyme white essential oil with sulfonated cellulose nanocrystals, and then embedded sodium alginate to form hydrogel beads. Ae. albopictus larvae had the highest mortality rate when treated with SA/PEs hydrogel beads formed by 0.50% CaCl 2 . In addition, the incorporation of emulsified oils into hydrogels protects sensitive bioactive components, such as ω-3 fatty acids, from chemical degradation [30]. Potiwiput et al. prepared dualcrosslinked Alg/CMC hydrogels by using ionic crosslinking and electrostatic interaction for loading drugs tetracycline hydrochloride and silver sulfadiazine [31]. Due to the good anti-bud activity and volatility of essential oils, many researchers have developed anti-bud products in recent years. Ge et al. [32] chose HPβCD to form an inclusion complex with s-(+)-carvone to improve its instability properties and obtain a better sprout-inhibition effect; among them, the s-(+)-carvone/HPβCD complex with host-guest ratio was 1:1. The s-(+)-carvone/HPβCD composite treatment can effectively inhibit the potato sprouting; at the storage of 70 d, the sprouting rate is still less than 20%. Arnon-Rips et al. [33] prepared reactive carboxymethyl cellulose films containing coarse emulsions or nanoemulsions of citral and used them as potato packaging. After 28 days of storage, nano-emulsified citral carboxymethyl cellulose films inhibited sprouting by 80%, resulting in less weight loss and maintaining the organoleptic properties of potato tubers. 
In this study, we found that MeSA had the inhibition effect on potato tuber sprouting for the first time. However, MeSA is extremely susceptible to evaporation and decomposition, and has a higher price than conventional budding suppressors (such as CIPC), and this seriously limits its application. MeSA hydrogel was prepared by ionic crosslinking of sodium alginate and calcium chloride in order to improve the utilization rate of MeSA and reduce the amount of essential oil. Taking the encapsulation efficiency as the response value, the optimum preparation conditions of MeSA hydrogel were obtained by a single-factor test and response-surface optimization test, and the MeSA hydrogel was thereafter used for the inhibition test of potato sprouting. According to the release of MeSA content in hydrogel, the prepared MeSA hydrogel has the advantage of slow release, which has a better inhibition effect on sprouting than using pure MeSA to treat potato tubers and reduces the amount of MeSA at the same time. We aim to provide a new idea for inhibiting potato sprouting during storage and logistics after dormancy release. MeSA is derived from plants, and its safety has been confirmed; therefore, MeSA hydrogel is expected to be widely used as a green and safe potato-bud suppressor. Materials Potato tubers were purchased from a local market (Jinan, China) in January 2022; they were harvested in August 2021 and stored in a refrigerator at 3 ± 0.5 • C. Potato tubers of the early variety "Fovorita", which had passed the dormant stage, were used for all experiments. The potato tubers were transported to the laboratory and screened; then uniformly sized potatoes with no mechanical damage and no pests and diseases were selected and sorted into 10 L plastic baskets. Inhibition Test of Potato Sprouting Potatoes were placed in plastic baskets (10 L), with each basket containing 2 kg of potatoes, and with three baskets for each treatment. Potato tubers were treated with pure MeSA at a dose of 1.0 mL kg −1 . The specific operation was as follows: pure MeSA was dropped onto filter paper, and the filter paper was pasted on the outside of the plastic basket for airtight fumigation. Potatoes were sealed and stored for 18 days at room temperature (25 ± 1 • C); then the sprouting rate and sprouting index were measured at 3-day intervals. Preparation of MeSA Hydrogel The preparation of hydrogel was based on the method of Mokhtari et al. [35], which was improved and optimized. Sodium alginate was added to distilled water and stirred continuously on a magnetic stirrer until dissolved to obtain a 1.0% (w/v) of sodium alginate solution. The obtained solution was sonicated for 20 min to remove the bubbles. Then 0.2% (v/v) of Tween-80 and MeSA were added to the solution and magnetically stirred for 5 min at 1000 rpm. The 1.5% (w/v) of calcium chloride (CaCl 2 ) solution was made by dissolving anhydrous calcium chloride in distilled water. A mixture of sodium alginate and MeSA was filled in a 1 mL syringe, extruded in calcium chloride solution, and then left for 2 h. The fabricated samples were washed with distilled water 3 times to remove excess of the crosslinker. RSM Design for Optimization of Encapsulation Efficiency Combined with the results of single-factor experiments, Response Surface Methodology (RSM) based on Box-Behnken Design (BBD) was used to optimize the encapsulation efficiency. 
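A three-factor Box-Behnken design of this kind can be written down explicitly. The sketch below builds the coded 17-run matrix (12 edge midpoints plus five centre replicates, matching the seventeen experiments described in the next section) and maps it onto illustrative factor levels for sodium alginate, CaCl2 and the core-wall ratio; the numerical low/centre/high levels are placeholders, since Table 1 is not reproduced here.

```python
import itertools
import numpy as np

def box_behnken_3(center_runs=5):
    """Coded 3-factor Box-Behnken design: 12 edge midpoints + centre replicates = 17 runs."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):          # vary two factors at +/-1, third at 0
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * center_runs
    return np.array(runs, dtype=float)

coded = box_behnken_3()
# placeholder (-1, 0, +1) levels: A = sodium alginate %, B = CaCl2 %, C = core-wall ratio
levels = {"A": (1.5, 1.75, 2.0), "B": (1.5, 2.0, 2.5), "C": (1.5, 2.0, 2.5)}
actual = np.column_stack([np.interp(coded[:, k], [-1, 0, 1], levels[f])
                          for k, f in enumerate("ABC")])
print(coded.shape)   # (17, 3)
print(actual[:3])
```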
Taking the sodium alginate concentration (A), CaCl 2 concentration (B), and core-wall ratio (C) as independent variables, and the encapsulation efficiency (Y) as a response variable, we studied the effects of various factors on the encapsulation efficiency of MeSA. Table 1 shows the response surface factors and the coded levels. Table 2 shows the specific experimental scheme, in which seventeen experiments were conducted in triplicate. Determination of Encapsulation Efficiency The encapsulation efficiency of MeSA hydrogel was determined by a UV-Vis spectrophotometer [36]. MeSA hydrogel (0.2 g) and anhydrous ethanol solution (95%, 10 mL) were placed in a mortar and ground until completely broken. The mixed solution was sonicated for 5 min to release the encapsulated MeSA and centrifuged at 12,000 rpm for 10 min at 4 • C. The supernatant was collected and diluted to a suitable concentration, and the absorbance of the samples was measured at a wavelength of 309 nm by UV-Vis spectrophotometer. The concentration of MeSA was determined by using an appropriate calibration curve for MeSA in ethanol (y = 24.674x − 0.0649; R 2 = 0.9969), and the encapsulation efficiency was calculated by using the following equation: The macro-morphology of MeSA hydrogel was observed by a digital camera. The Surface morphology and the microstructure of the product were examined by using a field emission scanning electron microscope (FE-SEM Supra 55 V P, Carl Zeiss, Overkochen, Germany). Prior to analyses, the prepared hydrogels were frozen at −80 • C for 12 h and then freeze-dried for 24 h. Dried samples were glued with conductive adhesive and coated with 15 nm platinum coating. Fourier-Transform Infrared (FTIR) Spectroscopy The molecular structure of sodium alginate, empty hydrogel and MeSA hydrogel composite were characterized by a Fourier-transform infrared (FTIR) spectroscopy in the range of 500 to 4000 cm −1 , at a resolution of 4 cm −1 (FTIR spectroscopy, Nicolet iS 10, Thermo Scientific, Newington, NH, USA). The samples were ground with KBr and then placed on a diamond crystal plate for scanning. Differential Scanning Calorimetry (DSC) Analysis The thermal stability of samples was carried out by using differential scanning calorimeter from 20 • C to 200 • C, at a heating rate of 10 • C/min [37]. Approximately 5 mg of the samples was loaded into a closed aluminum crucible and placed in a sample tank for DSC testing (DSC, Q600, TA Instrument, New Castle upon Tyne, DE, USA). X-ray Diffraction (XRD) Spectroscopy The freeze-dried gel particles were poured into the groove to obtain a flat surface without cracks. The crystallinity of the sample was determined by X-ray diffractometry (XRD, XRD-6100, Shimadzu, Shnaghai, China), using Cu-Kα radiation. The test conditions of the sample were a voltage of 30 kV and a current of 20 mA. The diffraction angle ranges from 5 • to 50 • , and the scanning speed is 8 • /min. Release Properties of MeSA Hydrogel The release properties of MeSA hydrogel were slightly modified according to previous method [38]. In the same way, the predicted values of hydrogel were adjusted to 1.9% of sodium alginate, 2.2% of CaCl 2 , 1.9:1 of core-wall ratio, and 0.15% of Tween-80, and a release test of MeSA was conducted. The 1 g of MeSA hydrogel was immersed in a mixture of 2 mL distilled water and 5 mL ethanol at 25 • C for 24 h; MeSA was also treated under the same conditions. The amount of MeSA in the supernatant was measured by using UV-Vis spectrophotometer at a wavelength of 309 nm. 
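Working back from the calibration curve (y = 24.674x − 0.0649, with x taken here as concentration in mg mL⁻¹, which is an assumption), the encapsulated MeSA mass and the encapsulation efficiency can be computed as in the sketch below; the definition EE% = (MeSA recovered from the gel / MeSA initially loaded) × 100 is the conventional one, used here because the equation itself is not reproduced above.

```python
def mesa_concentration(absorbance, slope=24.674, intercept=-0.0649):
    """Concentration (assumed mg/mL) from the calibration curve: A = slope * c + intercept."""
    return (absorbance - intercept) / slope

def encapsulation_efficiency(absorbance, dilution, extract_volume_ml, loaded_mg):
    """EE% = MeSA recovered from the gel / MeSA initially loaded * 100 (conventional definition)."""
    c = mesa_concentration(absorbance) * dilution      # mg/mL in the undiluted extract
    encapsulated_mg = c * extract_volume_ml            # total MeSA released from the ground sample
    return 100.0 * encapsulated_mg / loaded_mg

# illustrative numbers only
print(f"EE = {encapsulation_efficiency(absorbance=0.45, dilution=20, extract_volume_ml=10, loaded_mg=25):.1f}%")
```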
Inhibition of MeSA Hydrogel on Potato Sprouting The hydrogels were prepared under the conditions corresponding to the lowest encapsulation efficiency, the highest encapsulation efficiency, and the optimal value of encapsulation efficiency in the response surface-optimization experiment. The specific preparation parameters were as follows: (1) lowest encapsulation efficiency: 1.00% of sodium alginate, 1.5% of CaCl 2 , 2.0:1 of core-wall ratio, and 0.15% of Tween-80; (2) highest encapsulation efficiency: 1.75% of sodium alginate, 2.0% of CaCl 2 , 2.0:1 of core-wall ratio, 0.15% of Tween-80; (3) optimal value of encapsulation efficiency: 1.9% of sodium alginate, 2.2% of CaCl 2 , 1.9:1 of core-wall ratio, and 0.15% of Tween-80. Potatoes were treated with the kinds of MeSA hydrogels, and pure MeSA was used as the control; all the treatment doses were 0.5 mLkg −1 . After preparation of the MeSA hydrogel, the water on the surface was absorbed by filter paper and placed in a Petri dish. Potatoes were placed around the Petri dish and stored for 18 days at room temperature (25 ± 1 • C); in the control group, pure MeSA was dropped on filter paper, and the filter paper was pasted on the outside of the plastic basket for 18 days. The above treatments were sealed with polyethylene bags of 0.03 mm thickness, and then the sprouting rate and sprouting index were measured at 3-day intervals. Statistical Analysis All samples were analyzed in triplicate, and the SPSS software (25.0, IBM, Armonk, NY, USA) was used to analyze the data and evaluate its significance. Differences were considered significant at a level of 95% (p < 0.05). The experimental design and response surface optimization analysis were performed by using Design Expert 12.0 (Stat-Ease Inc., Minneapolis, MN, USA, licensed to ICAR-CMFR). Based on the preliminary results of the single-factor experiments, we determined the variables and their ranges. The independent variables (sodium alginate concentration, CaCl 2 concentration, and core-wall ratio) and response variable (encapsulation efficiency) were subjected to an ANOVA and regression analysis to assess the significance of the constructed model. According to the ANOVA analysis results, the significance of linear terms, interaction terms, and quadratic terms can be obtained. Inhibition Test of Potato Sprouting The tuber sprouting morphology during the storage of potatoes treated with pure MeSA is shown in Figure 1A. On the 18th day of storage, the number of potato tuber sprouts in the control check (CK) was much more than that in the pure MeSA treatment, and the terminal sprouts in the treatment were black. At the end of storage (18 d) for potato tubers, the sprouting rate and sprouting index of the pure MeSA treatment were 42.50% and 8.57% respectively, while those of the CK were 100.00% and 24.86%, respectively. These results showed that MeSA could effectively inhibit potato tuber sprouting, and the inhibitory effect of MeSA on potato tuber sprouting was first reported in this study. Single-Factor Experiments The sodium alginate concentration, CaCl 2 concentration, Tween-80 concentration, and core-wall ratio were taken as variables to investigate the effects of these factors on the encapsulation efficiency of the hydrogel. In this study, the sodium alginate concentration increased from 0.5% to 3.25%, and the encapsulation efficiency increased first and then decreased (Figure 2A). When the sodium alginate concentration was 1.75%, the encapsulation rate reached 77.12%. 
An increase in cohesion from a higher sodium alginate concentration allows more MeSA to be encapsulated in the matrix [35]; however, Choi et al. reported that a high sodium alginate concentration (2.5-5%) increased the viscosity of the solution and produced elongated beads [39]. The same phenomenon was observed in this experiment, which may explain why the encapsulation efficiency dropped again at higher sodium alginate concentrations. Sevda et al. suggested that increasing the Na+ concentration reduces the pore space in the beads and thus encapsulates less MeSA [40]. The encapsulation efficiency increased with the CaCl2 concentration and reached a maximum when the CaCl2 concentration was 2.0% (Figure 2C). These results are supported by Soliman et al., who found that increasing the CaCl2 concentration from 0.125% to 0.5% raised the encapsulation efficiency to 23% for thyme; nevertheless, once the CaCl2 concentration exceeded 2.0%, the encapsulation efficiency decreased gradually [41], because a higher CaCl2 concentration makes the pores in the alginate matrix smaller. The encapsulation efficiency was highest when the Tween-80 concentration was 0.2% (Figure 2B): owing to the interaction between the emulsifier and the oil molecules, the interfacial tension changed and the oil entered the water phase to form a stable emulsion, which was then embedded by the ionic crosslinker. When the core-wall ratio was 2:1, the encapsulation efficiency reached its maximum value (78.24%) and then showed a decreasing trend (Figure 2D); the reason may be that too much essential oil could not be completely emulsified, forming an uneven emulsion. Optimized Formulation of Encapsulation Efficiency The effects of the various factors on the encapsulation efficiency are shown in the ANOVA results in Table 3. The coefficient of determination (R2) was 0.9921, indicating that 99.21% of the variation in the response value was related to the selected factors. The adjusted R2 of the model was 0.9819, which revealed that the model fit well.
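The full quadratic response-surface model behind Table 3 can be fitted by ordinary least squares; the sketch below builds the design matrix with linear, interaction and squared terms for the coded factors and reports R² and adjusted R². The response values are placeholders, not the measured encapsulation efficiencies.

```python
import numpy as np

def quadratic_design(X):
    """Columns: 1, A, B, C, AB, AC, BC, A^2, B^2, C^2 for coded factors X (n x 3)."""
    A, B, C = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([np.ones(len(X)), A, B, C, A * B, A * C, B * C, A**2, B**2, C**2])

def fit_rsm(X, y):
    M = quadratic_design(X)
    beta, *_ = np.linalg.lstsq(M, y, rcond=None)
    resid = y - M @ beta
    ss_res, ss_tot = np.sum(resid**2), np.sum((y - y.mean())**2)
    r2 = 1.0 - ss_res / ss_tot
    n, p = M.shape
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)
    return beta, r2, r2_adj

rng = np.random.default_rng(0)
X = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)] +
             [[a, 0, c] for a in (-1, 1) for c in (-1, 1)] +
             [[0, b, c] for b in (-1, 1) for c in (-1, 1)] + [[0, 0, 0]] * 5, dtype=float)
y = 83 - 4 * X[:, 0]**2 - 3 * X[:, 1]**2 - 2 * X[:, 2]**2 + rng.normal(0, 0.5, len(X))  # placeholder response
beta, r2, r2_adj = fit_rsm(X, y)
print(f"R^2 = {r2:.4f}, adjusted R^2 = {r2_adj:.4f}")
```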
The analysis of variance suggested that the model was highly significant (p < 0.0001), whereas the lack-of-fit term was not significant (p > 0.05). Sodium alginate and CaCl2, as well as the quadratic terms A2, B2, and C2, reached extremely significant levels. According to the influence of each factor on the encapsulation efficiency, the order was A > B > C, namely sodium alginate > CaCl2 > core-wall ratio. After analyzing the data in Table 3, we determined that the quadratic multinomial regression equation was as follows:
As shown by the 2D contour plots and 3D response surface plots (Figure 3), with increasing sodium alginate concentration and CaCl2 concentration, the encapsulation efficiency first increased and then decreased slowly. The interaction between the sodium alginate concentration (A) and the CaCl2 concentration (B) was not significant. The addition of Ca ions to the sodium alginate solution caused the formation of an egg-box structure, and the viscosity of the solution increased with increasing Ca ion concentration, which caused the MeSA to become encapsulated and difficult to release. The effect of the CaCl2 concentration and core-wall ratio on the encapsulation efficiency is seen in Figure 3C,F. The 2D contour plot was an ellipse, and the slope of the 3D response surface plot was steep, indicating a significant interaction between the two factors. For the same reason, the interaction between the CaCl2 concentration (B) and core-wall ratio (C) was significant.
Accuracy of Predictive Models
The reliability of the model was validated by comparing the predicted value with the actual value. According to the model, the optimized formulation of the MeSA hydrogel was composed of 1.91% sodium alginate, 2.2% CaCl2, a 1.87:1 core-wall ratio, and 0.15% Tween-80, with a predicted encapsulation efficiency of 83.25%. For practical preparation, the conditions were adjusted to 1.9% sodium alginate, 2.2% CaCl2, a 1.9:1 core-wall ratio, and 0.15% Tween-80. Under these conditions, three validation tests were carried out, and the measured encapsulation efficiency was 83.06%. This actual value was close to the predicted value, indicating that the model had high reliability.
Morphology and Size of the Hydrogel
The macroscopic morphology of the empty hydrogel (without MeSA) and the MeSA hydrogel is illustrated in Figure 4A,B, respectively. The empty hydrogel is transparent and smooth, while the MeSA hydrogel is milky white and not smooth. As clearly seen, the surface of the empty hydrogel is flatter, with only a few bumps and gaps, whereas the MeSA hydrogel is uneven and folded (Figure 4C,D); this may be caused by surface shrinkage of the MeSA hydrogel after vacuum freeze-drying.
Additionally, porous structures appeared on the surface of the MeSA hydrogel, indicating the presence of water between the alginate polymers before drying [42].
Fourier-Transform Infrared (FTIR) Spectroscopy
The FTIR spectra of sodium alginate, the empty hydrogel, and the MeSA hydrogel in the wavenumber region of 4000-500 cm−1 are shown in Figure 5A. The signals in the range of 3600-3000 cm−1 were assigned to the stretching vibrations of the hydroxyl group (O-H). The MeSA hydrogel exhibited a broad and strong hydroxyl stretching band at 3172 cm−1, whereas the hydroxyl stretching vibration peak of sodium alginate appeared at 3250 cm−1. The characteristic peaks of sodium alginate at 1591 cm−1 and 1397 cm−1 corresponded to the asymmetric and symmetric stretching of COO−, respectively. These two peaks shifted to 1588 cm−1 and 1437 cm−1 in the MeSA hydrogel, respectively. The peak of the MeSA hydrogel at 1041 cm−1 suggested the symmetric stretching vibration of C-O-C [43]. Compared with the infrared spectrum of sodium alginate, no new chemical bands appeared in the hydrogel, indicating that no new chemical bond formed between the alginate and MeSA.
Differential Scanning Calorimetry (DSC) Analysis
The DSC thermograms of sodium alginate, the empty hydrogel, and the MeSA hydrogel are given in Figure 5B. Sodium alginate has a strong and narrow absorption peak at 141.92 °C, while the empty hydrogel (without MeSA) and the MeSA hydrogel have wide and weak absorption peaks at 148.73 °C and 171.79 °C, respectively, which may be due to the continuous evaporation of water molecules in the hydrogel matrix. This may be attributed to the hydrophilicity and water retention of sodium alginate, resulting in a large amount of water inside the hydrogel being lost as the temperature increases.
The temperature corresponding to the absorption peak of the MeSA hydrogel was higher than that of the empty hydrogel, indicating that the stability of the hydrogel was improved after the inclusion of MeSA. The polymer chain begins to decompose at 250 °C and is completely decomposed at 300-400 °C. Since the maximum temperature in this study was 200 °C, no endothermic peak of the polymer was found [44].
X-ray Diffraction (XRD) Spectroscopy
The XRD patterns of sodium alginate, the empty hydrogel, and the MeSA hydrogel are shown in Figure 5C. Characteristic peaks appeared in both sodium alginate and the MeSA hydrogel, but at different positions. A typical characteristic of the hydrogel is that the network formed by sodium alginate is amorphous. The characteristic peaks of sodium alginate appeared at 2θ = 13.65° and 22.71°, while those of the MeSA hydrogel appeared at 2θ = 13.77° and 22.96°, respectively. The width of a characteristic peak is closely related to the crystallinity of the material: crystallinity increases as the peak width decreases. The characteristic peaks of sodium alginate are wide, while those of the MeSA hydrogel are narrow and steep, indicating that the crystal strength of the hydrogel increased after encapsulating MeSA. The XRD patterns thus show that MeSA was successfully encapsulated in the sodium alginate hydrogel.
Release Properties of MeSA Hydrogel
The release properties of MeSA are closely related to its ability to inhibit potato sprouting. When released slowly and steadily over a certain period, MeSA is beneficial for controlling potato sprouting. The time-dependent release properties of MeSA are shown in Figure 6. MeSA in the hydrogel was released slowly and steadily, and the release of MeSA was 0.0085 mg mL−1 on the 7th day, whereas MeSA applied by direct fumigation was released sharply on the first day, and its content decreased to 0.0083 mg mL−1 on the 4th day and 0.0011 mg mL−1 on the 7th day, respectively. This indicated that the MeSA hydrogel continued to release MeSA during the week of monitoring. Studies have shown that the calcium concentration and crosslinking time can affect the amount of an encapsulated substance released from a hydrogel: the amount of MeSA released from the hydrogel decreased as the calcium concentration was increased, while the amount released was positively correlated with the crosslinking time [45]. Therefore, in this study, the optimized MeSA hydrogel not only had a high encapsulation efficiency but was also released slowly and stably after encapsulation, thus achieving the purpose of sustained drug supply.
Inhibition of MeSA Hydrogel on Potato Sprouting
At the end of storage (18 d), the control check (CK) potato tubers clearly had longer sprouts, and the MeSA direct fumigation treatment had more sprouts than the other treatments but fewer than the CK (Figure 7A). Among the lowest encapsulation efficiency (69.82%), highest encapsulation efficiency (83.21%), and optimal value treatments, the lowest encapsulation efficiency treatment had the largest number of sprouts, mainly concentrated in the terminal sprouts, while the highest encapsulation efficiency and optimal value treatments had fewer and more slender sprouts. Meanwhile, the sprouting rate and sprouting index of the optimal value treatment were 13.33% and 2.86%, while those of the CK were 100.00% and 28.57%, respectively, a significant difference (p < 0.05). It should be emphasized that the effective dose of the hydrogel is 0.5 mL kg−1, which is only half of the previously effective dose of pure MeSA (1.0 mL kg−1). Under the same storage conditions, the inhibitory effect of the hydrogel on sprouting was significantly better than that of pure MeSA fumigation: after direct fumigation with pure MeSA (1.0 mL kg−1), the sprouting rate and sprouting index of potato tubers at 18 days were 42.50% and 8.57%, respectively. These results showed that the optimal value treatment effectively delayed potato sprouting and maintained the sprouting rate and sprouting index at a low level.
Conclusions
In this study, we first found that MeSA had an inhibitory effect on potato tuber sprouting.
The MeSA hydrogel prepared by the RSM-optimized procedure had an encapsulation efficiency of 83.25%, and it released MeSA slowly and steadily: the release of MeSA was 0.0085 mg mL−1 on the 7th day, whereas MeSA applied by direct fumigation was released sharply on the first day, and its content decreased to 0.0083 mg mL−1 on the 4th day and 0.0011 mg mL−1 on the 7th day. The optimized formulation of the MeSA hydrogel was as follows: 1.9% sodium alginate, 2.2% CaCl2, a 1.9:1 core-wall ratio, and 0.15% Tween-80. After 18 days of storage, the sprouting rate and sprouting index of potato tubers in the CK group were 100.00% and 28.57%, and 42.50% and 8.57% after direct fumigation with pure MeSA at 1.0 mL kg−1, whereas the MeSA hydrogel treatment at 0.5 mL kg−1 proved to be more efficient, at 13.33% and 2.86%, respectively (p < 0.05). The results showed that, after the optimized preparation, the MeSA hydrogel was more effective than pure MeSA in inhibiting potato sprouting, and the usage of MeSA can be reduced by more than half. These results suggest a promising technology for using essential oils to inhibit potato tuber sprouting.
Future Work
MeSA, as a plant-derived essential oil, is non-toxic and harmless to the human body; it can be used as a new, green, and safe potato sprout suppressant. However, some problems remain to be solved in both theory and application: (1) the specific physiological, biochemical, and molecular mechanisms by which the hydrogel inhibits potato sprouting need to be studied further; (2) the odor of MeSA after use needs to be reduced or masked so that consumers will accept it; (3) the preparation process of the MeSA hydrogel is not easy to operate, so the process needs to be improved, on the basis of the optimized formula, to promote the MeSA hydrogel sprout suppressant; and (4) at present, the application of the MeSA hydrogel has only been evaluated in the laboratory. In the next step, the test conditions will be appropriately optimized and validated, and relevant experiments will be conducted under actual transportation or storage conditions, paving the way for the production of a green and effective potato sprout suppressant.
2022-09-24T15:21:35.324Z
2022-09-22T00:00:00.000
{ "year": 2022, "sha1": "bc0c4ce0aacaa2967d85e11797ed8ec2d0dae6d4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2311-7524/8/10/866/pdf?version=1663838737", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "dcb07331488150cd7baa2a9e4dffbfa3fd9b459d", "s2fieldsofstudy": [ "Materials Science", "Environmental Science", "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
132041019
pes2o/s2orc
v3-fos-license
Estimation of Design Flood for Rivers of Saurashtra Region contributing into the Gulf of Khambhat
Design flood has been estimated for rivers of the Saurashtra region contributing into the Gulf of Khambhat using deterministic as well as statistical approaches for the planning, design and management of hydraulic structures. By comparing the results obtained by these approaches, one can easily estimate the flow rate or peak discharge for a given design return period and establish the suitability of each approach for this study area. Nine river basins with 20 dams of the Saurashtra region were analyzed in this study. Though Saurashtra is one of the most water-scarce regions of India, it still suffers from flooding problems, as the number of rainy days is very small and the rainfall intensity is very high. As the basins are regulated, a dam-wise study was preferred. The deterministic approach was carried out using the synthetic unit hydrograph (SUH) and regional flood formulae (RFF) methods for subzone 3(a) provided in the Central Water Commission (CWC) report, 2001. The statistical approach was carried out using rainfall frequency analysis employing Gumbel's EV1 distribution. There is no spill recorded at these hydraulic structures, and the annual flood data for the nine river sites are heavily affected by the storage dams upstream; these data therefore violate the basic assumption of virgin flow, and their analysis was not attempted further. The main objective of the study was to carry out rainfall frequency analysis for these river basins to obtain the 24 hour rainfall for return periods of 25, 50 and 100 years for each individual basin, instead of using the value obtained from the iso-pluvial map, to estimate the design flood. The overall results reveal that, due to the construction of a number of dams in the 9 river basins, design flood estimation for each dam using the deterministic approach is more feasible. Revised design floods using the SUH and RFF methods on the basis of the estimated rainfall indicate over-estimated and under-estimated design floods. Since the percentage difference between the revised SUH and revised RFF methods is very small, the higher of the two values should be used for safety.
Introduction
Flood, a natural disaster, is responsible for loss of life and property the world over. Floods damage property, endanger the lives of humans and animals, and also affect the environment and aquatic life negatively. Floods have been occurring repeatedly in India; approximately 40 million ha (12%) of the area of India has been identified as flood prone 18. For mitigating flood disasters, various structural and non-structural measures are adopted. Structural measures include protection works and flood embankments, while non-structural measures include flood forecasting, flood warning and flood plain zoning. Design flood estimates are required for the design of various hydraulic structures such as weirs, barrages, dams and embankments, and for flood protection/relief schemes 5,14. Flood forecasts are required for the operation of various flood control structures and for taking emergency measures such as maintenance of flood levees, evacuation of people to safe localities, etc.
Whenever rainfall or river flow records are not available at or near the site of interest, it is difficult for hydrologists or engineers to derive reliable flood estimates directly. In such situations, flood formulae developed for the region are one of the alternative methods for estimation of design floods, particularly for small-to-medium catchments. The conventional flood formulae developed for different regions of India are empirical in nature and do not provide estimates for a desired return period. A number of studies have been carried out for the estimation of design floods for various structures by different Indian organizations. Among these, the prominent studies are those carried out jointly by the Central Water Commission (CWC), Research Designs and Standards Organization (RDSO) and India Meteorological Department (IMD) using the method based on the synthetic unit hydrograph and design rainfall, considering physiographic and meteorological characteristics for estimation of design floods 3, and the regional flood frequency studies carried out by RDSO using the USGS and pooled curve methods 12 for various hydrometeorological subzones of India. The concept of the geomorphologic instantaneous unit hydrograph (GIUH) was introduced by Rodriguez-Iturbe and Valdes 17. The topographic and geometric properties of the watershed and its drainage channel network are reflected by geomorphology 6. Snyder (1938) proposed the synthetic unit hydrograph (SUH) approach for ungauged basins 21. A desirable method should satisfy the requirements of universal acceptability, ease of use with a minimum of data, robustness, and reliability 14. Nowadays, GIS and remote sensing techniques are being used extensively to monitor disasters like droughts and floods 7.
Practically, in the design of all hydrologic structures the peak flow that can be expected with an assigned frequency (say 1 in 100 years) is of primary importance to adequately design the structure to accommodate its effect. The design of bridges, culvert waterways and spillways for dams, and the estimation of scour at a hydraulic structure, are some examples wherein flood-peak values are required. To estimate the magnitude of a flood peak, the following methods are available: (1) rational method; (2) empirical method; (3) unit-hydrograph technique; and (4) flood-frequency studies 10. The use of a particular method depends upon (i) the desired objective, (ii) the available data and (iii) the importance of the project. Further, the rational method is applicable only to small-size (<50 km2) catchments, and the unit-hydrograph method is normally restricted to moderate-size catchments with areas less than 5000 km2 13,15.
In the present study, design floods for various structures in the 9 river basins, namely Wadhavan-Bhogavo, Limbdi-Bhogavo, Sukhbhadar, Utavali, Padalio, Khalkhalia, Ghelo, Keri and Kalubhar, have been estimated. A deterministic approach based on unit hydrograph theory developed by CWC 4 and a statistical approach based on frequency analysis have been used for the design flood estimation.
Study area and data availability
The Saurashtra basin is a region of western India, located on the Arabian Sea coast of the state of Gujarat. Saurashtra is bounded on three sides by the waters of the sea, namely in the north by the Gulf of Kutch with some part by the Little Rann, in the west and south by the Arabian Sea, and in the south-east by the Gulf of Khambhat, while to the east is the mainland of Gujarat, as shown in Figure 1 8,9,19. The area covered by the Saurashtra region is 59,360 sq. km, of which 9000 sq. km is under study 20. The Saurashtra basin lies between latitudes 20°N and 24°N and longitudes 69°E and 73°E. The rivers of the Saurashtra region under study are: Wadhavan-Bhogavo, Limbdi-Bhogavo, Sukhbhadar, Utavali, Khalkhalia, Padalio, Keri, Ghelo and Kalubhar. There are 20 dams situated in these river basins. Details of the river basins and the dams situated in them are shown in Tables 1 and 4. Basin maps with dam sites are shown in Figures 3 to 10. There are 13 rain gauge stations and 9 G&D stations in these river basins, which are shown in Figure 2. The rainfall data were collected from IMD as well as the Kalpasar Department, and the G&D data were collected from the Kalpasar Department of Gujarat. Details of the G&D stations and rain gauge stations are shown in Tables 2 and 3. For the synthetic unit hydrograph analysis, catchment data such as river length, catchment area and equivalent slope are required, and these were computed using the SWAT model and Arc-GIS. SRTM data of 90 m resolution were used for this purpose.
Methodology
In this study, a deterministic approach based on unit hydrograph theory and a statistical approach based on frequency analysis are used for design flood estimation.
Deterministic approach
Due to the paucity of data, the regional approach based on the synthetic unit hydrograph developed by the Central Water Commission (CWC), 1987 has been used 2. The study area falls under subzone 3(a).
Synthetic unit hydrograph (SUH) method
The following relationships for the SUH method have been developed by CWC (1987):
W50 = 2.284/(qp)^1.00 ... (4)
Qp = qp × A ... (5)
W75 = 1.331/(qp)^0.991 ... (6)
WR50 = 0.827/(qp)^1.023 ... (7)
WR75 = 0.561/(qp)^1.037 ... (8)
Tm = tp + 0.5 ... (9)
where
A = total catchment area in km2;
L = length of the longest main stream along the river course in km;
Sc = equivalent stream slope in m/km;
tp = time from the centre of effective rainfall duration to the peak, in hr;
qp = peak rate of discharge in cumec per sq. km;
Qp = peak discharge of the unit hydrograph (UH) in m3/s;
TB = base width of the UH in hr;
Tm = time from the start of rise to the peak of the UH, in hr;
W50 = width of the UH measured at 50% of the peak discharge ordinate, in hr;
W75 = width of the UH measured at 75% of the peak discharge ordinate, in hr;
WR50 = width of the rising limb of the UH measured at 50% of the peak discharge ordinate, in hr;
WR75 = width of the rising limb of the UH measured at 75% of the peak discharge ordinate, in hr.
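As a small illustration of how the quoted relations are applied in practice, the sketch below evaluates the SUH widths and peak discharge for a catchment once tp and qp are known; the expressions that give tp and qp themselves belong to the same CWC set but are not reproduced above, so here they are simply taken as inputs, and the example numbers are hypothetical rather than values for the study basins.

def suh_parameters(t_p_hr, q_p_cumec_per_km2, area_km2):
    """Synthetic unit hydrograph parameters from the CWC subzone-3(a) relations."""
    qp = q_p_cumec_per_km2
    return {
        "Qp (m3/s)": qp * area_km2,            # Eq. (5)
        "W50 (hr)":  2.284 / qp ** 1.000,      # Eq. (4)
        "W75 (hr)":  1.331 / qp ** 0.991,      # Eq. (6)
        "WR50 (hr)": 0.827 / qp ** 1.023,      # Eq. (7)
        "WR75 (hr)": 0.561 / qp ** 1.037,      # Eq. (8)
        "Tm (hr)":   t_p_hr + 0.5,             # Eq. (9)
    }

# Hypothetical catchment: 250 km2, tp = 4 h, qp = 0.9 cumec per km2
print(suh_parameters(t_p_hr=4.0, q_p_cumec_per_km2=0.9, area_km2=250.0))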
Regional flood formulae method
The regional flood formulae have been developed by CWC to estimate 25, 50 and 100 year return period flood values. The meteorological variability from region to region has been accounted for in these formulae. Other factors that influence the peak, such as the shape of the catchment and the slope of the stream, have also been included, thereby overcoming most of the limitations of the empirical/rational formulae. The regional flood formula for subzone 3(a) is used here to estimate the design flood 2.
Statistical approach
The statistical approach, otherwise called frequency analysis, may be performed on past recorded data of annual peak series. Frequency analysis is carried out on the available record of annual flood peak discharges or annual rainfall events of the region.
Frequency analysis for individual gauged sites
A frequency analysis study interprets a past record of events to predict the future probabilities of occurrence and to estimate the magnitude of an event corresponding to a specific return period 1. If the event records are of sufficient length and reliability, they may yield satisfactory estimates. The method, however, does not provide a hydrograph shape but gives only a peak discharge of known frequency. The processed data series are analysed to ensure that the fundamental assumptions of frequency analysis are satisfied. The data series is checked for randomness, presence of trend and outliers. The presence of trend is tested using Kendall's rank correlation test and the turning point test. Randomness and outliers are tested by Anderson's correlogram test and the Chow test, respectively. Detailed at-site flood frequency analysis is carried out using various distributions such as Normal, Log-Normal, Pearson type III, Log-Pearson type III and Gumbel's extreme value distribution 9. Gumbel EV1 is the most commonly used distribution, and its details are given below 1,15,16.
Gumbel EV-1 type distribution
This is one of the most commonly used distributions in flood frequency analysis and was introduced by Gumbel in 1941. It is widely used for extreme values in hydrologic and meteorological studies for the prediction of flood peaks, maximum rainfalls, maximum wind speeds, etc. It is the double exponential distribution (known as Gumbel's distribution, extreme value type 1, or Gumbel's EV-1 distribution). The CDF of the EV-1 distribution is defined as F(x) = exp[−exp(−(x − u)/α)], where u and α are the location and scale parameters, respectively.
Regional flood frequency analysis
Kumar (2009) developed regional flood frequency relationships using the L-moment approach for ungauged catchments for 17 hydro-meteorologically homogeneous subzones. Out of the 17 subzones, the Saurashtra region falls under subzone 3(a), and the relationship developed by Kumar (2009) for this subzone is of the form 11:
QT = CT × A^b
where
QT = flood estimate for an ungauged catchment in m3/s for a T year return period;
CT = a regional coefficient;
A = catchment area in km2;
b = a regional coefficient, which for subzone 3(a) has the value 0.383.
Values of CT for various return periods for subzone 3(a) are shown in Table 5.
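The core of the statistical approach above, fitting Gumbel's EV-1 distribution to an annual-maximum 24 hour rainfall series and reading off the 25, 50 and 100 year quantiles, can be sketched as follows using simple method-of-moments estimators. The rainfall series is an invented placeholder, not data from any of the nine basins.

import numpy as np

# Placeholder annual-maximum 24 hour rainfall series (mm)
rain_mm = np.array([112.0, 95.0, 143.0, 88.0, 160.0, 131.0, 105.0,
                    177.0, 99.0, 150.0, 123.0, 140.0, 92.0, 168.0])

mean, std = rain_mm.mean(), rain_mm.std(ddof=1)
alpha = np.sqrt(6.0) * std / np.pi        # scale parameter
u = mean - 0.5772 * alpha                 # location parameter (Euler's constant)

def gumbel_quantile(T):
    """24 hour rainfall (mm) with return period T years, i.e. F = 1 - 1/T."""
    return u - alpha * np.log(-np.log(1.0 - 1.0 / T))

for T in (25, 50, 100):
    print(f"{T:>3}-year 24 hr rainfall = {gumbel_quantile(T):.1f} mm")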
Results and Discussion
In this study, the above approaches were initially applied to the 20 dams as well as the 9 river basins on the basis of the 24 hour rainfall for a T year return period given in the iso-pluvial map. After the rainfall frequency analysis, the estimates were revised only for the dams, because these basins are heavily affected by the dams situated upstream. The results obtained by the above approaches using the 24 hour rainfall for a T year return period given in the iso-pluvial map (IMD, Pune) are shown in Tables 6 and 7, while those developed by the basin-wise rainfall frequency analysis are shown in Tables 8 and 9. From Table 6, it can be seen that the design flood estimates for return periods of 25, 50 and 100 years for the dams in the Wadhavan-Bhogavo, Limbdi-Bhogavo, Sukhbhadar, Utavali, Khalkhalia, Padalio, Keri and Kalubhar basins are under-estimated, while those for Ghelo are over-estimated, when compared with the results obtained from Table 8. The reason behind this variation is the value of the T year return period 24 hour rainfall used: the rainfall frequency analysis showed that the Wadhavan-Bhogavo, Limbdi-Bhogavo, Sukhbhadar and Kalubhar river basins have higher rainfall values than those given in the iso-pluvial map. Thus, the estimated values of 24 hour rainfall for return periods of 25, 50 and 100 years were used to revise the design floods for the dams present in these river basins. The revised design floods for the dams for return periods of 25, 50 and 100 years by the SUH and RFF methods are tabulated in Table 9, and from Table 9 it is found that the percentage difference between the revised SUH and revised RFF methods is very small.
Using the relationship developed by Kumar (2009), the design flood estimates for return periods of 25, 50 and 100 years for the dams and rivers are given in Tables 10 and 11. From Tables 10 and 11 it is found that the percentage difference between the L-moment and revised SUH methods is very large; the L-moment method under-estimates the design floods for the dams as well as the river basins.
The annual flood data for the nine river sites are heavily affected by the storage dams upstream. These data therefore violate the basic assumption of virgin flow, and flood frequency analysis of these data was not attempted further.
Conclusions
After the analysis of these river basins and the dams situated on them, the following conclusions are drawn:
• For the study area, the 24 hr rainfall values for return periods of 25, 50 and 100 years differ among the 9 river basins and also differ from the iso-pluvial map recommended by IMD, Pune for this region.
• Revised design floods using the SUH and RFF methods on the basis of the estimated rainfall indicate over-estimated and under-estimated design floods.
• Due to the construction of a number of dams in the 9 river basins, design flood estimation for each dam using the deterministic approach is more feasible.
• The percentage difference between the revised SUH and revised RFF methods is very small, so for safety the higher of the two values will be used.
• The regional flood frequency relationship based on L-moments under-estimates the design floods, with an average percentage difference of 32.023% for dams and 28.28% for river basins.
• The reason for the large average percentage difference was investigated, and the data analysis reveals that there are large storages in these basins; hence the application of either the RFF or the L-moment based method may not be appropriate.
2018-12-27T10:57:18.195Z
2016-12-25T00:00:00.000
{ "year": 2016, "sha1": "ca863dfd51e7dec89beab4a0c75ff22450ed4a4d", "oa_license": "CCBY", "oa_url": "http://www.cwejournal.org/pdf/vol11no3/Vol11_No3_p_869-882.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ca863dfd51e7dec89beab4a0c75ff22450ed4a4d", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Geology" ] }
271318789
pes2o/s2orc
v3-fos-license
Novel Piperazine Derivatives of Vindoline as Anticancer Agents
A series of novel vindoline–piperazine conjugates were synthesized by coupling six N-substituted piperazine pharmacophores at positions 10 and 17 of the Vinca alkaloid monomer vindoline through different types of linkers. The in vitro antiproliferative activity of the 17 new conjugates was investigated on 60 human tumor cell lines (NCI60). Nine compounds presented significant antiproliferative effects. The most potent derivatives showed low micromolar growth inhibition (GI50) values against most of the cell lines. Among them, the conjugates containing [4-(trifluoromethyl)benzyl]piperazine (23) and 1-bis(4-fluorophenyl)methyl piperazine (25) in position 17 of vindoline were outstanding. The first was the most effective on the breast cancer MDA-MB-468 cell line (GI50 = 1.00 μM), while the second was the most effective on the non-small cell lung cancer cell line HOP-92 (GI50 = 1.35 μM). The CellTiter-Glo Luminescent Cell Viability Assay was performed with conjugates 20, 23, and 25 on non-tumor Chinese hamster ovary (CHO) cells to determine the selectivity of the conjugates for cancer cells. These compounds exhibited promising selectivity, with estimated half-maximal inhibitory concentration (IC50) values of 2.54 μM, 10.8 μM, and 6.64 μM, respectively. The obtained results may have an impact on the design of novel vindoline-based anticancer compounds.
Piperazine (2), as a privileged structure, is found in many drugs [14] and natural compounds [15]. It is especially used in pharmaceutical research due to its excellent physicochemical characteristics; it provides beneficial pharmacodynamic and pharmacokinetic properties (e.g., solubility, bioavailability, etc.) to the molecule to which it is coupled [16,17].
Based on these results and the literature data, we continued our project, during which we coupled various N-substituted piperazines to position 17 of vindoline (1) via two types of linkers. A more flexible linker was formed using 4-bromobutyric acid, while a slightly more rigid linker was formed with succinic anhydride. To study the structure-activity relationships, the piperazine derivatives were also attached to position 10 of vindoline (1); in this case, the linker was built with chloroacetyl chloride. The piperazine pharmacophores planned for coupling were selected based on the work of İbiş et al. [16]. In that project, piperazine-oxazole hybrids were produced, which demonstrated remarkable cytotoxicity on all examined cell lines, with IC50 values in the range of 0.09-11.7 µM. We chose the following six cheap and easily available piperazine derivatives: 1-methylpiperazine (5), 1-(4-trifluoromethylphenyl)piperazine (6), 1-[4-(trifluoromethyl)benzyl]piperazine (7), 1-(4-fluorobenzyl)piperazine (8), 1-bis(4-fluorophenyl)methyl piperazine (9), and 1-(2-furoyl)piperazine (10) (Figure 2).
In summary, the aim of this work was to combine a Vinca alkaloid with one of the most frequent pharmacophore molecules, which is used in a wide range of fields of biological action. It was presumed that even the ineffective vindoline could possess important antitumor activity when connected with piperazines. Moreover, we had another goal, namely the investigation of the cytotoxic activity of the new compounds not only on cancerous cells but also on non-tumor cells to determine their selectivity.
Chemistry
Preparation of the Linker-Containing Vindoline Derivatives
The substitution possibilities for forming linkers on the vindoline skeleton are primarily positions 10 and 17 (Scheme 1). The linkers were built according to known procedures. The synthesis of 10-chloroacetamidovindoline (12) was previously presented by us [26] through the N-acylation reaction of 10-aminovindoline (11) with chloroacetyl chloride. Although a three-step synthesis of 10-aminovindoline (11) was also described by our research group, a simpler and shorter synthetic procedure was applied in this project [26]. The nitrosation of vindoline (1) with sodium nitrite in an acidic medium and the subsequent reduction by sodium borohydride gave the desired amino derivative (11) in a better overall yield (92%) than the previously described process [27].
Coupling of the Linker-Containing Vindoline Derivatives with Piperazines
10-Chloroacetamidovindoline (12) was reacted with the corresponding piperazine derivatives (5-10) (Figure 2) in acetonitrile solution in the presence of potassium carbonate (Scheme 2) in N-alkylation reactions, which resulted in the vindoline-piperazine conjugates 16-21 in medium to excellent yields. Similarly, compounds 3 and 22-26 were prepared from the 17-O-4-bromobutanoyl derivative (14) with the given piperazines (5-10) (Scheme 3) using two methods, namely with triethylamine in dichloromethane solution (Method A) and in acetonitrile in the presence of potassium carbonate (Method B). Method B was used in cases where conversion was low with Method A.
As mentioned, compound 3 had already been synthesized by us [25]; however, it was now obtained with a slightly better yield. The latter was also necessary because we planned to subject it to a more extensive biological screening (NCI60) than before. Thus, since the same six piperazine derivatives (5-10) were coupled to different sites of vindoline through various types of linkers, these piperazine conjugates make a preliminary study of the structure-activity relationships in connection with the anticancer activities possible.
The screening results are given in the Supplementary Materials (Tables S1-S3), where the biological activities were determined at a 10−5 M concentration. The growth percentages show the amount of living cancer cells compared to a reference; negative numbers indicate a significant decrease in cell number. As expected, a notable antiproliferative effect was not shown by vindoline (1), the linker-containing vindoline derivatives (14 and 15), or their precursors (11 and 13), except for 10-chloroacetamidovindoline (12), which exerted moderate cytostatic activity.
The experimental data obtained for compounds 16-21, containing the piperazine side chain coupled at position 10 of vindoline (1), are presented in Table S1. One of them, derivative 17, in which the piperazine nitrogen atom bears a 4-trifluoromethylphenyl substituent, proved to be highly effective against several types of cancer. In the case of colon cancer, the growth percent rate was found to be −84.40% on the KM12 cell line. For CNS cancer, growth values below −80% were obtained on the SF-539 and SNB-75 brain tumor cell lines. For melanoma, 17 was very effective on almost all cell lines, with the two most outstanding values being −98.17% (on SK-MEL-5) and −95.37% (on LOX-IMVI). In the case of breast cancer, a −86.10% growth rate was obtained on the MDA-MB-231/ATCC cell line. The 1-bis(4-fluorophenyl)methyl piperazine-containing compound (20) showed moderate antiproliferative activity on some colon, CNS, and melanoma cell lines.
Data for compounds 27-32, coupled with vindoline (1) at position 17 through an amide bond, are shown in Table S3. The activity of compound 28, containing the 4-trifluoromethylphenyl substituent, is also noteworthy in this context. Similar to the 1-[4-(trifluoromethyl)benzyl]piperazine-containing derivative 29, compound 28 is highly effective and rather selective in the cases of colon cancer (COLO-205, −90.33%) and melanoma (SK-MEL-5, −92.46%). The 1-bis(4-fluorophenyl)methyl piperazine-containing compound 31 should also be mentioned; it proved to be effective on several types of cancer, particularly on the leukemia MOLT-4 cell line (−98.81%).
Since compounds 17, 20, 22-25, 28, 29, and 31 showed significant antiproliferative effects on several cancer cell lines during the one-dose test, they were subjected to five-dose screening. The 50% growth inhibition (GI50) values and their means are given in Table 1. Among them, the [4-(trifluoromethyl)benzyl]piperazine-containing derivative 23 and the 1-bis(4-fluorophenyl)methyl piperazine-containing compound 25 were the most potent agents. These two derivatives showed less selectivity but outstanding cytotoxic activity, with GI50 < 2 µM on almost all cell lines. The most significant activity was shown by compound 23 on the breast cancer MDA-MB-468 cell line (GI50 = 1.00 µM), while compound 25 showed the most significant activity on the non-small cell lung cancer HOP-92 cell line (GI50 = 1.35 µM). In addition, compounds 22, 28, and 31 exhibited mean GI50 values below 4 µM. It should also be emphasized that compound 24 had a GI50 value of 1.00 µM on the renal cancer RXF 393 cell line and that derivative 28 had a GI50 value of 1.17 µM on the leukemia MOLT-4 cell line.
Effect of Selected Conjugates on Cell Viability of Non-Tumor Chinese Hamster Ovary (CHO) Cell Lines
Three promising conjugates (20, 23, 25) were selected for testing on the non-tumor CHO cell line in the CellTiter-Glo Luminescent Cell Viability Assay (Promega Corporation, Madison, WI, USA) to reveal their selectivity for cancer cells. Treatment of CHO cells for 48 h with the compounds in the 10−7-10−5 M concentration range resulted in a concentration-dependent decrease in the luminescent signal, which is proportional to the amount of ATP produced by living cells as an indicator of cell viability. Piperazine conjugate treatment did not affect CHO cell viability at 10−7 and 10−6 M concentrations, while treatment at a 10−5 M concentration resulted in significantly decreased cell viability, with values of 1.25 ± 0.77%, 52.76 ± 7.25%, and 33.45 ± 19.62% for compounds 20, 23, and 25, respectively (Figure 3). Based on these data, the IC50 values of compounds 20, 23, and 25 can be roughly estimated to be 2.54 μM, 10.8 μM, and 6.64 μM, respectively; these results show the promising selectivity of the compounds for tumor cells, as a significant inhibitory effect on non-tumor cell viability is exerted only at remarkably higher concentrations compared with those effective on the investigated tumor cell lines.
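For orientation, the sketch below shows one simple way a GI50 value such as those in Table 1 can be read from five-dose growth-percent data, by interpolating where the growth curve crosses 50%. The actual NCI calculation includes a time-zero correction and other details omitted here, and the concentrations and growth values are illustrative placeholders, not measurements for any of the conjugates.

import numpy as np

conc_M = np.array([1e-8, 1e-7, 1e-6, 1e-5, 1e-4])          # five-dose series
growth_pct = np.array([98.0, 90.0, 62.0, 21.0, -35.0])      # percent growth vs control
log_c = np.log10(conc_M)

def gi50(log_c, growth, level=50.0):
    """Interpolate the concentration at which growth crosses `level` percent."""
    for i in range(len(growth) - 1):
        g0, g1 = growth[i], growth[i + 1]
        if (g0 - level) * (g1 - level) <= 0:                 # crossing in this interval
            frac = (g0 - level) / (g0 - g1)
            return 10 ** (log_c[i] + frac * (log_c[i + 1] - log_c[i]))
    return None                                              # no crossing in tested range

print(f"GI50 = {gi50(log_c, growth_pct):.2e} M")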
General
All chemicals were purchased from Sigma-Aldrich (Budapest, Hungary) and were used as received. Melting points were measured on a VEB Analytik Dresden PHMK-77/1328 apparatus (Dresden, Germany) and are uncorrected. IR spectra were recorded on Zeiss IR 75 and 80 instruments (Thornwood, NY, USA). NMR measurements were performed on a Bruker Avance III HDX 400 MHz NMR spectrometer equipped with a 31P-15N{1H-19F} 5 mm CryoProbe Prodigy BBO probe, a Bruker Avance III HDX 500 MHz NMR spectrometer equipped with a 1H{13C/15N} 5 mm TCI CryoProbe, a Varian VNMRS 600 MHz NMR System spectrometer, and a Bruker Avance III HDX 800 MHz NMR spectrometer equipped with a 1H-19F{13C/15N} 5 mm TCI CryoProbe (Bruker Corporation, Billerica, MA, USA). 1H and 13C chemical shifts are given on the delta scale in parts per million (ppm) relative to tetramethylsilane. One-dimensional 1H and 13C spectra and two-dimensional 1H-1H COSY, 1H-1H NOESY, 1H-13C HSQC, and 1H-13C HMBC spectra were acquired using pulse sequences included in the standard spectrometer software package (Bruker TopSpin 3.5, Bruker Corporation). NMR spectra were processed with Bruker TopSpin 3.5 pl 6 (Bruker Corporation, Billerica, MA, USA) and ACD/Spectrus Processor version 2017.1.3 (Advanced Chemistry Development, Inc., Toronto, ON, Canada). ESI-HRMS and MS-MS analyses were performed on a Thermo Velos Pro Orbitrap Elite (Thermo Fisher Scientific, Bremen, Germany) system. The ionization method was ESI, operated in positive ion mode. The protonated molecular ion peaks were fragmented by CID (collision-induced dissociation) at a normalized collision energy of 35-65%. For the CID experiments, helium was used as the collision gas. The samples were dissolved in methanol. EI-HRMS analyses were performed on a Thermo Q Exactive GC Orbitrap (Thermo Fisher Scientific, Bremen, Germany) system. The ionization method was EI, operated in positive ion mode. The electron energy was 70 eV, and the source temperature was set at 250 °C. Data acquisition and analysis were accomplished with Xcalibur software version 4.0 (Thermo Fisher Scientific). The reactions were followed by analytical thin layer chromatography (TLC) on DC-Alufolien Kieselgel 60 F254 (Merck, Budapest, Hungary) plates. Preparative TLC analyses were performed on silica gel 60 PF254+366 (Merck) glass plates. Column chromatography was carried out using silica gel 60 (0.040-0.063 mm) (Merck).
NCI60 Screening
A detailed description of the NCI screening procedures [30-34], including the one-dose and five-dose tests, can be found in the Supplementary Materials, on the website of the NCI [35], and in our previous work [26].
CellTiter-Glo Luminescent Cell Viability Assay on Non-Tumor CHO Cells
Compounds 20, 23, and 25 were dissolved in DMSO as 10 mM stock solutions and stored frozen until use. CHO cells were cultured in complete Dulbecco's Modified Eagle's Medium (DMEM, low glucose (1 g/L)) supplemented with 10% fetal bovine serum, 1% Gibco GlutaMAX-I (100×) solution, 1% Gibco MEM non-essential amino acid solution (100×), and 0.1% penicillin-streptomycin mixture. Cells were grown in T-25 cell culture flasks under standard cell culture conditions (37 °C, 5% CO2) and passaged at 70-80% confluency. The assay was performed according to the manufacturer's protocol, as described previously [36-38]. Briefly, CHO cells were seeded into opaque 96-well culture plates at a density of 5000 cells/100 µL complete DMEM/well. The side rows and columns of the plate were filled with sterile phosphate-buffered saline to avoid an edge effect. Following overnight incubation (37 °C, 5% CO2), the culture medium was aspirated from the cells and replaced with increasing concentrations of drug solutions (10−5, 10−6, and 10−7 M) diluted from the stock solution in sterile complete DMEM (100 µL/well). Untreated cells served as a control, and complete DMEM served as a luminescent background control. Drug-treated cells were incubated for 48 h (37 °C, 5% CO2) and then equilibrated to room temperature for 30 min. At the end of the incubation period, 100 µL of room-temperature CellTiter-Glo reagent was added to each well, and the plates were placed on an orbital shaker for 2 min and then incubated for an additional 10 min at room temperature without shaking. The luminescent signal was measured using an EnSpire AlphaLisa microplate reader (Perkin Elmer, Inc., Waltham, MA, USA). The normalized luminescent values of the treated cells were compared to those of the untreated control. The statistical analysis and calculation of IC50 values were performed in GraphPad Prism 8.0.1 (GraphPad, La Jolla, CA, USA). Estimated IC50 values were calculated by non-linear regression, fitting a sigmoidal dose-response curve to the data points.
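A rough sketch of that final fitting step is given below: a four-parameter logistic (sigmoidal) curve is fitted to viability-versus-concentration data and the IC50 is read from the fitted midpoint. The actual analysis in this work was performed in GraphPad Prism, so this is only an illustration of the same idea, and the viability values used here are invented placeholders rather than the measured CHO data.

import numpy as np
from scipy.optimize import curve_fit

# Placeholder dose-response data: viability (% of untreated control)
conc_M = np.array([1e-8, 3e-8, 1e-7, 1e-6, 3e-6, 1e-5, 3e-5])
viability_pct = np.array([100.0, 99.0, 98.0, 90.0, 70.0, 35.0, 10.0])

def four_pl(log_c, bottom, top, log_ic50, hill):
    """Four-parameter logistic curve on log10 concentration."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_c - log_ic50) * hill))

log_c = np.log10(conc_M)
p0 = [0.0, 100.0, -5.5, 1.0]                     # initial guesses
params, _ = curve_fit(four_pl, log_c, viability_pct, p0=p0, maxfev=10000)
bottom, top, log_ic50, hill = params
print(f"estimated IC50 = {10 ** log_ic50:.2e} M")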
Conclusions
The monomer Vinca alkaloid vindoline, which does not show any anticancer effect by itself, was coupled via different positions and linkers with N-substituted piperazine derivatives. The latter were chosen because the piperazine skeleton is a well-known pharmacophore, widely used in pharmaceutical research and in medicine for different indications. The substituents on the nitrogen atom of the piperazines were alkyl, aryl, aralkyl, and heterocyclic groups. The products were prepared using simple, three-step synthetic routes. Among the compounds synthesized, several derivatives presented significant and excellent antiproliferative activity in the in vitro NCI-60 cell line screening, especially the derivatives with N-[4-(trifluoromethyl)benzyl] and N-bis(4-fluorophenyl)methyl substituents on the piperazine ring. Compound 23 was identified as the most potent antitumor candidate, exhibiting a growth inhibition (GI50) value of 1.00 µM on the breast cancer MDA-MB-468 cell line. In addition, several other conjugates showed low micromolar GI50 values against most of the examined cell lines. It is important to emphasize the particularly valuable selectivity that the most effective compounds showed towards non-tumor cells, with compound 23 having an estimated half-maximal inhibitory concentration (IC50) of 10.8 µM. The results obtained in this study are promising for further development; in particular, with the involvement of other piperazines, a more complete SAR could be elaborated. For example, an exciting continuation of the work could be the synthesis and biological evaluation of the N-bis[4-(trifluoromethyl)phenyl]methyl analog, hybridizing the piperazine units of the two most effective derivatives (23 and 25). Furthermore, the elucidation of the mechanism of action of these types of molecules would also be an interesting scope of study. Finally, we would like to highlight that this study may have a significant impact on the design of new Vinca alkaloid-based antitumor agents.
Figure 3. Effect of compounds 20, 23, and 25 on the viability of native CHO cells. The percentage values of viable cells after 48 h of treatment are presented as mean ± SD of three independent experiments in technical triplicates (n = 9); one-way ANOVA, Dunnett's post hoc test; * p < 0.0001 vs. control.
2024-07-22T15:11:18.820Z
2024-07-01T00:00:00.000
{ "year": 2024, "sha1": "d68d7329330ce21044add10ff7c63c54bd5688e9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijms25147929", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ccdcc8b7fb1e0721297dee09c36d7388fabbfe0f", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [] }
255943273
pes2o/s2orc
v3-fos-license
Utrecht Gender Dysphoria Scale – Gender Spectrum in a Chinese population: scale validation and associations with mental health, self-harm and suicidality
Background: Individuals with gender dysphoria display an incongruence between birth-assigned gender and gender expression. However, there is no existing Chinese measure for gender dysphoria. Aims: This study aims to validate the Utrecht Gender Dysphoria Scale – Gender Spectrum (UGDS-GS) in a Chinese population, and to compare the psychometric properties of the UGDS-GS with one frequently used scale for gender dysphoria measurement, the Gender Identity/Gender Dysphoria Questionnaire for Adolescents and Adults (GIDYQ-AA). Method: A total of 2646 Chinese participants were recruited. The following information was collected: sociodemographic variables, gender identity, sexual orientation, gender dysphoria measured by the UGDS-GS and the GIDYQ-AA, anxiety, depression and suicide assessment. Principal component analyses and confirmatory factor analysis (CFA) were conducted to test the fitness of the model. Discriminant validity was tested with one-way analysis of variance. Results: The UGDS-GS showed good psychometric properties, with the GIDYQ-AA demonstrating slightly better psychometric properties than the UGDS-GS. The UGDS-GS also showed strong internal consistency (Cronbach's α = 0.89), and good convergent validity and criterion validity. Exploratory factor analysis showed a one-factor structure (Kaiser-Meyer-Olkin test, 0.93; χ2 = 13 342.50; d.f. = 153; P < 0.001). The UGDS-GS was positively associated with anxiety symptoms, depressive symptoms, suicidal ideation, attempted suicide and self-harm. We also found that the results were robust in different samples. Conclusions: The validated UGDS-GS can significantly stimulate and promote gender dysphoria assessment in Chinese populations, allowing for assessment in a more diverse subset of gender minorities.
Gender dysphoria has been a central focus in transgender healthcare. 1 It is well documented that individuals with gender dysphoria experience distress from multiple avenues, including in their personal, social and occupational lives. 2 The DSM-5 3 defines gender dysphoria as a marked incongruence between an individual's experienced gender and birth-assigned gender. However, not all people who experience gender dysphoria meet the DSM-5 gender dysphoria diagnosis, because not all individuals seek gender-affirmative treatment. 4 In addition, the standardised diagnostic instrument, the Structured Clinical Interview for the DSM, requires trained clinical professionals and is time-consuming, 5 which makes it difficult to apply efficiently in the general population. To achieve an in-depth understanding of gender dysphoria and promote the health of individuals with gender dysphoria, it is important to provide valid and reliable gender dysphoria assessment outside of the DSM diagnostic criteria. The Utrecht Gender Dysphoria Scale (UGDS) 6 and the Gender Identity/Gender Dysphoria Questionnaire for Adolescents and Adults (GIDYQ-AA) 7 are the two most widely used scales to assess gender dysphoria, each using two versions of the measure, one male and one female, based on birth-assigned gender.
Gender dysphoria assessment
The UGDS is a 12-item screening measure for gender dysphoria in both adults and adolescents. 6 The GIDYQ-AA is a 27-item scale for gender identity and gender dysphoria in both adolescents and adults. 7
7 Both the UGDS 4,8,9 and GIDYQ-AA [10][11][12][13] have been validated and widely applied in various settings, with different age groups. Furthermore, those two scales are significantly correlated with each other. 14,15 Recent research has moved beyond focusing on assigned gender and binary conceptualisation of transgender identity, to be inclusive of non-binary transgender identities. 12,16,17 However, non-binary people may feel uncomfortable responding to either a male or female version of the gender dysphoria scales based on their fluid identity. 18 In addition, researchers have noted that gender dysphoria scales with distinct male and female versions, such as the UGDS, are less than ideal for detecting gender dysphoria in a genderqueer or genderfluid individual. 4 Moreover, to support people with disorders of sex development (DSDs)/intersex conditions, instruments are required to specifically measure gender dysphoria taking nonbinary gender identity into account. 8 The DSM-5 defined DSDs as a specifier of gender dysphoria, that is, gender dysphoria with or without a DSD. 3 Researchers have commented that this change in the DSM-5 was unprecedented and saw DSDs subsumed under psychiatric disorders, with an emphasis placed on the psychiatric conditions of people with DSDs. 19 It is therefore necessary that suitable psychiatric and mental health measurements for people with DSDs, especially for gender dysphoria, are validated. The Utrecht Gender Dysphoria Scale -Gender Spectrum (UGDS-GS) is an adapted version of the original UGDS, which combines both versions of the UGDS to create a 18-item genderneutral measurement assessing gender dysphoria on a continuum spectrum. 18 The UGDS-GS reconstructed the original UGDS to provide more fluid movement along the gender spectrum, making it suitable to measure for gender dysphoria in non-binary individuals, individuals undergoing gender affirmation surgery and people with DSDs. The UGDS-GS is a newly developed scale, which has yet to be validated in other countries/languages. This study aimed to validate the UGDS-GS in a Chinese population, and examined the applicability of the UGDS-GS and the GIDYQ-AA for gender dysphoria. The two scales were compared in terms of gender dysphoria conceptualisation, psychometric properties and application in different groups, including transgender, non-binary, genderqueer, cisgender sexual minority and heterosexual individuals. We hypothesised that the Chinese version of the UGDS-GS would demonstrate the same factor structure as the English version, with good psychometric properties. We further hypothesised that the UGDS-GS and GIDYQ-AA would demonstrate different prediction properties in the different groups, especially for non-binary and genderqueer groups, and that the UGDS-GS would outperform the GIDYQ-AA in assessing gender dysphoria in non-binary and queer individuals. Finally, we hypothesised that there would be different predictions in the mental health outcomes of individuals with gender dysphoria. Participants and procedure This study was conducted from 26 October to 6 November 2020 in the Ningxia Province, China. Adolescent and young adults from local colleges were invited to complete an online survey by distribution of a questionnaire link on the platform 'Wenjuanxing', which provides a data collection function. All participants remained anonymous and participants were informed that they could withdraw from the survey at any time before submitting their responses. 
All participants provided informed consent before they completed the survey, which took on average 10-20 min to complete. A total of 2663 participants completed the survey; 17 samples were excluded owing to incomplete information, leaving 2646 (99.4%) study participants. The authors assert that all procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the Helsinki Declaration of 1975, as revised in 2008. All procedures involving human patients were approved by Research Ethics Review Committee of Central University of Finance and Economics, China (approval number: CUFE-20200930-0001). Participants' informed consent was signed online as written consent. Measures Sociodemographic characteristics and sexual orientation Sociodemographic characteristics included age, birth-assigned sex, ethnic group, residence type, family economic status, whether they were the only child in the family, any history of psychiatric disorders and medication status. Sexual orientation was assessed through gender identity and sexual attraction. Gender identity was measured by a single question: 'Which of the following best describes your gender?' Responses included six categories: male, female, transgender female, transgender male, non-binary and genderqueer. Sexual attraction was assessed by another question: 'Which of the following best describes your sexual attraction?'. Responses were classified into five categories: heterosexual, bisexual, homosexual (lesbian/gay), queer and other (e.g. asexual). Gender identity/dysphoria The UGDS-GS was used to measure the level of gender dysphoria. 18 It consists of 18 items on a five-point Likert scale ranging from 1 ('disagree completely') to 5 ('agree completely'). Example items include 'I prefer to behave like my affirmed gender' and 'Every time someone treats me like my assigned sex, I feel hurt'. All item scores were added to generate a total score, with a higher score indicating a higher degree of gender dysphoria. UGDS-GS is composed of two subscales: a 14-item dysphoria subscale and a four-item gender affirmation subscale. The adaptation of the Chinese version of the UGDS-GS was authorised by the author of the original English version. The process of translation followed the recommended procedures for cross-cultural scale adaptation. Initial translation was conducted by two bilingual native Chinese translators, synthesis of translation by a third bilingual Chinese translator, back translation by two bilingual native English speakers and then an expert review by several psychologists, psychiatrists and medical staff. We also conducted a pre-test with convenience sampling, before testing the final proposed measure. The GIDYQ-AA 7 was also used to measure gender dysphoria, to allow for comparison of the two scales on psychometric properties and actual application in Chinese populations. The GIDYQ-AA consists of a male version and a female version, with 27 items for each version. For the male version, it includes items such as 'In the past 12 months, have you felt satisfied being a man?' and 'In the past 12 months, have you disliked your body because it is male (e.g. having a penis or having hair on your chest, arms and legs)?' For the female version, example items include 'In the past 12 months, have you felt satisfied being a woman?' and 'In the past 12 months, have you disliked your body because it is female (e.g. having breasts or having a vagina)?' 
In this study, we reversecoded the 27 items to a new scoring that ranged from 1 ('never') to 5 ('always'), for easier understanding and statistical comparison with the UGDS-GS. We calculated the total score by adding all item scores together (Cronbach's α = 0.90), with a higher score indicating higher gender dysphoria. In our previous study, the recommended cut-off score was 48 for the Chinese version of the GIDYQ-AA. 20 Mental health outcomes Mental health-related indicators of anxiety symptoms, depressive symptoms, suicidal ideation, attempted suicide and self-harm were measured. Anxiety was measured with the seven-item Generalised Anxiety Disorder Scale (GAD-7), which is a selfreport screening scale used to measure anxiety symptoms. 21 It has been validated in China. 22 It is composed of seven items, and participants are asked to indicate the frequency of the occurrence of symptoms (e.g. 'feeling nervous, anxious or on edge', 'not being able to stop or control worrying') over the past 2 weeks on a fourpoint scale (0 = not at all, 1 = several days, 2 = more than half of the days, 3 = nearly every day). We calculated a composite anxiety score by summing all item scores (Cronbach's α = 0.93). Higher scores indicate more severe anxiety symptoms. The nine-item Patient Health Questionnaire (PHQ-9) was used to assess depressive symptoms. 23 Similar to the GAD-7, the PHQ-9 has been validated in the Chinese context. 24 It includes nine selfscreening items concerning the frequency of depressive symptoms over the past 2 weeks. For example, 'little interest or pleasure in doing things' and 'thoughts that you would be better off dead or of hurting yourself in some way'. Participants were asked to rate symptoms on a four-point scale, varying from 0 ('not at all') to 3 ('nearly every day'). All items are summed to generate a composite depression score (Cronbach's α = 0.92), with higher scores indicating more severe depressive symptoms. Suicidal ideation was assessed through a single question: 'How often have you had suicidal thoughts over the past 12 months?' Participants were asked to respond on a four-point scale (1 = never, 2 = once, 3 = twice, 4 = more than twice). Attempted suicide was measured by a single question: 'Have you ever attempted suicide?' The responses was rated on a four-point scale ranging from 1 ('never') to 4 ('more than twice'). Self-harm behaviours was also assessed by a single question (i.e. 'In the past 12 months, have you ever intentionally harmed yourself without wanting to die?'). Response options were rated on a six-point scale (1 = never, 2 = once, 3 = two to five times, 4 = six to ten times, 5 = 11-20 times, 6 = more than 20 times). Validation of the UGDS-GS All statistical analyses were conducted with the following Windows software: IBM SPSS version 23.0, Mplus version 8.3 (https://www. statmodel.com/) and R version 4.0.2 (https://cran.r-project.org/). Descriptive statistics were generated for each item score and the sociodemographic characteristics. To evaluate the construct validity of the two-factor UGDS-GS in China, we split the sample randomly half by half. An exploratory factor analysis (EFA) including half of the sample was conducted with principal component analyses (PCA) and direct oblimin rotation. A confirmatory factor analysis (CFA) including the other half of the sample was performed by maximum likelihood estimates, to confirm the fitness of the model derived from EFA. The goodness-of-fit model was evaluated by a number of statistics, i.e. χ 2 /d.f. 
ratio, root mean square error of approximation (RMSEA), comparative fit index (CFI), Tucker-Lewis index (TLI) and standardised root mean residual (SRMR). 25 Acceptable goodness-of-fit model parameters were defined as RMSEA < 0.08, CFI > 0.90, TLI > 0.90 and SRMR < 0.08. 26 To assess discriminant validity, group differences in the total UGDS-GS mean score were compared by one-way analysis of variance (ANOVA), using Scheffe's procedure as a post hoc test. Cronbach's alphas were calculated to check the reliability of the Chinese version of the UGDS-GS and the two subscales, with α = 0.80-0.90 indicating good internal consistency reliability and α > 0.90 indicating excellent internal consistency reliability. An item analysis was also performed to calculate corrected item-total correlation coefficients. To assess the criterion-related validity, Pearson correlations were calculated between the UGDS-GS score and the other mental health variables. Meanwhile, Pearson correlations between the UGDS-GS and the GIDYQ-AA were calculated to assess the convergent validity of the UGDS-GS. Sensitivity and specificity of the Chinese version of the UGDS-GS were assessed by receiver operating characteristic (ROC) curves. Based on Youden's index, the maximum value of J (sensitivity + specificity - 1) was calculated as the optimum cut-off score in the Chinese version. 27 The statistical significance level was set at a two-sided P value of 0.05 in this study. Comparison of the UGDS-GS and GIDYQ-AA To compare the psychometric properties and application of the UGDS-GS and the GIDYQ-AA in China, we compared the reliability, discriminant validity, criterion-related validity and ROC curves of the two scales. Cronbach's alpha was calculated to assess internal consistency reliability. Cohen's kappa coefficient was calculated to measure interrater reliability between the two scales, with κ < 0.40 defined as poor agreement, κ = 0.40-0.75 as fair to good agreement and κ > 0.75 as excellent agreement. 28 Discriminant validity of the two scales was compared by performing a one-way ANOVA and paired t-test among the different gender identity groups and the different sexual attraction groups. Criterion validity was compared by Pearson correlations between the UGDS-GS, the GIDYQ-AA and the mental health outcomes. According to Youden's J-statistic, the optimum cut-off scores of both the UGDS-GS and the GIDYQ-AA were calculated, and the corresponding sensitivity, specificity and area under the curve (AUC) were compared between the two scales. Sociodemographic characteristics A total of 2646 participants constituted the final sample (Table 1). The age ranged from 15 to 28 years (mean 19.30, s.d. = 1.20). The majority of the participants were birth-assigned female (65.6%), ethnic Han (54.7%), urban dwellers (72.8%), with moderate family economic status (66.0%), and they were not the only child in the family (83.4%). A total of 4.6% of participants had been diagnosed with psychiatric disorders, and 1.6% were on psychiatric medication during the survey. Psychometric properties of the UGDS-GS We first tested the ceiling and floor effects in the Chinese version of the UGDS-GS. 29 The total score ranged from 18 to 90. The results showed that 2.6% scored 18 and 0.3% scored 90 (both <15%), indicating that the Chinese version of the UGDS-GS did not demonstrate ceiling or floor effects, which indicated good sensitivity of this instrument. Construct validity Half of the sample was randomly chosen to conduct EFA.
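Before the factor-analytic results, a brief hedged illustration may help: the sketch below shows, on purely hypothetical item-level data, the random half-split used for the EFA/CFA and a by-hand Cronbach's alpha of the kind reported throughout this section. Variable names, sizes and values are illustrative assumptions, not the study's data.

```python
# Minimal sketch of two steps described above: the random half-split used before
# EFA/CFA, and Cronbach's alpha computed on summed item responses.
# All data, column names and sizes here are hypothetical, not the study's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
items = pd.DataFrame(rng.integers(1, 6, size=(2646, 18)),          # 18 five-point items
                     columns=[f"item_{i + 1}" for i in range(18)])

# Random half-split: one half for exploratory, the other for confirmatory analysis
half_efa = items.sample(frac=0.5, random_state=1)
half_cfa = items.drop(half_efa.index)

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1).sum()
    total_variance = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print(f"Cronbach's alpha, full 18-item scale: {cronbach_alpha(items):.2f}")
print(f"EFA half n = {len(half_efa)}, CFA half n = {len(half_cfa)}")
```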
Results of the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and the Bartlett test of sphericity showed that the data were suitable for EFA (KMO = 0.93, χ² = 13 342.50, d.f. = 153, P < 0.001). Using PCA and based on the criterion of eigenvalues being >1, three factors were extracted, accounting for 61.5% of the total variance. 30 To be consistent with the original two-factor structure, a fixed two-factor model was performed using PCA and oblimin rotation, with pairwise deletion of missing data. The two extracted factors explained 55.6% of the total variance. In addition, the original factor names (i.e. dysphoria, gender affirmation) were unchanged in the Chinese version. The dysphoria factor indicated distress about one's physical characteristics, expected behaviours and sense of self in the assigned gender; the gender affirmation factor indicated complete agreement with the benefits of living in the affirmed gender. 18 Table 2 shows the factor loadings and communality of each item. All items loaded on the two factors in the same way as the original version except for items 10 and 14, which both loaded on the gender affirmation factor in the Chinese version but on the dysphoria factor in the original version. To further confirm the rationality of the two-factor structure, we performed CFA with the maximum likelihood method. Based on the theoretical framework and model modification indices, we constructed a two-factor model, which showed fair fit (χ²/d.f. = 9.52, RMSEA = 0.080, CFI = 0.924, TLI = 0.908, SRMR = 0.074). The Chinese version of the UGDS-GS kept the same factor loadings as the original scale, except for item 5, which loaded on the dysphoria factor. To test the robustness of the two-factor model, we re-conducted CFA with another sample, and found that the results were robust in these different samples (see Supplementary Material available at https://doi.org/10.1192/bjo.2022.617). Discriminant validity We verified the discriminant validity of the UGDS-GS for gender identity and sexual attraction. No significant differences were observed between the natal males and natal females (P = 1.0), transgender females and transgender males (P = 0.52), or non-binary and genderqueer groups (P = 0.99); thus, we tested three groups: cisgender, transgender and non-binary/genderqueer. Levene's test of homogeneity of variances showed that the variance was homogeneous (P = 0.49), so we used one-way ANOVA and Scheffe's post hoc test to compare the total UGDS-GS scores among the three gender identity subgroups (Fig. 1(a)). Similarly, we compared the total scores among the three sexual attraction subgroups (Fig. 1(b)). Results of the ANOVA showed that the total score of the UGDS-GS was significantly different among the three gender identity subgroups (F(2, 2643) = 14.04, P < 0.001). Scheffe's post hoc test showed that the transgender group (mean 51.21, s.d. = 10.51) had significantly higher gender dysphoria scores than the cisgender group (mean 44.28, s.d. = 11.81; P < 0.001). No significant differences were found between the non-binary/genderqueer group (mean 48.79, s.d. = 11.48) and the cisgender group (P = 0.06), or between the non-binary/genderqueer group and the transgender group (P = 0.60). Furthermore, the results showed that there was a significant difference among the three sexual attraction subgroups (F(2, 2643) = 17.44, P < 0.001).
The heterosexual group showed significantly lower UGDS-GS scores than the sexual minority groups. As shown in Table 2, the trend of the gender identity group difference for most items was equivalent to the group differences for the total UGDS-GS scores, except for a few items (i.e. items 1, 3, 4, 11, 13, 14, 16 and 18). Reliability The reliability of the Chinese version of the UGDS-GS was good (Cronbach's α = 0.89). The internal consistency and reliability of the two subscales were also calculated, with the dysphoria subscale having excellent reliability (Cronbach's α = 0.91) and the gender affirmation subscale having good reliability (Cronbach's α = 0.83). Comparison between the UGDS-GS and GIDYQ-AA The total score of the Chinese version of the GIDYQ-AA ranged from 27 to 135, with no ceiling (0% scored 135) or floor effects (5.3% scored 27). The mean total scores of the UGDS-GS and GIDYQ-AA were 44.53 (s.d. = 11.84) and 44.51 (s.d. = 14.14), respectively. The Cronbach's alpha was 0.89 for the Chinese version of the UGDS-GS and 0.90 for the GIDYQ-AA, which suggested that both scales had good reliability. Using the optimal cut-offs of 46 for the UGDS-GS and 48 for the GIDYQ-AA, there was a significant but poor agreement between the two scales (κ = 0.32, P < 0.001). We verified the discriminant validity of the GIDYQ-AA with both gender identity and sexual attraction (Fig. 3). In terms of gender identity, the ANOVA showed that the total GIDYQ-AA scores were significantly different among the three gender identity subgroups (F(2, 2643) = 67.43, P < 0.001): the transgender group (mean 61.84, s.d. = 17.71) showed significantly higher gender dysphoria than the cisgender group (mean 43.89, s.d. = 13.67; P < 0.001), and the non-binary/genderqueer group (mean 54.97, s.d. = 14.79) showed significantly higher dysphoria than the cisgender group (P < 0.001), but there was no significant difference between the transgender and non-binary/genderqueer groups (P = 0.099). It is worth noting that the non-binary/genderqueer group showed significantly higher dysphoria than the cisgender group (P < 0.001) on the GIDYQ-AA, but this difference was not significant (P = 0.060) on the UGDS-GS. As for sexual attraction, the heterosexual group (mean 43.19, s.d. = 13.42) demonstrated significantly lower GIDYQ-AA scores than both the bisexual/homosexual group (mean 55.07, s.d. = 15.18; P < 0.001) and the queer/other group (mean 53.39, s.d. = 15.77; P < 0.001). No significant difference was observed between the bisexual/homosexual and queer/other groups (P = 0.704). The results were consistent with those of the UGDS-GS. The total scores of the UGDS-GS were positively associated with those of the GIDYQ-AA (r = 0.29, P < 0.001) (Table 3). Similar to the UGDS-GS, the GIDYQ-AA score was positively associated with anxiety symptoms (r = 0.24, P < 0.001), depressive symptoms (r = 0.28, P < 0.001), suicidal ideation (r = 0.14, P < 0.001) and attempted suicide (r = 0.12, P < 0.001). Figures 2(b) and 2(c) show the ROC curves of the UGDS-GS and GIDYQ-AA for gender identity and sexual attraction, respectively. Comparison of AUC statistics showed that the GIDYQ-AA had better diagnostic power than the UGDS-GS for both gender identity (AUC GA = 0.79, AUC UG = 0.66) and sexual attraction (AUC GA = 0.74, AUC UG = 0.62). The optimum cut-off score of the GIDYQ-AA was 48, based on the maximum of Youden's index (J = 0.47), which was consistent with previous studies.
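As a hedged illustration of how a Youden-index optimum cut-off like the one above can be located, the following sketch uses scikit-learn's roc_curve on hypothetical scale scores and a hypothetical binary criterion (self-reported group membership); none of the numbers reproduce the study's data.

```python
# Hypothetical illustration of locating an optimum cut-off via Youden's J
# (J = sensitivity + specificity - 1). 'scores' and 'is_case' stand in for scale
# totals and a binary criterion; the sample sizes and distributions are invented.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(44, 14, 2500),    # e.g. cisgender respondents
                         rng.normal(62, 18, 146)])    # e.g. transgender respondents
is_case = np.concatenate([np.zeros(2500), np.ones(146)])

fpr, tpr, thresholds = roc_curve(is_case, scores)
j = tpr - fpr                                   # Youden's J at each candidate threshold
best = np.argmax(j)
print(f"AUC = {roc_auc_score(is_case, scores):.2f}")
print(f"optimum cut-off = {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```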
On the basis of the optimum cut-off scores, the GIDYQ-AA (sensitivity 0.76, specificity 0.71) had higher sensitivity and specificity than the UGDS-GS (sensitivity 0.69, specificity 0.56). Discussion This is the first validation study for the UGDS-GS, which also compared the psychometric properties of the GIDYQ-AA and the UGDS-GS. The Chinese version of the UGDS-GS demonstrated good psychometric properties, with high internal reliability and good validity. The Chinese version of the UGDS-GS showed a consistent two-factor structure (i.e. dysphoria, gender affirmation) that was the same as the original scale, with slight deviations in the item loadings on the factors. Contrary to our hypothesis, the UGDS-GS did not outperform the GIDYQ-AA in assessing gender dysphoria in non-binary and queer individuals. Our results showed that the GIDYQ-AA was more sensitive in detecting gender dysphoria differences between the non-binary/genderqueer and cisgender groups, outperforming the UGDS-GS. The current study continued the novel contributions of the UGDS-GS in gender dysphoria research, further expanding knowledge on gender dysphoria in non-binary, queer, LGBTQ and cisgender heterosexual people in a Chinese population. Both the GIDYQ-AA and the UGDS-GS demonstrated good psychometric properties, with the GIDYQ-AA showing relatively better psychometric properties than the UGDS-GS. However, the UGDS-GS offers a gender-neutral measurement of gender dysphoria, which has special application value for groups such as non-binary individuals. Non-binary individuals identify differently from the traditional female and male binary categories; they may identify with both genders, outside the gender binary or with no gender. 31,32 Although approximately a third of transgender individuals identify as non-binary, 32 recent research indicates that non-binary individuals experience gender dysphoria in a unique way. 12 Research highlights that clinical definitions of gender dysphoria have primarily centred on a gender binary conceptualisation, and that gender dysphoria assessments reflecting non-binary experiences are needed. 12 From this perspective, the UGDS-GS has historical importance for trying to capture the gender dysphoria experiences of non-binary individuals. The results showed that transgender individuals had significantly higher scores on the UGDS-GS than cisgender individuals; however, no significant difference was found between the cisgender and non-binary/genderqueer groups, or between the transgender and non-binary/genderqueer groups. That is, although we could not capture a significant difference in gender dysphoria experience between the transgender and non-binary/genderqueer groups, the results showed that the level of gender dysphoria experienced differed across the three groups. Moreover, the UGDS-GS was also suitable for individuals after gender-affirmative care, such as gender confirmation surgery, or someone in the process of transitioning (if that is their goal). In addition, the factor loadings of the original version and the Chinese version also showed slight differences. Item 5, 'A life in my affirmed gender is more attractive for me than my assigned sex', loaded on the gender affirmation subscale in the original version, but loaded on the dysphoria subscale in the Chinese version. In addition, the current results also showed that, compared with the dysphoria subscale, the gender affirmation subscale had relatively lower scores relative to the total score.
This could be because of the differences in social contexts and cultural environment. In Chinese society, the transgender group is marginalised and faces considerable social discrimination. 17,33 As a result, being able to live in the affirmed gender could be more important for decreasing gender dysphoria, rather than having a sense of gender affirmation. A previous study in Finland showed that adolescent boys were more likely to have gender dysphoria than girls. 13 However, the current study results did not find significant differences between males and females in both the cisgender and transgender groups. Another previous study indicated that adolescents referred for gender dysphoria are more likely to have emotional problems than non-referred individuals. 34 This study's results were consistent with previous research and showed gender dysphoria was significantly positively associated with anxiety symptoms, depressive symptoms, suicidal ideation and suicide attempt. Research indicates a noticeable difference in the mental health problems of transgender people, which could be a consequence of stigma and minority stress. 1,35,36 These results showed that when compared with the heterosexual group, the sexual minority groups experienced higher levels of gender dysphoria. This could be because of gendered stereotypes that aim to categorise gender into specific social roles, 37 and the incongruence with expected social roles in the sexual minority group. Several limitations in this study need to be noted. First, the participants in the current sample were young, which is not representative for all age groups. We recommend that the UGDS-GS should be further validated in different age groups. Second, marital status was not collected; however, the mean participant age was 19.3 years. The legal age for marriage in China is 22 years for males and 20 years for females. Marriage is not prevalent during the college period in China. 38 Thus, almost all participants were unmarried in this study and future research should aim to explore marital status. Third, for the sensitivity and specificity of the UGDS-GS, we used the self-reported gender identity rather than a gender dysphoria clinical diagnosis. However, according to Ashley, 1 a diagnosis of gender dysphoria can pathologize gender dysphoria, and diagnosis should not be clinically required to access transitionrelated affirmative interventions as gender identities develop naturally, and transgender identities are non-pathological. Furthermore, a screening tool, such as the UGDS-GS, can measure distress, which may be more cost-effective than a diagnosis because there are a limited number of psychiatrists fluent in Chinese (as part of the limited target medical services available), and few medical facilities provide transgender care. 39 Fourth, we did not measure the onset age of gender dysphoria. Research has indicated that individuals with an early onset of gender dysphoria could experience higher gender dysphoria than individuals with a late onset, because individuals with late onset are usually older and potentially better at coping with distress. 14 Future studies should investigate the influence of gender dysphoria onset and level of gender dysphoria by using the UGDS-GS, to further explore the UGDS-GS measurement. In conclusion, the Chinese version of the UGDS-GS demonstrated good psychometric properties, and showed the association between gender dysphoria and mental health problems. 
This Chinese version of the UGDS-GS provided the first validated gender-neutral assessment of gender dysphoria for use in Chinese populations, which could promote the understanding of gender dysphoria assessment and transgender research in China and in Chinese-speaking populations around the world. Funding None. Declaration of interest None.
2023-01-18T14:04:52.009Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "1c5d863336ee22731a1e4a853de184b8a3268ad9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Cambridge", "pdf_hash": "1c5d863336ee22731a1e4a853de184b8a3268ad9", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
211526385
pes2o/s2orc
v3-fos-license
Spatial–Temporal Evolution of Drought Characteristics Over Hungary Between 1961 and 2010 Historically, Hungary has witnessed numerous waves of drought episodes, causing significant agro-economic loss. Over the recent decades, the intensity, severity and frequency of drought occurrence have dramatically shifted, with undisputable upward tendencies across many areas. Thus, the main aim of this study was to characterize drought trends, intensity and duration over Hungary during 1961–2010. To attain the study goals, the present analyses utilized climate datasets obtained from Climate of the Carpathian region project-CARPATCLIM for 1045 gridded points covering entire Hungary. Meanwhile, a well-known drought index, namely; standardized precipitation index (SPI) and the standardized precipitation evapotranspiration index (SPEI) at 12-month timescales were employed for drought characterization. Furthermore, the sub-set regions of drought in Hungary were identified using S-mode of the principal component analysis. The Mann–Kendall trend test analysis showed a significant negative SPI-12 trend (P < 0.05) in 11.5% of the total points over the western part of Hungary. In comparison, 43.2% of the total numbers of the SPEI-12 time series gridded points showed a significant negative trend (P < 0.05) over the similar locale. However, both indices’ trends highlighted the fact that the northeastern region is less sensitive to drought despite experiencing the highest of total drought duration. Results also suggested that the SPI-12 indicates that no significant change can be detected from 1961 to 2010 over Hungary. In contrast, the SPEI-12 exhibits that the drought waves that hit Hungary were more pronounced, with a significant positive (P < 0.05) trend of + 1.4% per decade being detected for the area affected by very extreme drought. All in all, this study is one of the primary steps toward a better understanding of drought vulnerability assessment in Hungary. Introduction Recently, European countries have been warming rapidly in comparison with many other parts of the world (Hernández-Morcillo et al. 2018). Moreover, the European Environmental Agency EEA (2017) indicated that the average temperature from 2006 to 2015 increased by 1.5°C, warmer than the pre-industrial level, while heat-waves increased in frequency and length. On the other hand, precipitation has recorded a significant decline in recent times in the southern parts of Europe (Vicente-Serrano et al. 2014), while the intensity and frequency of rainstorms in the northern parts of the continent have increased. Interestingly, drought episodes have become more vigorous and longer, associated with increased temperature and low rainfall, especially in the center of the Europe (i.e. Hungary) (Bussay and Szinell 1996;Bartholy et al. 2013;Kern et al. 2016). Historically, drought episodes have hit Europe many times, causing major damage in the economic and agricultural sectors (Vicente-Serrano et al. 2014), with the yearly impact reaching 5.3 billion € in Europe since 1991 (Feyen and Dankers 2009). Interestingly, the European Environment Agency reported that over the past 30 years 17% of the European Union's lands area has been were affected by water scarcity, resulting in losses of 100 billion € (EEA 2009). Hungary is one of the European countries located in the Carpathian region affected by drought episodes and climate change, as are other countries (Gálos et al. 2007). 
In the last few decades, specifically since the 1980s, drought has become a recurrent feature of Hungary's climate, and it seems the drought trends will extend towards the end of the 21st century (Gálos et al. 2007). Bartholy et al. (2013) predicted a significant change in the future rainfall patterns of Hungary (i.e. 2070-2100), with the summer becoming drier. Similarly, Blanka et al. (2013) highlighted the positive future trend of drought in Hungary, where the Great Hungarian Plain will be subjected to an increase in the drought hazard by the end of the 21st century, which will gravely affect the agricultural system. In a similar vein, Sábitz et al. (2014) emphasized the remarkable drought trends in the Carpathian region, where the necessary steps toward drought mitigation should be taken. Interestingly, Kertész (2016) argued that Hungary is more susceptible to desertification due to different factors such as decreasing precipitation and increasing temperature associated with extreme events, as a direct result of climate change. Historically, between 1983 and 1995 drought episodes over Hungary were responsible for 36% of all agricultural losses (Szinell et al. 1998; Szalai et al. 2000). Moreover, the drought of 2003 caused more than 55 billion HUF of economic damage, with the temperature breaking the previous record and reaching 45°C in the national average (Puskás et al. 2012). Fiala et al. (2014) reported a significant yield loss of between 40 and 50% in the southern Great Plain of Hungary, due to drought and heatwaves. In recent decades, many studies have been conducted to investigate drought trends and their relation to other variables in Hungary (e.g. Szabó et al. 2018; Kern et al. 2016; Mika et al. 2005; Horvath et al. 2005). To the best of our knowledge, none of them have applied either the SPI or the SPEI as a main indicator of drought. Therefore, this study is the first of its kind to provide a spatially explicit study of drought characterization in Hungary, with special emphasis on drought at the sub-regional scale as a basis for any future climate change adaptation and resilience plans. All in all, the main objectives of this research were to: (1) identify regions in Hungary with the same temporal variability of drought, by using the SPI-12 and the SPEI-12 for 1045 gridded points covering the majority of Hungary; (2) identify regions exposed to drought; and (3) highlight the areas most susceptible to drought in Hungary between 1961 and 2010, by examining the temporal evolution of the area affected by different drought categories. The remainder of this study is organized as follows: Sect. 2 describes the study domain, data and methods employed, Sect. 3 presents the results and discussion, and Sect. 4 gives conclusions and possible recommendations for future study. Study Area and Data Collection Hungary is located in the center of Europe between latitudes 45°55′N-48°60′N and longitudes 16°10′E-22°50′E, covering an area of 93,000 km². The climate is characterized as continental, in which winters (December, January, February) are cold and snowy, and summers (June, July, August) are hot and dry (Hungarian Meteorological Service: https://www.met.hu/en/idojaras/). Generally, the climate of Hungary is influenced by its location in the Carpathian Basin, and can be characterized as a continental climate, meaning warm and dry summers, and cold and wet winters (Ács et al. 2015; Breuer et al. 2017).
According to Szabó et al. (2018), rainfall in Hungary is governed by three factors: (1) the Mediterranean factor in the eastern part, (2) the oceanic factor in the western part, and (3) continental factors in the Great Hungarian Plain; thus, any changes in the world climate will have an effect on precipitation patterns in Hungary. The landscape can be divided into plains (the Kisalföld and the Great Hungarian Plain), hills (the Transdanubian Hills) and mountains (the Transdanubian Mountains, the Northern Hungarian Mountains and the Alpine foothills (Alpokalja), the latter omitted from this study due to lack of data) (Szabó et al. 2018). The rainfall data (monthly and yearly) from 1961 to 2010, as well as the SPI-12 and SPEI-12 time series, were collected from the Climate of the Carpathian Region project CARPATCLIM (CARPATCLIM 2019). The project was financed by the European Commission (Szalai and Vogt 2011) and developed by several institutions from nine countries in the Carpathian region, jointly with the European Commission's Joint Research Centre (JRC). The final output of the project was the climate atlas of the region. Data from 1045 gridded points covering the majority of the country were used, as can be seen in Fig. 1. This gridded database has a spatial resolution of 0.1° × 0.1° (10 km × 10 km) and was interpolated from a dataset of meteorological stations. The grids represent the SPI/SPEI values computed over a twelve-month period across the study area in the form of a two-dimensional array. The data were obtained without further homogeneity checks, since the homogeneity and quality were ensured by the CARPATCLIM team (2012) (i.e. Bihari and Szentimrey 2013; Spinoni et al. 2015). Drought Indicators: the SPI and SPEI Although precipitation is a critical indicator of water availability, precipitation and temperature together play an important role in the availability and stability of water. They affect urban, agricultural and ecosystem water supplies, as well as agricultural production and forest stress, by controlling the ratio of actual to potential evapotranspiration (Zhang et al. 2019; Chang et al. 2018; Novick et al. 2016; Williams et al. 2013). Several parameters, such as rainfall, temperature, soil moisture, streamflow, river discharge, vegetation condition and ecosystem responses, can be used as indicators of drought (Vicente-Serrano et al. 2010; Narasimhan and Srinivasan 2005; Nalbantis and Tsakiris 2009; Sohrabi et al. 2015; Jiao et al. 2016; Anyamba and Tucker 2012; Chang et al. 2018). These indicators are transformed into multiscalar drought indices, which reflect the different characteristics of drought (Vicente-Serrano et al. 2010). In this research we used both the SPI and the SPEI for drought monitoring over Hungary (Vicente-Serrano et al. 2010; McKee et al. 1993). The SPI is based only on monthly rainfall data, so geographical and topographical differences are not considered (Mathbout et al. 2018). Meanwhile, the SPEI is a newer index developed from the same background as the SPI but based on rainfall and potential evapotranspiration (PET), i.e. the monthly climatic water balance (Vicente-Serrano et al. 2010; Wang et al. 2015; Tan et al. 2015). Both are statistical indices and can be calculated for any time scale (i.e. 1-, 3-, 6-, 9- or 12-month time scales). The choice of the time scale is, in practice, dependent on the goal of the study (an illustrative sketch of the SPI computation itself is given below).
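As a purely illustrative aside, the sketch below shows how a 12-month SPI value can be derived from a monthly rainfall series by fitting a gamma distribution to the accumulated totals and mapping the cumulative probabilities onto standard normal quantiles. It is an assumption-laden simplification (synthetic rainfall, a plain two-parameter gamma via scipy, no treatment of zero-rainfall months), not the shifted-gamma CARPATCLIM implementation used in this study.

```python
# Simplified, illustrative SPI-12 computation for a single grid point.
# Synthetic data only; the CARPATCLIM product uses a shifted gamma distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
monthly_rain = rng.gamma(shape=2.0, scale=30.0, size=50 * 12)   # 50 years of mm/month

# 12-month running accumulation, as used for the SPI-12
accum = np.convolve(monthly_rain, np.ones(12), mode="valid")

# Fit a gamma distribution to the accumulated series, then map its cumulative
# probabilities onto standard normal quantiles: that z-score is the SPI value.
shape, loc, scale = stats.gamma.fit(accum, floc=0)
spi12 = stats.norm.ppf(stats.gamma.cdf(accum, shape, loc=loc, scale=scale))

print(f"SPI-12 range: {spi12.min():.2f} to {spi12.max():.2f}")
```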
If the goal is related to agricultural drought, then a 1-, 3- or 6-month scale should be chosen, while a 9- or 12-month scale is used for monitoring hydrological drought (Tan et al. 2015). In our study we used the SPI-12 and SPEI-12 time scale for detecting droughts over a long-term interval. SPI and SPEI values for drought can be classified as shown in Table 1. Positive values indicate wet conditions, while negative values indicate drought conditions (less than median rainfall) (Bordi and Sutera 2001). Interestingly, the SPEI is superior to the SPI in terms of drought characterization and climate change monitoring, due to the fact that the SPEI takes into consideration both temperature and soil moisture content (used to compute PET) (Spinoni et al. 2013; Li et al. 2012). Nevertheless, it is important to mention here that the data obtained (i.e. SPEI-12 and SPI-12) were computed by fitting a Gamma distribution (shifted version) for easy comparison between the SPI and the SPEI. Pearson III and log-logistic distributions were similarly applied; however, the shifted Gamma distribution method was chosen to compare the SPI and the SPEI in the best way (Spinoni et al. 2013). The well-known non-parametric Mann-Kendall (MK) statistical test (Kendall 1975; Mann 1945) is frequently used to detect trends in hydro-meteorological time series (i.e. rainfall, temperature, drought indicators, etc.) (Tan et al. 2015; Tian and Quiring 2019). (Figure 1 caption: The study area and gridded points from the CARPATCLIM dataset. Table 1 caption: Drought categories based on Agnew's scheme (2000); columns give SPI and SPEI values by drought category, from mild to very extreme drought.) The MK test is used because it is not affected by outliers and is robust for trend detection with non-normally distributed temporal data (Önöz and Bayazit 2003). Further information about the M-K test can be found in Kumar et al. (2009). In our study we used the MK test to detect whether there are statistically significant increasing or decreasing trends in the SPI-12 and SPEI-12 time series at the 95% confidence level. As many scholars (e.g. Tan et al. 2015; Tian and Quiring 2019) indicate, the results of an MK test may be affected by autocorrelations of the time series. Thus, pre-whitening was applied before conducting the MK test to reduce the effects of the autocorrelation on the trend detection. This method removes the influence of serial correlation from the time series (Yue and Wang 2002). Herein, this procedure produced the same result. On the other hand, the trend magnitude, or the extent of change of the SPI and SPEI time series, was determined using the Theil-Sen slope estimator (Theil 1950; Sen 1968). In the final step, the results of the MK trend test for the SPI and SPEI time series were imported into ArcMap software to produce the spatial distribution of the gridded points that show significant trends, as well as the decadal changes of the SPI/SPEI-12. Temporal and Spatial Variability of Drought In order to identify the common temporal variability and patterns of drought, principal component analysis (PCA) in S-mode was applied to the SPI-12 and SPEI-12 time series (Rencher 1998). PCA is a non-parametric multivariate technique which reduces the observed variables to a few newly reproduced representative data values called principal components (PCs). The first PC has the highest variance, the second represents the second highest, and so on. Thus, the leading PCs contain the highest share of the total variance.
Such a transformation is linear and depends on the eigenvectors of a covariance or correlation matrix (Mathbout et al. 2018; Xie et al. 2013). In order to produce more localized spatial regions, the Varimax orthogonal rotation method was applied to the 'loadings' (the correlation matrix between the SPI time series at single stations and the corresponding PCA), because it simplifies the structure of the patterns by forcing the value of the loading coefficients towards zero or ±1 (Hannachi et al. 2007; Raziei et al. 2009; Tian and Quiring 2019). In the next step, the loading scores were illustrated using ArcGIS software as a thematic map (i.e. after rotation, each SPI/SPEI grid point was assigned to the PC on which it has the highest correlation or loading). The final step involved converting the gridded vector points to a raster layer at the same spatial resolution, without using interpolation techniques. Total Drought Duration (TDD) and Spatial Extent Drought duration is defined as the total number of months when the SPI/SPEI is less than 0 for a specific continuous period (Wang et al. 2014; Tan et al. 2015), while the total drought duration (TDD) is expressed for all drought events where the SPI/SPEI is < 0 over the whole study period N_i, or for different drought categories (i.e. to study drought frequencies of different intensities) (Guo et al. 2017; Fang et al. 2018). For example, a mild TDD is computed from the total number of months in which -0.84 < SPI/SPEI < -0.5, either for the whole studied period or for a short period (i.e. a specific drought episode), as follows: TDD_i (%) = (n_i / N_i) × 100, where n_i is the number of drought months (events), N_i is the total number of months for the study period, and i is a studied location. The drought-prone areas were examined by the percentage of the number of drought locations in the total study area (%) for different drought categories. Therefore, it indicates the percentage of area affected by drought (Li et al. 2012; Tan et al. 2015) as follows: SEoD_i (%) = (m_i / M_i) × 100, where SEoD is the spatial extent of the drought, i is a month, m_i is the number of drought points for which the SPI/SPEI is < 0 (or within a specific intensity class) in month i, and M_i is the total number of points included. Drought: SPI-12 and SPEI-12 Trends To track drought episodes in Hungary, the SPI-12 and the SPEI-12 were analyzed for the 1045 gridded points by using the M-K test. Results indicate a significant negative SPI-12 trend (P < 0.05) at 121 gridded points (11.5% of the total points) over the western part of Hungary (Fig. 2), while 359 gridded points (34.4% of the total points) show a significant positive trend (P < 0.05) over the eastern part; nonetheless, no trend was detected at the rest of the studied points. Although the total number of SPEI-12 time series gridded points with a significant trend was 457 (i.e. 43.7% of the total points), only 5 points had a significant positive trend, and 452 (98.9%) showed a significant negative trend (P < 0.05) (Fig. 2). Noticeably, Fig. 2 depicts the dynamic role of temperature through the SPEI-12 index, which contributed appreciably to amplifying and magnifying drought over Hungary. Strictly speaking, negative changes per decade (i.e. a drought tendency) extended towards the western part of Hungary and covered almost 60% of the territory. Nonetheless, the significant negative SPI-12 and SPEI-12 trends remained concentrated in the eastern and middle parts of the country.
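To make the trend methodology above concrete, the following hedged sketch applies a hand-rolled Mann-Kendall test and scipy's Theil-Sen slope to a synthetic annual SPEI-12 series for a single grid point; the pre-whitening step described above is omitted, and all numbers are invented for illustration only.

```python
# Hedged sketch of the Mann-Kendall trend test and Sen's (Theil-Sen) slope applied
# to a synthetic SPEI-12 series for one grid point. Ties are ignored in the
# variance term and no pre-whitening is performed, unlike the study's workflow.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = np.arange(1961, 2011)
spei12 = -0.02 * (years - 1961) + rng.normal(0.0, 0.8, years.size)  # weak drying + noise

def mann_kendall(x):
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18            # variance of S, no ties assumed
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))              # two-sided p-value
    return z, p

z, p = mann_kendall(spei12)
slope, intercept, lo, hi = stats.theilslopes(spei12, years)
print(f"MK Z = {z:.2f}, p = {p:.3f}; Sen's slope = {slope * 10:+.3f} per decade")
```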
Drought: SPI-12 and SPEI-12 Spatial Pattern in Hungary The PCA was applied to the SPI-12 and SPEI-12 time series (in geoscience, this analysis is also called empirical orthogonal function (EOF) analysis). Following this, the first six components (PCs) account for 91.7% and 92% of the total variance for the SPI-12 and the SPEI-12, respectively (Figs. 3, 4). For the SPI, the first principal component (PC1) makes the largest contribution to the total variance (28.5%), followed by PC2 (24.8%), then the rest of the PCs (PC3, PC4, PC5 and PC6), which contributed 18.9%, 17.2%, 1.4% and 0.9%, respectively (Fig. 3). Figure 5 demonstrates the spatial distribution of the rotated loadings for each drought index (i.e. the SPI and the SPEI), suggesting that Hungary is composed of six different sub-sets characterized by different drought variability. However, PC1 dominates in the western part of Hungary, which covers the Transdanubia region (Central Transdanubia, Western Transdanubia, Southern Transdanubia). Interestingly, this pattern also appears in Fig. 2, where a significant negative trend was found according to the M-K test (see Figs. 2, 5, PC1). (Figure 2 caption: The SPI/SPEI-12 Sen's slope estimator (changes per decade) and its trends (M-K statistical test) for Hungarian territory from 1961 to 2010; black points had a statistically significant trend at P < 0.05.) On the other hand, PC2 dominates in the eastern part of Hungary (Northern Hungary and Northern Great Plain), as is shown in Fig. 5, where it explained about 8.9% of the SPIs' total variance. This pattern covers the subregion that has a significant positive trend according to the M-K test (Figs. 2, 5, PC2). PC3 dominates in the southern part of the country (Southern Great Plain), and PC4 in the northern part (Central Hungary), while PC5 features in the southern part of the Southern Transdanubia region (near the Croatian border), and PC6 covers the city of Győr, in the far northwest near the Slovakian border. As is shown in Fig. 5, PC1 dominates in the western part of Hungary, which covers the Transdanubia region (Central Transdanubia, Western Transdanubia, Southern Transdanubia), while PC2 dominates in the southern part (Southern Great Plain) and PC3 in the northern part (Northern Hungary). PC4 dominates in the eastern part of Hungary (Northern Hungary and Northern Great Plain), PC5 in the southern part of the Southern Transdanubia region (near the Croatian border), while PC6 covers Central Hungary. To investigate further the vulnerability of the various sub-regions to different classes of drought, the spatial-temporal evolution of both the SPI-12 and the SPEI-12 and the occurrence of drought were assessed from the PC scores during the period 1961-2010, as can be seen in Figs. 6 and 7. Following this, the results showed different susceptibilities to drought among different components for both indices. Total Drought Duration (TDD) Analysis In this part of the research, the spatially distributed percentage of the total drought duration (TDD) on a 12-month time scale was calculated, in order to study the different intensities of drought frequency (see Table 1). Figure 8 illustrates the spatial distribution of total drought (%) for different drought categories (i.e. no drought, mild, moderate, severe and extreme) on a 12-month time scale. The spatial distribution of SPI-12-based TDDs indicates that very extreme droughts tend to occur in the southwest of the study area (i.e.
Somogy and Baranya) and in the western part of the study area (Veszprém), with values of about 5-7%, while most of the central parts and the far east of the country are characterized by lower drought frequencies (about 1.5-3%) for the same category. Interestingly, the very extreme TDDs have a quite distinctive spatial pattern and high spatial variation, whereas the extreme and severe droughts tend to occur in the central study area (i.e. Jász-Nagykun-Szolnok and Pest) and in the western part of the study area (Veszprém), and also in the eastern parts of Bács-Kiskun, with values of about 7-9% for extreme TDDs and 14-16% for severe TDDs. Noticeably, the mild and moderate TDD values account for the majority of the total drought duration, occurring more frequently over the northern and south-western parts of the studied area, but they have a random spread pattern. On the other hand, the spatial distribution of SPEI-12-based TDDs indicates that very extreme droughts tend to occur in the southwestern (i.e. Somogy and Baranya) and central parts of the study area (Jász-Nagykun-Szolnok and Pest), as well as in the western part (Veszprém), with values of about 7-8%. One of the most striking characteristics is that both indices have the same spatial pattern and dominate in almost the same subregions. Figure 9 indicates a significant positive correlation (r = 0.59, P < 0.05) between the different types of drought calculated on the basis of the TDD-SPI-12 and TDD-SPEI-12 time series over Hungary. Nevertheless, it is worth mentioning here that severe and very extreme droughts are more typical in the western and southern regions, while mild drought dominates in the south-east of Hungary. Figure 10 shows the vulnerability of Hungary to different classes of drought. As illustrated, the common groups in Hungary are no drought (SPI/SPEI > 0), mild drought, and moderate drought (SPI/SPEI between -1 and -1.49), while the other groups are less probable. However, a quick glance at both classes of indices revealed that the SPEI was better able to give a general perception of drought evolution over Hungary. Following this, the SPI-12 indicates that no significant change can be detected from 1961 to 2010 over Hungary. However, the 'no drought' group was subjected to an increase of +0.9% per decade (P > 0.05), while the 'severe drought', 'extreme drought' and 'very extreme drought' groups decreased by -0.6% per decade (P > 0.05), -0.64% per decade (P > 0.05) and -0.22% per decade (P > 0.05), respectively.
Regardless of the significant trends of both indices, we studied the correlation between each group for each indicator (i.e. SPI-spatial extant (%) and SPEI-spatial extent (%), as can be seen in Fig. 11). The highest correlation was recorded in the ''no drought'' group (r = 0.93), followed by the ''very extreme'' (r = 0.83), and ''extreme'' (r = 0.82) groups. Nonetheless, correlation values remain high, indicating the dynamics of two indices in defining drought over space and time. Discussion Drought is one of the normal features of climate that can have a more devastating impact on any ecosystem than any other natural hazard (Pandey et al. 2010;Abbas et al. 2014). Unfortunately, drought is classified as one of the costliest and yet one of the least understood natural disasters (Zhang et al. 2015). A basic understanding of such a phenomenon is an essential step toward building strategies for adaptation and mitigation in any country. Within this context, the main aim of this research was to track drought evolution and drought episodes over Hungary between 1961 and 2010 by using the well-known SPI and SPEI indices for 1045 gridded points obtained from the CARPATCLIM project for climatic data. The SPI has many disadvantages, such as using only rainfall data, not taking time distribution into account and not being able to predict the starting and ending of a drought cycle (Paltineanu et al. 2009;Wilhite 2000;Vicente-Serrano et al. 2010). The SPEI also has certain disadvantages, such as using the Thornthwaite equation (1948) in the calculation of PET which considers only temperature (Wang et al. 2015), and the fact that heatwaves can be mistaken for metrological drought (Spinoni et al. 2013). Nevertheless, many scholars indicate that monthly rainfall can be perfectly illustrated by using gamma distribution across Europe, and this was used in our study to calculate the SPI-12 and the SPEI-12. Interestingly, Lloyd-Hughes and Saunders (2002) indicate that the SPI-12 results are analogous to those obtained from a complicated indicator such as the Palmer drought severity index (PDSI) (Palmer 1965), and superior in terms of spatial standardization. However, the SPEI was designed to integrate the advantages of both the PDSI and the SPI, by considering the effects of precipitation and temperature to assess drought (Vicente-Serrano et al. 2010;Jin et al. 2019). Comparing the results obtained from both indicators (i.e. the SPI and the SPEI), we find that both indices indicate that the western part of Hungary was more prone to drought, where all of the gridded points showed a significant negative trend. On the contrary, we notice a clear disagreement in determining the significant trends to increased wetness over the eastern part of Hungary. The SPEI indicates that a small portion of eastern Hungary has positive significant trends (only 5 gridded points), while the SPI reveals that 359 gridded points distributed over the eastern part have a positive significant trend. These results can mainly be explained by the differences between the two indices in their drought calculating methodology, whereas using not only precipitation but also the PET significantly affected the results, and revealed more areas that may be prone to drought in Hungary. Indeed, a clear consensus exists among both indicators on two important points. On the one hand, the western part was more vulnerable to drought, where a tendency towards drought (i.e. 
decreases in SPI-12 and SPEI-12 values) was detected for the Transdanubia region (PC1) (P < 0.05). On the other hand, both indices highlighted that the eastern region is less sensitive to drought, as is indicated in PC2 for the SPI-12 and PC4 for the SPEI-12 (P < 0.05). Szabó et al. (2019) found similar results for the period 1961-2010 and reported that the western part of the country was the most sensitive to climate change. As previously established by many scholars, the evolution, intensity, duration and end of a drought all mainly depend on precipitation, where rainfall is the key factor in drought determination and evolution within any region (Vicente-Serrano et al. 2010). Therefore, we tracked the decadal precipitation changes (i.e. every 10 years) in Hungary. Following this, the spatial distribution of rainfall changes over Hungary (not shown) reveals that the south-western part of the study area receives more rainfall than the central part, while the central part (i.e. the Central Hungary region) was more susceptible to rainfall changes, which indicates that the central part of Hungary was more prone to drought. (Figure 11 caption: the correlation between the SPI spatial extent (%) and the SPEI spatial extent (%) during the studied period.) However, this result should be considered together with the results from Fig. 8, where the spatial distribution of TDDs dominated in the central part of Hungary. In fact, this main difference between the TDD method and the SPI and SPEI indices in determining drought zones over Hungary can be explained by the fact that TDDs do not consider the temporal consistency of the time series and aggregate drought events regardless of the time in which they occurred. On the other hand, trend analyses for both the SPI and the SPEI take into consideration the temporal consistency of drought changes, with a special focus on the time they occurred. However, Bede-Fazekas and Szabó (2019) indicate that central Hungary, where the average precipitation does not exceed 500 mm, is subject to drought, and that 90% of Hungary is prone to drought, with the exception of the northern area. In a similar vein, Makra et al. (2005) reported a reduction in rainfall and a tendency towards temperature increases of between 0.4 and 0.8°C in Hungary, and that drought will be more frequent in the second half of the 21st century, not only in Hungary but also in the Carpathian Basin as a whole. This supports our results presented in the preceding figures. Drought in Hungary is one of the aspects of global climate change that could mainly be affected by various atmospheric factors such as the El Niño-Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO), the Greenland-Balkan Oscillation (GBO) and the Pacific Decadal Oscillation (PDO). However, Mares et al. (2016) demonstrate that the role of the Greenland-Balkan Oscillation (GBO) in influencing the south-east European hydro-climatic regime is greater than that of the NAO and the ENSO. In particular, the GBO captures movements of air masses with both a meridional component and a zonal component, while the ENSO and the NAO were less dominant in the middle Danube basin (Mares et al. 2016). In a similar vein, Bartholy et al. (2013) emphasized that the most significant factors affecting drought trends in Hungary are atmospheric circulation, lack of precipitation, changes in soil moisture and evapotranspiration, and other elements of the hydrological cycle.
Conclusions This research tracked the trends, intensity, spatial extent and duration of droughts in Hungary between 1961 and 2010, using 1045 gridded points collected from the Climate of the Carpathian Region project (CARPATCLIM) and analyzed with the SPI and the SPEI. The results demonstrate that the eastern part of Hungary was less vulnerable to drought, while the western part was more prone to drought. Such results stress the importance of climate mitigation plans, which should be prepared on a sub-regional scale, taking into consideration the sustainability of the ecosystems in each sub-region.
2020-02-27T09:06:17.439Z
2020-02-26T00:00:00.000
{ "year": 2020, "sha1": "ab3c871edfd236cd391dc9e1eb02d25425260495", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00024-020-02449-5.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b2f2af191c3c5d93710ee595c128411b24887908", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
251229723
pes2o/s2orc
v3-fos-license
Changes and challenges in sexual life experienced by the husbands of women with breast cancer: a qualitative study Background Breast cancer (BC) in women can bring various problems to their marital and family life. Sexual life based on the experiences of the husbands of women diagnosed with BC has not been fully understood. Therefore, this research aimed to explore changes and challenges in sexual life experienced by the husbands of women diagnosed with BC. Methods A qualitative research was carried out on 18 men whose wives had been diagnosed with BC at reproductive age. They were selected using purposeful sampling and were interviewed using in-depth semi-structured interviews. Collected data were analyzed using the conventional content analysis method. Results ‘Sexual life suspension’ was the main theme of this research. Also, ‘unfulfilled sexual expectations’, ‘perceived barriers to satisfy sexual expectations’, and ‘efforts to adapt to sexual problems’ were subthemes. Conclusions The husbands of women with BC need support to improve their sexual and marital relationships. Education and counseling about sexual life during the treatment of BC should be incorporated into the healthcare program. Supplementary Information The online version contains supplementary material available at 10.1186/s12905-022-01906-8. Background Breast cancer (BC) accounts for 30% of new cancer diagnoses and is one of the most common types of cancers [1,2]. According to the World Health Organization (WHO), BC is the second leading cause of death from cancer in women [1]. The prevalence of BC has been estimated to increase from two million patients in 2018 to more than three million patients in 2046, representing 46% increase [1]. BC also accounts for 76% of cancer cases among Iranian women and the total number of BC diagnosis is 42 000. Annually, more than 7 000 new cases of BC are diagnosed [3]. It has been estimated that the incidence of BC in Iranian women will be tripled annually until 2030 [4]. More than 40% of Iranian women with BC diagnosis are in the age range of 40-50 years indicating a lower age at BC diagnosis compared to women in other countries [5]. The relative 5-year survival rate of these women has improved over the past 3 decades due to advances in early detection through increased awareness and the widespread use of mammography [6]. According to global statistics, the relative 5-years survival rate of BC has reached 90% [1]. Studies show that the five-year survival rate in Iranian women with BC varies from 51% to 76.5% [7][8][9][10]. The recent high survival rate of BC has attracted the attention of health researchers to the quality of life (QoL) and sexual life of these women and their families [11,12]. Sexual problems can be the result of any type of cancer, but it is more common in women with BC, because the breast is the symbol of femininity and plays an important role in sexual pleasure and arousal [13]. BC treatment has a wide range of physical and psychosocial consequences and leads to dysfunction and unpleasant changes in women's sexual function [14,15]. For example, mastectomy can change the individual's perceptions of body image and reduce sexual attractiveness and femininity [16]. Chemotherapy causes hair and weight loss and induces premature menopause. Moreover, radiotherapy causes pain and dermatitis, which decrease women's libido [17]. 
It is believed that deficiencies in sex hormones and changes in the body image alter the sexual function and cause arousal disorders, painful intercourse, and sexual dissatisfaction [18]. BC is a complex health problem and is difficult to cope with [19]. BC diagnosis in women can cause different health consequences for the whole family especially for their spouses [20]. BC can directly influence the quality of marital relationships and sexual function [18,21]. Therefore, BC is considered a 'disease of couples' or 'relational cancer' . It causes challenges including anxiety, depression, and sexual dysfunction in the marital relationship that are experienced not only by women but by also by their husbands [22,23]. Sexual relationship is an important part of marital relationships [24] and reduces emotional stress during BC treatment. It can improve psychosocial reactions to BC cancer diagnosis [25]. It is believed that couples experience more issues in their sexual relationships after BC diagnosis compared to before it [21]. A study on 1011 patients showed that 70% of women with BC suffered from sexual dysfunction during the treatment process [26]. BC survivors often avoid having sexual relationships with their partners, but some others prefer having a close and intimate sexual relationship without vaginal intercourse [19]. Moreover, BC-related changes can increase the emotional distance between couples. Related psychological stress reduces marital satisfaction [27]. The husbands of women with BC also experience many problems in their sexual life [20] that can have negative consequences for their emotional, psychological, and physical wellbeing [28]. They have to deal with and adapt to life changes and provide support to their wives and children [20]. Issues in their sexual intimacy and inclination to talk about their feelings and concerns can ignite frustration, anxiety, and communication problems [20] [29]. Therefore, their marital relationship should be strengthened to prevent more damages to their sexual relationships [29]. Background in Iran Cultural and religious factors can influence the psychosexuality of women and their husbands [17]. In the Iranian culture, the desire for having sexual relationships by women through requesting or showing interest is considered inappropriate. Also, the husband's preferences and satisfaction with sexual relationships are considered more important than the wife's satisfaction [21]. Since couples usually do not talk to each other about sexual issues, they do not reach an agreement on how sexual issues should be resolved [30]. Couples also are ashamed of talking about sexual issues with their healthcare providers. It causes that sexual problems to remain unrecognized and unresolved [31]. Therefore, couples should be assessed by healthcare professionals with regard to their sexual problems and receive recommendations to meet their needs [32]. Nevertheless, the Iranian healthcare system has not reached the optimal performance to proactively assess sexual problems among BC patients and their husbands [33]. The role of the husbands of women with BC to cope with BC has been emphasized [34], given that they are in the best position to identify challenges in their sexual life. Accordingly, healthcare providers can devise strategies to improve the couples' adaptation to sexual problems during the treatment process [31]. Therefore, this study aimed to explore changes and challenges in sexual life experienced by the husbands of women diagnosed with BC. 
Design and participants A qualitative research using conventional content analysis was used. Qualitative content analysis helps describe and interpret textual data based on the systematic process of data coding. In-depth descriptive and well-organized summary of research findings requires conducting qualitative research instead of quantitative research [35]. The article was reported using the standards for reporting qualitative research (SRQR) guideline [36]. The participants were selected using purposive sampling from June 2019 to February 2020 in an urban area of Iran based on the following eligibility criteria: men living with their wives diagnosed with BC, being at stages 1-3 of BC, and undergoing BC treatment for the past 1-5 years. The presence of mental and other chronic diseases in men and their wives led to their exclusion. Ethics considerations The ethical approval was obtained from the Ethics Committee affiliated with Shahroud University of Medical Sciences (decree number: IR.SHMU.REC.1398.012). Also, authorities granted permissions to enter the hospital before getting access to the patients' medical files. Sufficient explanations were given to the participants about the research purpose, voluntary nature of participation in this study, anonymity and right to withdraw from the study at any time, and confidentiality of collected data. Written informed consent and permission to audiorecord the interviews were obtained from the participants before data collection. Data collection Two researchers (MM, AM) decently reviewed all medical files of the women diagnosed with BC who completed their treatments in the past five years in a referral hospital. Among 156 reviewed medical files, 52 of them had initial eligibility criteria. Next, the husbands of these women were contacted via phone call to review their additional eligible criteria. Those participants who met the full inclusion criteria were invited to be interviewed. The recruitment process was continued until data saturation was reached when data analysis did not lead to the exploration of new findings, which happened after 18 interview sessions. In-depth semi-structured interviews using open-ended questions were carried out by the male researcher (AM) in Farsi. He scheduled appropriate time and place convenient to the participants to ensure of not interfering in their daily life routines. The interview guide (Additional file 1) consisted of questions that helped with collecting in-depth data about the research phenomenon. The main focus of the questions was changes happened in the life and sexual relationships after BC diagnosis. These questions were compiled by the research team after conducting a literature review on sexual life and its related issues among BC survivors. The depth of interviews was improving through asking probing questions to follow up the participants' perspectives. The interviews lasted 45-75 min and 14 participants were interviewed once. Four other participants were interviewed twice to remove ambiguities in the data collection process. Therefore, 22 interviews were performed with 18 participants. All interviews were recorded using a digital audio recorder. Questions about the participants' sociodemographic characteristics including the participant's age, education level, duration of the marriage, number of children, place of residence, employment status, economic status, the participants' wife's age, time passed from BC diagnosis, and cancer treatment modalities were asked before the interviews. 
The researchers were faculty members of nursing and midwifery schools at the time of the study. They were experts in qualitative research and had previous experiences with conducting qualitative research in cancer care. Two researchers (MM, AM) have clinical work experiences with cancer patients, but they had no relationship with the study participants before the research. They wrote reflective notes to bracket their own assumptions regarding the study phenomenon. Data analysis Verbatim transcribing of the interviews was performed immediately by the responsible researcher (AM) and simultaneously was entered into data analysis by the research team (MM, AM, MG, MV) applying conventional qualitative content analysis [37,38]. To immerse in the data and gain an in-depth understanding of the interviews' content, the transcripts were read line-byline several times. The meaning units were derived from the transcriptions and were labeled through open coding. Codes were assigned into categories based on their similarities. The codes and categories were compared together and concerning the whole data using the constant comparison method to develop the main theme and related subthemes [35,38]. Trustworthiness of data The four components of trustworthiness for qualitative research suggested by Lincoln and Guba including credibility, transferability, dependability, and confirmability were applied. Strategies used to strengthen the credibility of this study were prolonged engagement with the research settings and participants, reflexivity, peer debriefing, and member checking. Also, the interviewer wrote reflective notes to bracket his own assumptions about the study phenomenon and ensure that the research findings reflected the participants' perspectives. For peer debriefing, a third researcher reviewed and assessed the data analysis process. Also, a summary report of research results was provided to two participants who confirmed that our findings demonstrated their perspectives. Transferability was ensured through the provision of a thick description of the research findings. For dependability and confirmability, audit trial was used. An impartial person who was expert in qualitative research was asked to review and assess the transcripts, data analyses, and findings [39]. Results The participants were married and had the age range of 42-57 years (50.33 ± 4.15 y). The mean duration of their marriage was 21.16 years (SD = 5.44 y). Most of them had one child or 2 children (66.6%), resided in the city (72.2%), and had an under-diploma education degree (50%). Their wives had an age range of 33-50 years (44.38 ± 4.32 y). BC diagnosis happened in 56.6% and 44.4% of the participants' wives more than 3 years ago and in the last 1-3 years, respectively. The majority of the participants' wives (88.9%) had undergone mastectomy ( Table 1). The participants experienced substantial changes in their sexual life due to the diagnosis and treatment of BC in their wives. Our research findings consisted of the main theme of 'sexual life suspension' and three subthemes of 'unfulfilled sexual expectations' , 'perceived barriers to satisfy sexual expectations' , and 'efforts to adapt to sexual problems' (Fig. 1, Table 2). Sexual life suspension BC severely affected the sexual life of the men following the occurrence of changes in the well-being and sexual health of their wives. After the diagnosis of BC, major changes occurred in the couples' marital life leading to various challenges. 
There were barriers to meet their sexual needs, but each person responded differently to them. Therefore, the sexual life of the men was suspended. Unfulfilled sexual expectations The participants mostly reported a normal sexual life before the onset of BC. Some other men complained about sexual coldness in their wives even before BC. "Before this disease, my wife and I had sexual relationships almost every 10 or 15 days and we experienced no problem in our married life. " (Participant (P) 6, 48 years old) "We did not have an understanding about sexual issues at the beginning of BC. My wife was sexually very cold, and she did not like to have any sexual relationship. We had a sexual relationship once or twice a month, it reduced greatly after BC diagnosis. " (P 8, 49 years old) Negative changes in their sexual relationships following BC diagnosis were characterized as a reduction in sexual desire, frequency of sexual relationships, and sexual dissatisfaction. A sharp reduction in the frequency of sexual relationships was reported indicating having no sexual relationships during a year. It was attributed to the consequences of BC treatment including decreased physical charm and libido, vaginal dryness, and painful intercourse. "Before my wife's disease, we had sex every 7-10 days, but my wife felt often bored, was not interested in having sex, was harassed during intercourse; then the frequency of our sexual relationship decreased a lot ... and this is now once or twice a year. " (P 10, 45 years old) "The frequency and duration of our sexual inter- The occurrence of BC was a great shock to the participants' life and affected all aspects of their life so that their sexual desire toward their wives decreased. Therefore, their wives more often asked for having sexual relationships. Given the reduction in the participants' sexual desire for their wives and the frequency of sexual relationships, their sexual needs were not met leading to sexual dissatisfaction. Perceived barriers to satisfy sexual expectations Barriers to meet the participants' sexual needs were the sense of human aestheticism, culture, and insufficient sexual health support and education by healthcare providers. Changes in their wives' appearance were the basis of a series of new changes in their sexual life. Their wives had no longer those previous beauties and charm. On the other hand, the participants' innate sense of aestheticism, like any human being who was attracted to beautiful phenomena, inevitably reduced the sexual attraction of men to their wives. The presence of this innate sense was an important obstacle to have sexual relationships. The participants needed to receive support, training, and information on sexual issues from healthcare providers. They acknowledged that they had not received any support or training about it. Some participants turned to unofficial information sources after that they did not find answers to their inquiries about sexual problems. Efforts to adapt to sexual problems The participants experienced sexual crisis and used a variety of adaptation measures to overcome it. For instance, they imagined their wives' condition and made empathy. To deal with the sexual crisis, the participants reduced their sexual expectations from their wives. They dreamed a day when everything would return to normal. 
Although the participants were committed to have sexual relationships only with their wives, some of them had sexual relationships with another sexual partner in the face of sexual crisis and to meet their sexual needs. "During my wife's treatment, when we could not have sex for a long time, I went looking for another partner. This was only to meet my own sexual needs. " (P 7, 42 years old) One of the most difficult adapting behaviors was the suppression of sexual desire. The participants experienced sexual helplessness as they had no choice, but to suppress their sexual desire. Discussion A few qualitative studies so far have addressed sexuality and sexual health among men after the diagnosis of BC in their wives. This qualitative research explored changes and challenges in sexual life experienced by this vulnerable group. We found that men experienced unfulfilled sexual expectations given the occurrence of significant changes in their sexual relationships. Barriers to meet their sexual needs and efforts made by them to adapt to sexual changes were explored. Unfulfilled sexual expectations consisted of a reduction in the frequency of sexual relationships, diminished sexual desire, and sexual dissatisfaction. Similarly, a quantitative descriptive study on sexual adjustment among Israeli men after BC diagnosis showed that over 70% of them had difficulties in their sexual activities [40]. A qualitative study in the USA reported that men's sexual desire for their wives diminished after BC diagnosis [41]. Furthermore, in another qualitative study in the Iranian context, men's sexual desire decreased mostly due to mastectomy and treatment complications such as alopecia [31]. A cohort study revealed that the partners of young BC survivors had more sexual difficulties and less sexual enjoyment compared to the partners of healthy controls [42]. Undesirable sexual functions have been reported in studies on the sexual issues of women with BC. The majority of Chinese women with BC suffered from a significant reduction of sexual desire and frequency of sexual activities [43]. Changes in sexual life including reduced sexual relationships and sexual desire in BC survivors have been reported by many studies [44][45][46][47]. Barriers to meet sexual needs were described by our research participants. The participants' innate sense of aestheticism reduced their sexual attraction to their wives given the occurrence of changes in the women's physical attractiveness. According to Nasiri et al. 's study, unpleasant senses experienced by men because of physical changes in their wives led to avoid close contact with their wives and having sexual relationships [31]. Men often get separated from their wives, because of the disease's impact on their sexual relationships [14,20]. Indian women with BC undergoing mastectomy also stated that their husbands had arousal difficulties [48]. BC treatments usually cause most women to feel sexually inadequate and incomplete, which is likely associated with marital separation and breakdown of sexual relationships [49]. Physical and appearance problems in women following the treatment of BC are the common sources of sexual problems in men [18,19,47]. Sexual attractiveness and body beauty for women are emphasized in some cultures. Brazilian men consider the body beauty of women as an ideal not only for marriage but also for sexual relationships [50]. Therefore, breast-conserving surgery has become a routine approach to BC treatment in recent years. 
However, some patients still require mastectomy to decrease the risk of BC relapse [51,52]. A prospective controlled study suggested that those women who underwent a mastectomy were probably more at the risk of post-operative sexual dysfunctions [53]. Zehra et al. in a systematic review and meta-analysis reported that patients with breast-conserving surgery exhibited a better body image and physical and sexual health than those with mastectomy [54]. Therefore, it is important to pay attention to the quality of the surgery type because poor surgery-related cosmetic outcomes can impair sexual health [55]. In some cultures such as Brazil, having extra-marital sexual relationships in situations that wives are unable to meet their sexual needs has been suggested [44,50]. However, having an extramarital sexual relationship is unacceptable in Islam and the Iranian culture [19]. The tradition of temporary marriage in Islam supports men's decision to marry more than one woman. On the other hand, remarriage for men when their wives become ill is unacceptable in the Iranian culture and is dependent on fulfilling legal requirements [56]. Iranian women also forcefully disagree with their husband's remarriage. Lack of support, education, and training about sexual issues by healthcare providers was another perceived barrier in our study. Provision of adequate support and information concerning intimacy and sexuality can decrease distress in women with BC and their husbands [57]. This is a common finding in other studies that healthcare providers typically do not provide support and education about sexuality to patients with BC and their partners [49,[58][59][60]. There is a need to create an open, truthful, accepting communication environment with BC women and their husbands within the healthcare system and help them meet their sexual health needs [61,62]. Education programs for healthcare providers can improve their knowledge of sexual issues related to BC and how to communicate them to patients and their husbands [61]. When sexual issues are discussed with healthcare providers, husbands may not always be present to hear. Written information regarding sexual issues can improve their knowledge of how to resolve sexual issues during BC treatment [59]. The participants experienced sexual crisis after the diagnosis of BC in their wives. Some of them used the strategies of empathy, loyalty, patience, and hope to normal conditions in the future. Some others went looking for sexual relationships with another sexual partner or suppressed their sexual desire in dealing with this crisis. The adopted strategies can be various based on religious beliefs and relationship contexts [47]. Muslims believe that illness is a kind of test of God and God wants to see if they can endure difficulties or deviate. A qualitative study on Iranian men after BC diagnosis in their wives reported that the suppression of sexual desire, toleration of sexual frustration, and loyalty to their wives helped overcome unmet sexual needs [31]. However, in the Taiwan context, women with BC reported that their husbands had illegal sexual affairs with another partner [45]. In addition, Malay women with BC proposed their husbands to marry another woman in order to meet their sexual needs [47]. Jones et al. 's study on Canadian women with BC showed the necessity of having appropriate empathy, and a greater understanding and awareness of their husbands throughout cancer trajectory [63]. 
Limitations The present study is the pioneer of exploring men's sexual changes and challenges after BC diagnosis in their wives. Some limitations might have affected our data collection and analysis. Our participants' experiences might not be the representative of men with wives diagnosed with BC that would be outside of the reproductive age. The researchers bracketed their presumptions and ideas using reflexivity, but the researchers' subjectivity inevitably might have affected the interview process. This condition is especially relevant in the current study since the interviewer has been a qualified nurse and had a specific interest in this topic. The participants were recruited from a hospital in an urban area of Iran, which could impact on the transferability of our findings to other contexts. Taboo attached to sexual and marital issues in the Iranian culture might have caused the concern of disconnection of the alliance between the researcher and the participants and could hinder collecting in-depth data about the research phenomenon. Conclusion This study improves our knowledge of sexual changes and challenges experienced by the husbands of women diagnosed with BC. Following the diagnosis of BC, major changes and challenges occur in the marital life of women with BC and their husbands, which suspend men's sexual life. Our findings inform healthcare providers about the significance of paying attention to sexual health problems experienced by men during BC treatment. They should provide opportunities for the husbands of women with BC to express their concerns about their sexual health issues. This will promote help-seeking behaviors in this vulnerable group. It is also recommended that topics concerning the sexual life of women with BC and their husbands are incorporated into healthcare programs aiming at the improvement of QoL in couples. Education and counseling about sexual relationships during the treatment of BC should be one part of the holistic program aiming at the improvement of couple's sexual life and should be easily accessed by them in community settings. Supportive interventions by healthcare professionals for the husbands of woman with BC hinder further damages to marital relationships between couples. Future research is needed to design strategies for the provision of appropriate support to women with BC and their husbands, and examine their effects on couples' sexual life. In addition, future research should be conducted in other contexts. They can consider the development of appropriate practical instruments for the investigation of sexual life among the husbands of women with BC.
2022-08-02T13:51:30.575Z
2022-08-02T00:00:00.000
{ "year": 2022, "sha1": "bc412bbee3c16a69ba1cd1173854abc6d5749f94", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "bc412bbee3c16a69ba1cd1173854abc6d5749f94", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
52974310
pes2o/s2orc
v3-fos-license
Micro Fabry-Pérot Interferometer at Rayleigh Range The Fabry-Pérot interferometer is used in a variety of high-precision optical interferometry applications, such as gravitational wave detection. It is also used in various types of laser resonators to act as a narrow band filter. In addition, ultra-compact Fabry-Pérot interferometers are used in the optical resonators of semiconductor lasers and fiber-optic systems. In this work, we developed a micro-scale Fabry-Pérot interferometer that was constructed within the Rayleigh range of the optical focusing system. The high precision that is conventionally required for the optical parallelism and the surface accuracy of the mirrors was not so critical for this type of Fabry-Pérot interferometer. The interferometer was constructed using a gold-coated silicon microcantilever with reflectivity of 92% and a dielectric multilayer flat mirror with reflectivity of 85%. The focal spot size of the laser beam is 20 μm and the cavity length is approximately 20 μm. The finesse was measured to be approximately 25. The interferometric characteristics of the device were consistent with the theoretically calculated performance. The developed micro Fabry-Pérot interferometer has the potential to make a marked contribution to advances in optical measurements in various micro sensing system. In this work, we have developed a micro Fabry-Pérot (FP) interferometer with high sensitivity to realize high-performance feedback damping of the thermal vibration of a silicon microcantilever that is intended for use in an atomic force microscope (AFM). FP interferometers have been used in various high-precision optical interferometry applications, such as gravitational wave detection 1 . To date, there have been many studies of normal-sized FP interferometers 2,3 , but only a few studies have addressed smaller types of FP interferometers 4,5 . There have been several studies of feedback cooling of the thermal vibration of micro cantilevers [6][7][8][9][10][11][12][13] . Recent studies found that the measured signal-to-noise ratio determines the limits of the feedback cooling performance 6,7,12 . We used an FP interferometer rather than a Michelson interferometer to improve the measurement sensitivity and thus increase the signal-to-noise ratio. In conventional FP interferometers, the polished end faces of optical fibers 10,11 and micro mirrors from the surface of a multilayer dielectric mirror 12,13 formed by focused ion beam microfabrication have been used as cavity mirrors. However, use of these methods for the mirror has led to issues such as low finesse due to optical diffraction from the fiber output aperture and problems with the parallelism of the optical alignment and the interferometer, along with difficulties in the microfabrication process. In addition, these methods do not use the merits of the Rayleigh range. In this work, we have developed a micro FP interferometer that uses the optical merits of the Rayleigh range of the focal system to simplify the optical system and improve the interferometric performance. The interferometric characteristics of this FP interferometer show good agreement with the theoretically calculated performance. Figure 1 shows the experimental system that was used to measure the interferometric characteristics of the micro FP interferometer. A He-Ne laser (wavelength of 632.8 nm; laser power of approximately 1 mW) was used as the light source for the interferometer. 
The micro FP interferometer is constructed using the gold-coated surface of a microcantilever and a dielectric multilayer flat mirror. We measured the vibration of a commercially available silicon microcantilever (OMCL-AC240TN, Olympus Corporation) that is intended for use in AFMs. Figure 2 shows a scanning electron microscope image of this microcantilever. The microcantilever's length, width, and thickness are 240 μm, 40 μm, and approximately 2.3 μm, respectively, and it is composed of single-crystal silicon. The natural oscillation frequency of the microcantilever is 77.6 kHz, and the catalog value of its spring constant is about 2 N/m. One side of the microcantilever was coated with gold to increase the laser reflectance, using an ion-beam sputtering device that is commonly used for preprocessing before scanning electron microscope observation. The coating thickness was chosen to be as thin as possible while still providing sufficient reflectivity (92%), because reductions in both the natural oscillation frequency and the Q factor of the microcantilever were observed when a thick gold coating was used. The coating thickness was estimated to be approximately 25 nm based on the coating characteristic curve of the ion sputtering device. Ideally, the coating should be applied only to the area irradiated by the laser beam; however, the whole front surface of the cantilever was coated because this was easier than partial coating. We also found that applying a dielectric multilayer coating to the silicon microcantilevers was difficult: we tried several times, but the microcantilevers broke in every case, probably because of the surface stress induced by the coating. This was one of the reasons why we chose the gold coating. Methods The other side of the FP interferometer is formed by the dielectric multilayer flat mirror. The optical flatness and reflectance of this mirror were λ/10 and 85%, respectively. The diameter and thickness of the mirror were 30 mm and 1 mm, respectively. A laser beam with a diameter of 4 mm was focused using a spherical lens with a focal length of 80 mm and an F number of 20. The focal spot size was estimated to be approximately 16 μm under the assumption of the diffraction limit. The Rayleigh range was estimated to be approximately 250 μm, and the cavity length was approximately 20 μm. The optical system was set in a vacuum chamber at a pressure of approximately 4 × 10⁻³ Pa. The interferometric signal was separated using a beam splitter and measured using an avalanche photodiode. The microcantilever was driven using a lead zirconate titanate (PZT) piezoelectric actuator. The signal was measured using an oscilloscope and a fast Fourier transform (FFT) analyzer. The vacuum environment was not essential for this experiment; it was used only to obtain a clear thermal vibration signal of the microcantilever, and the same optical characteristics were also obtained at atmospheric pressure. Results The reflectance values of the microcantilever and the dielectric multilayer mirror were 92% and 85%, respectively. Because the reflectance of the microcantilever differs from that of the dielectric mirror, the theoretical interferometric reflectance R of an FP interferometer constructed from a pair of mirrors with different reflectances was calculated using eq. (1), where R 1 and R 2 are the reflectances of the multilayer mirror and of the microcantilever, respectively [14].
δ is the phase shift of each transmitted light wave due to the change in the cavity length L C and is given by δ = 4πL C /λ. Figure 3 shows the interferometric reflectance R as a function of the cavity length, as calculated using eq. (1) for various values of R 2 . We can see that the minimum interferometric reflectance cannot reach 0% when R 1 and R 2 differ from each other. In the case where R 1 = R 2 , the minimum reflectance is 0%. In the case where R 1 > R 2 , the minimum reflectance increases as R 1 decreases. Figure 4 presents this behavior for R 1 located between R 2 and 1. The open circle in Fig. 4 is related to the experimental conditions (where R 2 = 0.85). Figure 5 shows the reflectance of the micro FP interferometer as a function of cavity length. R 1 and R 2 were 0.85 and 0.92, respectively. The FP interferometric characteristics were measured by varying the cavity length using the PZT actuator. The gray solid line indicates the theoretical calculation results obtained using eq. (1). The blue solid circles are the experimental results, which showed good agreement with the theoretically calculated curve. Scale fitting was performed only for the horizontal scale. The finesse of the interferometer was measured to be 25. Figure 6 shows the FFT signal of the thermal vibration of the microcantilever, which is used as one of the mirrors of the micro FP interferometer, at maximum sensitivity. The frequency resolution of the FFT analyzer is 0.5 Hz. The data are averaged over 1000 measurements. The gray solid line is a Lorentzian curve fitted to the experimental results. The quality factor Q was measured to be approximately 2000. The thermal vibration amplitude was approximately 5 pm. Discussions In the vicinity of the focal point of the focusing optical system, the laser beam wave fronts are sufficiently flat to allow the FP interferometer to be constructed. The Rayleigh length l L is determined by the wavelength of the laser λ, the focal length f, and the laser beam diameter D on the lens. In this experiment, it was estimated to be about 250 μm, which is much longer than the cavity length (20 μm). This is the reason why flat mirrors can be used as the reflectors of the FP interferometer placed in the focusing optical system. Figure 7 shows a comparison of the retroreflectivity properties of the two types of optical reflecting systems when the mirrors of the FP interferometer are not parallel to each other. In case (b), the optical axis of the reflected beam is oriented parallel to the optical axis of the incident beam by the retroreflective effect, which makes it possible for the two beams to interfere. Consequently, in the micro FP interferometer, the requirement for parallel orientation of the pair of mirrors is greatly relaxed when compared with the normal-type FP interferometer (case (a)). We could observe the interference fringes even when the reflected laser beam pattern from the interferometer did not completely overlap with that of the incident laser beam. The demand for optical flatness in the micro FP interferometer is also much weaker than that for the normal-type FP interferometer because of the reduced cross-sectional area of the laser beam. The optical flatness of the reflection mirror was only λ/10, with which a finesse of 25 could not have been obtained in a normal-type FP interferometer.
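Because eq. (1) itself is not reproduced in the text above, the short sketch below uses the standard Airy reflectance formula for a lossless two-mirror cavity with unequal reflectances, which is consistent with the behavior described (a non-zero minimum when R 1 ≠ R 2, fringes spaced by half a wavelength in cavity length, and a finesse close to 25 for R 1 = 0.85 and R 2 = 0.92). It should be read as an assumed, illustrative stand-in for the authors' eq. (1), not their exact expression, and the Gaussian-beam estimate at the end is likewise a rough check of the quoted spot size and Rayleigh range rather than the calculation used in the paper.

```python
import numpy as np

WAVELENGTH = 632.8e-9   # He-Ne wavelength (m)
R1, R2 = 0.85, 0.92     # multilayer mirror / gold-coated cantilever reflectances

def fp_reflectance(cavity_length, refl1=R1, refl2=R2, wavelength=WAVELENGTH):
    """Airy reflectance of a lossless two-mirror cavity (assumed form of eq. (1))."""
    r1, r2 = np.sqrt(refl1), np.sqrt(refl2)            # amplitude reflection coefficients
    delta = 4.0 * np.pi * cavity_length / wavelength   # round-trip phase shift
    num = r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(delta)
    den = 1.0 + (r1 * r2) ** 2 - 2.0 * r1 * r2 * np.cos(delta)
    return num / den

# Sweep one fringe centred on the resonance nearest the ~20 um cavity length.
half_fringe = WAVELENGTH / 2.0                          # fringe spacing in cavity length
L0 = round(20e-6 / half_fringe) * half_fringe
L = L0 + np.linspace(-half_fringe / 2.0, half_fringe / 2.0, 4001)
R = fp_reflectance(L)

# Finesse estimated as the fringe spacing divided by the dip width at half depth.
half_depth = 0.5 * (R.max() + R.min())
dip = L[R < half_depth]
finesse = half_fringe / (dip.max() - dip.min())
print(f"minimum reflectance ~ {R.min():.3f}, finesse ~ {finesse:.1f}")  # close to the measured 25

# Rough Gaussian-beam focus check for f = 80 mm, beam diameter D = 4 mm.
f, D = 0.08, 0.004
w0 = 2.0 * WAVELENGTH * f / (np.pi * D)                 # waist radius
print(f"spot diameter ~ {2e6 * w0:.0f} um, "
      f"Rayleigh range ~ {1e6 * np.pi * w0**2 / WAVELENGTH:.0f} um")
# The Rayleigh estimate comes out at a few hundred micrometres, the same
# order as the ~250 um quoted above and far longer than the 20 um cavity.
```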
Another characteristic of the micro FP interferometer is that it has a large free spectral range because of its short cavity length. For the practical fabrication of this type of FP interferometer, one possible method is to attach a small dielectric multilayer mirror to the base of the microcantilever with a thin spacer, using optical contact bonding. Conclusions We have developed a micro Fabry-Pérot interferometer that is constructed within the Rayleigh range of the optical focusing system and demonstrated that its interferometric characteristics were consistent with the theoretically calculated characteristics. The high precision conventionally required for the optical parallelism and the surface flatness of the mirrors was not so essential for the micro FP interferometer. We believe that the proposed micro FP interferometer has the potential to make a marked contribution to advances in optical measurements in various micro-sensing systems.
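The free-spectral-range remark above can be made concrete with a short estimate; assuming the nominal 20 μm cavity length and the 632.8 nm He-Ne wavelength stated earlier (the numbers below are our illustration, not values given by the authors):

$$
\Delta\nu_{\mathrm{FSR}} = \frac{c}{2 L_C} = \frac{3\times 10^{8}\ \mathrm{m\,s^{-1}}}{2 \times 20\ \mu\mathrm{m}} \approx 7.5\ \mathrm{THz},
\qquad
\Delta\lambda_{\mathrm{FSR}} = \frac{\lambda^{2}}{2 L_C} = \frac{(632.8\ \mathrm{nm})^{2}}{2 \times 20\ \mu\mathrm{m}} \approx 10\ \mathrm{nm}.
$$

Since the free spectral range scales as 1/L_C, this is roughly three orders of magnitude larger than that of a centimetre-scale cavity.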
2018-10-25T14:46:26.176Z
2018-10-12T00:00:00.000
{ "year": 2018, "sha1": "b58769ee30a5f1f8256ac9b00698c8f8c49a0b13", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-018-33665-8.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b58769ee30a5f1f8256ac9b00698c8f8c49a0b13", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
248431249
pes2o/s2orc
v3-fos-license
Death From COVID-19, Muslim Death Rituals and Disenfranchised Grief – A Patient-Centered Care Perspective In Islam, religious directives regarding death are derived from the Quran and Islamic tradition, but there is a variety of death rituals and practices, lived by Muslims across contexts and geographies. This narrative study explored the dynamics of death and bereavement resulting from COVID-19 death among religious Muslims in Israel. Narrative interviews were conducted with 32 religious Muslims ages 73–85. Findings suggest several absent death rituals in COVID-19 deaths (i.e., the physical and spiritual purification of the body, the shrouding of the body, the funeral, and the will). Theoretically, this study linked death from COVID-19 with patient-centered care, highlighting disenfranchised grief due to the clash of health authority guidelines with religious death practices. Methodologically, this narrative study voices the perspectives of elder religious Muslims in Israel. Practically, this study suggests ways to implement the cultural perspective in COVID-19 deaths and enable a healthy bereavement process. Introduction COVID-19 presented unprecedented challenges of uncertainties, isolation, patients dying alone in hospitals, hospices, or other care facilities without saying goodbye or having their loved ones paying last respects, thereby affecting experiences of death and bereavement (Fang & Comery, 2020;Ramsay, 2020). Furthermore, lockdown and social distancing have been restricting bereavement processes of emotional closeness and social connectedness, triggering overpowering sorrow and regret, resulting in disenfranchised grief (Doka, 1989;Holst-Warhaft, 2000;Valentine, 2009). Disenfranchised grief is a process in which loss is felt as not being "openly acknowledged, socially validated, or publicly mourned" (Doka, 1989). This experience of grief might pose difficulties in emotional processing, in distress expression, in social support, in obtaining compassion, all challenging one's coping with loss in one's ongoing life (Doka, 1993). When COVID-19 guidelines clash with belief systems and death rituals of religious people, they may refuse to comply with guidelines of hospitalization in severe COVID-19 (Boguszewski et al., 2020;Gabay & Tarabeih, 2021). Although religious beliefs and rituals are personal, they are also a social institution (Tobroni et al., 2020). The socio-cultural environment may impact the responses of individuals and communities to COVID-19 deaths and losses, as they process the loss and make meaning of it. Responses to loss and death in the context of COVID-19 are both individual and societal, particularly for religious minorities and their members confronting restrictions to their traditional practices (Bear et al., 2020). Given the rich global landscape of religious cultures, previous studies called to explore the impact of death from COVID-19 on grief processes of members of religious cultures (Arslan & Buldukoglu, 2021;Bear et al., 2020;Firouzkouhi et al., 2021;Stroebe & Schut, 2021;Walter, 2020). This qualitative study responds to these previous calls, exploring the impact of COVID-19 on Muslim individual and community grief in COVID-19 deaths. Muslim Religious Beliefs and Practices In Islam, all conduct is governed by the precepts of the Qur'an and the Sunnah law, directing Muslims in all aspects of human life, decisions, and commitments, including recommendations for responses to COVID-19 (Al Khayat, 1995). 
Islamic laws have five primary objectives: to protect life, to safeguard freedom of thought, to preserve the intellect, to preserve human honor and integrity, and to protect property (Al-Hayani, 2007;Ellison & Levin, 1998). The Qur'an accomplishes correctives and promotes wellbeing by laws forbidding a detrimental way of life and conduct and focuses on behaviors that promote wellbeing (moderate eating; abstention from liquor, tobacco, and other psychoactive substances; daily workout; praying; fasting; bathing and washing; breastfeeding; and more) (Atiyeh et al., 2008;Rispler, 1989). Islam teaches that everything that occurs is from God, whether dutifulness or noncompliance, trust or disloyalty, ailment or wellbeing, riches or poverty (Al-Gorany, 2021). Significant illness, miscarriage, and death are viewed as God's will, reflecting one's devotion and trust in God. Thus, death is regarded as a cleansing encounter, not a statement of God's anger (Atiyeh et al., 2008). Muslims consider adherence to Islamic teachings and to the Prophet Mohammed's recommendations as the only way to survive the pandemic. In Islam, there is a set of health directives to prevent the spread of disease, including: (1). Maintaining belief in God and the practice of prayer; (2) Commitment to quarantine and avoiding crowded places; (3) Attention to hygiene by washing hands with water and soap, maintaining hygiene of body, clothes, and environment; (4) Avoiding contact with people who are ill or suspected of being ill (Islamweb.net. 2020). The most profound directed life cycle rituals are those that mark the end of a life (Imber-Black, 2020). While Islamic principles, however, accept measures that reduce risk and offer a solution to pandemic health guidelines regarding prayers (i.e., individual prayers rather than in mosques), there are no such measures regarding religious death rituals Kamarulzaman and Saifuddeen (2010). Next, the demographics of Israeli Muslims. Muslims in Israel At the end of 2019, the Muslim population in Israel was estimated at 1.669 million, which is 18% of the population (Central Bureau of Statistics, 2021). Fifty-nine percent of the Muslim population live in cities, 33% of this population is in the age group of 0-14 and 4.5% is in the age group of 65 and over. The rate of labor force participation in 2020 was 52.4% among men and 25.3% among women. Fifty percent of households have a computer with connection to WIFI and 10% of the Muslim population have an academic degree (Central Bureau of Statistics, 2021). Perceptions of the Pandemic Among Religious Muslims Religious Muslims believe that the pandemic is God's will, thereby, abdicating responsibility for trying to contain it (Husni et al., 2020). Others consider COVID-19 to be God's punishment for wrongdoers and evil, believing the virus will not attack believers and pious worshipers who conscientiously carry out congregational prayers (Husni et al., 2020). Some believe that COVID-19 is a substance created by God that people can avoid by living healthy lifestyles, diligently reading the Qur'an, and praying in congregation (Husni et al., 2020). Others view COVID-19 as an ordinary phenomenon that occurs naturally and believe that there is no link between the outbreak of the pandemic and religion. The latter further believe that even if they are infected with the coronavirus, they must accept their fate. If they are infected, they may visit doctors and seek treatment, but they have no assurance that medical efforts will cure them. 
Instead, they self-introspect and blame themselves for the sins they have committed (Husni et al., 2020). Next, how religious Muslims have been coping with COVID-19. Adaptation to COVID-19 Among Religious Muslims Since the outbreak of the COVID-19 pandemic, governments around the world, have been communicating guidelines to all communities aimed at preventing the spread of the virus, through coercion, information, education, and persuasion. Since the outbreak of the pandemic, attitudes towards health guidelines among members of varied religious cultures are seen to be embedded in their world view, shaping their perspectives (Husni et al., 2020). These perspectives of members of religious cultures towards COVID-19 may entail different perceptions of the virus (Boguszewski et al., 2020;McCaulley, 2020). In Israel, since the outbreak of the COVID-19 pandemic in March 2020, Muslims have been adjusting certain religious rituals that are perceived as a form of leniency in worship, as some Muslim leaders have encouraged members to replace Friday prayers at the mosque with praying at home to comply with social distancing guidelines and help each other, in solidarity, to contain the virus (Bruns et al., 2020;Kooraki et al., 2020;Muhammad et al., 2020;Nishiura et al., 2020;Syamsuddin, 2021). Death by COVID-19 deprives the living of saying goodbye or grieving in traditional ways. (Imber-Black, 2020). The Quran and the Sunnah, however, do not have explicit answers for social issues that emerged since the revelation of the Quran and the teachings of the Prophet Mohammed (Rispler, 1989). Extensive research has related to the challenge of physical distancing in religious minorities , but research is scant on one of the most highlighted issues in the Covid-19 pandemic, death rituals among Muslims (Gabay & Tarabeih, 2021;Tobroni et al., 2020). Muslim Death Rituals The Quran distinctly emphasizes that adversities confronted by individuals that are aimed at testing them as believers and making them more tolerant and patient so they can better cope with the ordeals they face (Afakseir, 2012). Islamic tradition seems to present ambiguous instructions for fulfilling death rituals. The discrepancy between the ritual directives and its actual practice is often confusing (Rappaport, 1999). Literature on dying and death rituals in Islam presents Islam as built around the central five pillars and the Islamic law regarding ritual practice (Venhorst, 2021). While Islam refers to the religious principles and regulations as derived from the Quran and Islamic tradition, there are diverse practices, lived by a variety of Muslims in a variety of contexts and geographies, causing tensions between the prescribed practice and the actual practice of death rituals (Venhorst, 2021). The beliefs, feelings and practices around death are part of religiosity (Ho & Ho, 2007). Muslim religious directives view religious belief, spiritual activities, and practices as important assets for dealing with death. Islam teaches its followers to be patient, to have trust in Allah, to offer regular prayers, and to ask Allah for help in difficulties surrounding death (Saleem & Saleem, 2020). Islam doctrines teach that the whole life of a Muslim believer is a trial where one will be tested again and again, and one's final destiny will be determined on the basis of one's performance. Thus, for a Muslim, death is the return of soul to its creator, Allah. 
The notion of the inevitability of death and life after death are never far from a Muslim's consciousness. Believing in the supremacy and kindness of Allah helps the individual accept death as a stage of life. Attitudes toward death are expressed consciously or unconsciously by personal attributes or through cultural, social and philosophical belief systems. Many religious institutions carry out death rituals which shape the culture and buffer death anxiety (Saleem & Saleem, 2020). The following death rituals are shared by all religious Muslims, but there are subtle differences among groups (Aalulbayt Information Centre, 2005). When a Muslim is near death, frequently distracted by pain and discomfort, relatives around the dying person are called upon to recite verses from the Quran, provide physical comfort, and encourage the dying person to recite words of remembrance and prayer reminding them of God's mercy and forgiveness (Cheraghi et al., 2005;Sarhill et al., 2001). Relatives of the dying person should place her or him in a comfortable position facing Mecca (Cheraghi et al., 2005;Sheikh, 1998). Since devout and pious Muslims believe that death is part of God's plan and that one's duty is to accept whatever God sends, however difficult, they discipline themselves to show no emotion at a death, because weeping openly would suggest rebellion against God's will. Islam teaches that the dead body must be treated with gentleness and respect (Sarhill et al., 2001). Upon notification of death, relatives of the deceased are encouraged to remain calm and pray for the departed as it is forbidden for those in mourning to wail excessively, scream, or thrash about. When a religious Muslim dies, the eyes and mouth should be closed, the feet tied together with a thread around the toes, the face bandaged so as to keep the mouth closed, and the limbs should be straightened (Cheraghi et al., 2005;Sheikh, 1998). The body should be covered with a clean sheet temporarily. A ritual of washing the body is performed by a same-sex Muslim as soon as possible. Nails are cleaned and shortened, and the body is shrouded in simple, unsewn pieces of white cloth (Cheraghi et al., 2005;Sheikh, 1998). Performing the rituals of bathing, shrouding the body, rubbing camphor oil on the seven parts of the body which are placed on the ground during prostration when praying (i.e., the forehead, palms, knees and toes), and helping with the burial are important religious acts. Calling a religious leader is necessary as a Muslim should be taken home or to the mosque to be washed (Sarhill et al., 2001). It is a religious requirement that the dead be buried as soon as possible, and considerable family distress can be avoided by speedy issuing of the death certificate (Sarhill et al., 2001). If the person dies earlier during the day, the body will be taken to the local mosque or to the appointed cemetery to be washed and prepared. However, if the person dies late at night, the body will be kept at home with lights on or candles burning all night, resembling the pre-Islamic traditions. It is believed that the evil spirits will attack the dead if left in darkness. The Quran will be placed close to or on the dead person to both protect and bless the deceased (Sarhill et al., 2001). A funeral prayer is held in the local mosque, and relatives and community members follow the funeral procession to the graveyard where a final prayer is said as the deceased is laid to rest. 
Events occur in rapid succession, and often the deceased will be buried within 24 hours; Muslims are always buried, as cremation is forbidden (Cheraghi et al., 2005). If the dying patient is wealthy, his/her relatives may place a semiprecious stone such as an agate, with 14 prayers carved on it by handicraft specialists, under the deceased's tongue after completion of the funeral rites and before placing the dead person in the grave, to enable the deceased to answer properly when questioned by the spirits (Sarhill et al., 2001). It is believed that life after death will continue, such that the preservation of the body is absolutely essential (Parkes et al., 1997). Authorities across countries face the need to care for people from various religions, who may comply poorly with hospitalization guidelines due to their theological beliefs. This qualitative study explores religious practices around COVID-19 deaths among Israeli Muslims unlawfully refusing to be hospitalized in a public hospital for religious reasons. The patient-centered care approach (PCC) was adopted as the theoretical anchor of this study. Patient-Centered Care Patient-centered care (PCC) is the preferred approach to care in health systems (Lusk & Fater, 2013). PCC is associated with higher patient safety, higher quality of care, better clinical outcomes, higher patient satisfaction, higher quality of life, higher well-being, and less suffering among patients (Jarrar et al., 2019; Rathert et al., 2013). While the biomedical model focuses on COVID-19 in the patient's body, the PCC model focuses on understanding the patient's perceptions, expectations, feelings, and anxieties as an individual (Venhuizen, 2019). PCC emerged out of the limitations of the conventional 'biomedical model' (Epstein & Street, 2011). PCC recognizes the patient's psychological and social needs, respecting each patient's cultural values, beliefs, and preferences (Lusk & Fater, 2013; Kitson et al., 2013; Voshaar et al., 2015). PCC advocates flexible healthcare, entailing a shift away from fragmented institution-centered care to integrated, patient-tailored care that aims at meeting patient needs (Delaney, 2018). PCC shifts the emphasis from body care to total care; it integrates health care and provides physical comfort and emotional support (Kitson et al., 2013). PCC considers the patient's point of view and circumstances and is characterized by high responsiveness to patient needs, beliefs, and preferences, using the patient's informed wishes to guide end-of-life activity (Jarrar et al., 2019; Rathert et al., 2013). This study explores the dynamics of death resulting from COVID-19, death practices and rituals among religious Israeli Muslims, and their grief process. The research questions were: (a) What are the main causes of clashes over Israeli Muslim religious death rituals in cases of death from COVID-19? (b) What is the communal experience surrounding death from COVID-19? Ethical Approval The ethics committee at the academic institution with which the second author is affiliated granted ethical approval for this study (IRB #1037). Participants signed an informed consent form regarding participation and publication. To protect anonymity and confidentiality, demographic data is presented only at the group level (Morse, 2007). The informed consent form stated that participation is anonymous and confidential, and that the participant may stop the interview at any stage.
Sampling Based on our experience of interviewing insular communities, we considered the unique aspects of the culture in shaping the design, sampling, and data collection. We considered values of modesty, speech codes, and the need for endorsement of Muslim religious leaders (Rier et al., 2008). Asking for the Imam's endorsement for our study was the first step to facilitate agreement of religious Muslims to participate in the study. Without this endorsement, we would have had no cooperation from the community. Following the Imam's endorsement, moderators from the community explored the willingness of community members to participate. Procedure The Imam connected us to several moderators from the community, who contacted potential participants. We shared the purpose of the study with the moderators and asked them to distribute an invitation among community members, inviting them to participate in a study on the refusal of religious Muslims to be hospitalized with COVID-19 due to religious reasons. Those interested in participating shared their contact information with the moderators, who then sent the information to the second author, a secular Muslim. The second author contacted each participant by phone, explained the goal of the study, scheduled a face-to-face, in-depth open-ended interview with each participant, and sent participants an informed consent form through WhatsApp. Participants were asked to have a family member help them sign an informed consent form and email it to the second author. They were also asked to have a family member help them connect to the ZOOM link for the interview. We determined the study sample size using the information saturation approach (Malterud et al., 2016). Participants were 32 religious Muslim elders, 29 males and three females, age 73-85, with an average age of 79. The number of children at home ranged from 10 to 17. Participants were from four villages in Northern Israel: North Sakhnin, Arraba, Deir Hanaa, and Eilabun. Table 1 presents demographics of participants. Interviews The interview opened with a greeting and wishes for good health. The second author thanked the participants, read the information from the informed consent form, explained the goal and the methodology of the study, and explained their right to stop the interview at any time. He promised confidentiality and anonymity, asked for their permission to record interviews, and then asked participants if they were still willing to participate. The second author emphasized that there would be only one general question to which the participants may respond as they deem appropriate (Josselson, 2013). The interview began with a few minutes of casual conversation to help engage the participant and lasted about 45 minutes on the Zoom platform (Robert et al., 2020). The interview question was: "Please describe your thoughts regarding Muslims' refusal to be hospitalized, should you, God forbid, be severely ill with COVID-19." The atmosphere was pleasant, and the second author tried to avoid any verbal and non-verbal judgement. The second author recorded the interviews, which were then transcribed. To mask participants' identity, before transcribing the interviews, we assigned each participant a code. Data Analysis We performed thematic analysis, a qualitative method that fits well with our epistemology and with the research questions, for identifying, analyzing, organizing, describing, and reporting the themes within data (Nowell et al., 2017;Saldaña, 2021).
Thematic analysis is effective for exploring the perspectives of the participants, for highlighting similarities among them, and for generating unanticipated insights (Nowell et al., 2017). We aimed at crystallizing thoughts of participants regarding the clashing of hospitalization in COVID-19 with Muslim death rituals. We independently familiarized ourselves with the data, reading it again and again, and generated initial code descriptions through coding (Saldaña, 2021). Themes are units derived from patterns, such as recurring meanings, feelings, and perceptions (Taylor & Bogdan, 1984). We independently searched for themes and reviewed them. The data analysis process was iterative, reflective, and developed over time, involving constant moving back and forward between analysis phases. We identified themes and patterns of perceptions that emerged from the data through six analytical steps: 1. We independently read and re-read the interviews and listed patterns of perceptions of clashes between death from COVID-19 and traditional rituals underlying the refusal to be hospitalized. 2. We identified all data that related to the patterns already classified. 3. We placed all data of a specific pattern with the corresponding pattern. 4. We combined related patterns and categorized them into subthemes to obtain a comprehensive view of the patterns that emerged regarding perceptions of death from COVID-19 vis-à-vis traditional rituals. 5. We pieced together themes in a meaningful way to form a comprehensive picture representing the participants' viewpoint (Saldaña, 2021). 6. By referring to the theory of PCC and disenfranchised grief, we gained information that allowed us to make inferences from the interviews regarding how COVID-19 deaths clash with traditional death rituals and the perceived communal processes of grief. The themes and categories we generated conveyed the meaning that participants intended to make. We independently identified links between the themes; produced a list of main themes which captured participants' main concerns; and presented evidence in words from the interviews. We marked elements derived from patterns such as recurring meanings and feelings also as themes (Saldaña, 2021). By bringing together elements of perceptions, which are often meaningless when viewed alone, we were able to make sense for the specific context of the study. We pieced together themes in a meaningful way to form a comprehensive picture representing participants' interpretation of death and grief in COVID-19. The interviews revealed unanticipated themes, facilitating an in-depth understanding of the perceived reality among participants in this extreme health crisis. The unstructured interviews relied on the interviewee's subjective, spontaneous responses to the question, enabling an understanding of their perspectives without imposing any prior categorization which might narrow the field of inquiry (Josselson, 2013). Following data analysis, translation was done from Arabic to Hebrew and to English. Quality Criteria Since qualitative research encompasses the perspective of the researchers rather than objective reality, as they are the human instruments making judgments about coding, theming, decontextualizing, and recontextualizing the data, we ensured that the coding creates trustworthiness through credibility, transferability, dependability, and confirmability (Guba & Lincoln, 1994). Each author independently analyzed all interviews and identified themes and subthemes in the data.
Each author independently identified links between the themes; produced a list of main themes which captured participants' main concerns; and presented evidence in words from the interviews. Any disagreements resulted in omission. We each recorded the study logistics, our methodological decisions, our personal values, our reflections after each of the interviews, and our insights (Guba & Lincoln, 1994). We acknowledged our privileged position as secular academic researchers who enjoy a relationship of mutual trust with moderators from the Muslim minority, who distributed our invitation to participate in the study. To support the transferability of the findings, we provided dense descriptions of the points of view of interviewees. To assure reliability, we analyzed all interviews independently, identified themes and subthemes in the data, and omitted any theme we disagreed on. Last, interviews were anchored within three contexts: the broad context, the micro-context, and the immediate context. The broad context is political tension between the Muslim minority and the government. The micro-context is the high infection and mortality rates among the Muslim minority. The immediate context was the "here and now", which may have also affected the interview content, particularly the neutral academic identity of the interviewer and his being Muslim. Findings All participants talked with pain about hospitalizations of friends and relatives in which there were no visits, no family members at the time of death due to fear of being infected with COVID-19, death anxiety, and failure to perform religious rites at death. Findings are presented by practices around death rituals, which are absent when members of the Muslim community die at the hospital. Practices are the purification of the body, spiritual aspects of the purification, the shrouding of the body, the burial, the funeral, and the will. The numbers in parentheses stand for the age of the interviewee. Purification. Many respondents talked about the lack of the ritual of purification of the deceased as a disaster, inhibiting a respectful meeting of the deceased with God: "The procedure of treating Muslim COVID patients does not honor the deceased in his final way. Purification is performed for Muslims in facilities designated for the care of the deceased but by family members or by the Imam" (Male, 85); "I do not know if the people who care for the body of the deceased read a prayer." (Male, 78); "By the customary process, according to Islamic Sharia, the family travels to release the body from the hospital and returns it home for the whole process of purification by the family. The washing of the corpse must be performed by a Muslim guiding the process, with only men washing a corpse of a man and only women washing that of a woman. The purification should be done solely by Muslims who know all the Islamic Sharia. Who knows who does the purification in the hospital, a Muslim, a Christian, a Druze, or a Jew? So we would rather die at home" (Male, 82); "Until now, families who were not willing to give it up did the purification themselves at the hospital, it is sad, it's terrible" (Female, 82). Spiritual Aspects of Purification. Beyond the physical aspect of the ritual, there is a strong spiritual aspect of the purification involving the Imam and family reciting verses from the Qur'an: "The Imam reads verses from the 'Shahada', such as "I bear witness that there is no god to be worshiped but Allah, and that Muhammad is the Messenger of Allah".
Anyone who admits that the 'Shahada' is true is guaranteed a place in heaven. These are the last words of every Muslim before dying, so that God will forgive the person for all his sins. The family must remind the patient before dying to lift the right finger and say the verse" (Male, 79); "God needs to give permission for the soul to die at a predetermined time." (Male, 73); "No matter the cause of death, the time and place of death are set on your first day of birth, life is temporary and a probationary period for man. Death is when the soul transitions from the present material world to the pure spiritual world" (Male, 82). The Shrouding of the Body. Participants described the covering of the body in layers with a special fabric as part of their heritage, which they expect to accompany the treatment of their body when they die: "The deceased man will be bathed and dried only by his Imam and the family. Men will be wrapped in three sheets of cloth, a reminder of the sheets the Prophet was wrapped with after his death. Women will be wrapped in five sheets of cloth for modesty. It is despicable to wrap them in a black plastic bag, as if they are trash, as they do in the hospital. The plastic bag may tear when the body is lowered into the grave. We are supposed to meet the kings and God naked, in purity, to honor God" (Male, 79); "A deceased Muslim is supposed to wear white shrouds as a symbol of love or of a new life in heaven. At the hospital in COVID-19 the body is put in a black bag. A person is supposed to return to God as white and clean as on the day he was born, with the neck, arms, and legs bound as required. At the time of burial, the knots made earlier should be loosened, but those who die from COVID-19 are buried in a black plastic bag" (Male, 82); "I made my death shrouds in green, symbolizing heaven. I sewed it and also made a pillow out of hair that I pulled out while combing my hair. I kept it for this day all my life to sleep on in the grave." (Female, 78); "I want to say goodbye to my family with dignity, at the hospital the family sees the body only at the time of death." (Female, 80); "I prefer to die at home and be buried when I die in the pure and modest garments that I wore while on Hajj in Mecca to observe the holiday. It is one of the most important deeds in Islam, in a hospital, they will bury me in a black bag, against Islamic Sharia. I will not disobey my God." (Male, 82). The Burial. The interviewees viewed burial without saying goodbye to family, friends, and the community as a disaster. Interviewees talked about burial without an Imam in attendance, the lack of prayers, and the time delay until the burial. "The shocking thing is that people were buried without the participation of our Imam, who did not attend the burial ceremony. Before the burial the Imam asks all present to sit and then gives the deceased the answers to the questions that angels will ask him: Who is your Lord? What's your religion? What is your book? And who is your prophet? The Imam asks God to forgive the deceased and bring him to heaven" (Male, 72); "A ceremony held according to the guidelines of the Ministry of Health forbids approaching the body and the deceased is buried as soon as possible. Sons cannot stay as customary near the grave of the deceased." (Male, 76). Interviewees stressed the lack of prayers as a lack of peace: "I want to die according to Islamic Sharia, by which before the funeral, family members go to the cemetery and dig a grave plot.
When the funeral procession arrives, the coffin is placed on the ground. We accept death without an outburst of emotion because it is the will of God who forgives sins, especially if the declaration of faith ('Shahada') is said before the departure of the soul. At home the dying person lies on the right side and directs his face south to Mecca, requiring moving the bed. At the hospital there is no time and they do not understand the sharia." (Male, 79); "The participants in the funeral turn their faces towards Mecca and recite the funeral prayer. But this burial process doesn't exist in the pandemic. No one understands what they are doing." (Male, 80); "I want the Imam to read the Qur'an from beginning to end on the 3 days of mourning. The Divine rewards for the recitation of the Quran ascribe credit to the deceased on the Judgement Day. It is customary that the relatives of the deceased divide the reading among them. You need at least 30 relatives to do that, but in COVID there will not be enough people to recite the Qur'an." (Male, 85). The time until the burial was also very concerning to participants: "It is customary to bring the dead body for burial as quickly as possible at any time of day without putting the corpse in a refrigerator." (Male, 80); "The deceased will be buried only at his place of residence immediately after the body is purified. His eyes must be closed, his body washed and perfumed with incense. The dream of every religious Muslim is to die during the observance of the Hajj commandments facing the holiest place for Islam, Mecca, it is a death pure of all sins." (Male, 83); "I feel that we as a community do not insist on giving the proper respect before death. It is all in the hands of Allah even though the doctors determine how much time is left for the patient to live. We believe that the body belongs to God and any unnecessary suffering, like waiting in a refrigerator until the burial, must be avoided. Delaying the treatment of the body can break the bones when straightening the limbs. It is like breaking a living bone and it is forbidden by Islamic Sharia." The Funeral. Following the cleansing ritual, all family members can say their last goodbyes and ask for forgiveness, and the Imam from the Mosque says the final words of prayer. Death from COVID-19 in the hospital inhibits the performing of the ritual both by law and by practice: "To die without the Imam of the mosque who prays for you is very difficult for a religious Muslim. When is he buried? What time? Who is to transport the body to the cemetery? The whole purification process should be done at home by the family and Imam, but now the company caring for the body just brings the corpse to burial" (Male, 82); "When death is from COVID-19, it is strictly forbidden for relatives to touch the body before the burial, it is shocking, there is no asking for forgiveness, nor reading verses from the Qur'an. I never thought that could happen. The body arrives in a windowless closed ambulance. It is prohibited to climb into the ambulance. Staff who transport the corpse protect themselves from head to toe, informing the cemetery staff that this is a corona patient, and no purification process is performed." (Male, 79); "My friend's son had not seen his father for more than a month and a half, when they arrived at the hospital, they found him dead, with a long beard which he never had. They could barely recognize him. He was neither clean nor well-groomed, there was a puddle of blood under his sheet.
The disrespectful funeral was limited to 10 people. …It is shocking to say a last goodbye to someone who did so many good deeds for all the village, with just a few people accompanying him in his last journey, instead of thousands…. [Quiet]. The silence and restraint at such funerals are horrible. I cannot stop thinking about it. It is no longer possible to die with dignity" (Male, 85); "Instead of people coming to comfort the family, family members are each alone for the bereavement process, lonely, in their own pain." (Female, 78). The Will. Participants reported that the deceased leaves a will that may be difficult to share if they are hospitalized: "The will of the deceased is to unite the family members and take care of the property and lands after death. Death in the hospital means that only sons will say their goodbyes as in Islam, daughters are considered weak, do not control their emotions when they see father or mother dying. The men are the strong figure in Muslim society" (Female, 78); "The male adult in the family, the father or husband, must be notified of the death. Notification to a person younger than him or to a woman harms the accepted family structure." (Male, 79); "In Islam, every dedicated Muslim is required to complete the pilgrimage to Mecca, at least once in his life. Many Muslims save all their lives to afford the journey to Mecca. Since, due to the situation, a lot of religious Muslims did not go on Hajj, they may ask a son, as part of the will, to do it in their name. You cannot do that if you die in isolation at a COVID-19 ward at hospitals. You cannot ensure that you fulfill your religious duty as part of the will" (Female, 78). Discussion This qualitative thematic study explored death from COVID-19, death rituals, and grief and bereavement among Northern Israeli religious Muslim elders. This study makes several contributions. Theoretically, this study links death of Israeli Muslims from COVID-19 with PCC, highlighting disenfranchised grief due to the clash of health authority guidelines with religious death practices. Methodologically, this narrative study gives voice to the perspectives of elder religious Muslims in Israel. Practically, this study suggests recommendations to implement PCC in COVID-19 deaths and to apply the cultural perspective to enable a healthy bereavement process. Disenfranchised Grief Due to Lack of Death Rituals in COVID-19 Deaths Among Israeli Muslims Findings indicate that due to COVID-19, funerals and ceremonies have been significantly altered with no normal face-to-face interactions. Funerals were limited to a few mourners, with no reciting of prayers, and no opportunity to position the body facing towards Mecca. Findings also indicate that burial practices changed profoundly in COVID-19 deaths, without the purification and shrouding of the body, without permission to look upon the face of the deceased before burial, with the emotional difficulty of having the body of the deceased packed in plastic bags and taken to the cemetery for burial without clergy praying for the deceased. Findings show that death from COVID-19 deprives the deceased of a chance to say goodbye properly; because of the highly contagious nature of the virus, spouses, children, siblings, and friends were forbidden to enter the hospital. Death from COVID-19 left families and friends with guilt, sadness, distress, feelings of neglect, the sense that the body was attended to in a dehumanized way, and disenfranchised grief.
Findings suggest that the refusal of ill Muslims to be hospitalized stems not only from their preference to die in their natural environment rather than in isolation at the hospital, but also from fear of being deprived of the traditional death rituals. This denial of traditional practices is perceived as infringing upon communal structural duties and as jeopardizing forgiveness and a peaceful welcoming by God. Bereaved people suffer as they witness the clash between official procedure and their religious practices accompanying the "bad death" of their loved one. This clash may cause additional pain and loneliness, as the increased social restrictions of COVID-19 further compound the poor quality of the dying experience. Theoretical Implications Findings extend the existent knowledge regarding death in pandemics. Religious Muslim individuals and families are unable to follow traditional, religiously mandated "rules and practices," they are unable to practice death rituals in burial and funeral services, unable to grieve with social support, and therefore disenfranchised grief takes place at the family and the community level (Doka, 2002;Ramadas & Vijayakumar, 2021;Wallace et al., 2020). Funerals during the COVID-19 pandemic are emptied of memorial activities for the community as a supportive social network, conversation with other mourners, and the sacred site of religion, all emptying meaning for the bereaved (Alcorn, 2020;Hamid & Jahangir, 2020;Imber-Black, 2020). The marginalization of the bereaved from religious minorities who must grieve alone may further exacerbate coping difficulties. Public recognition of some people's loss and grief may also be neglected during the pandemic, as not everyone's loss and grief can be acknowledged due to the proliferation of deaths. As such, feeling unentitled and unsupported to publicly share and cope with grief may exacerbate disenfranchisement (Davies, 2017;Doka, 1989;Horowitz & Bubola, 2020;McCann, 2020;O'Rourke et al., 2011). The absence of an ongoing traditional structure for mourning creates ambiguity and distress (Doka, 1989). Disenfranchised grief in bereaved families and the community may cause moral distress or secondary traumatic stress (Arslan & Buldukoglu, 2021;Doka, 2002). Risks due to disenfranchised grief are expected at both the individual and societal level during and after the COVID-19 pandemic: mental distress, dysfunction, poor health (Bigelow & Hollinger, 1996;Doka, 1993;Holst-Warhaft, 2000). Among members of religious minorities, when communal grief and loss are disenfranchised, effective bereavement is disrupted and polarization may deepen, further alienating the community from the general society. Findings contrast with previous research that suggested a negative relationship between religiosity and death anxiety (Saleem & Saleem, 2020;Wen, 2010). We found that fear of death was more related to the disenfranchisement of grief and infringement on traditional death rituals than to death itself. Bereaved people from minority religious cultures may feel degraded, powerless, and isolated, contradicting the essence of the PCC approach. The PCC approach calls to understand the refusal to be hospitalized from a socioreligious perspective and to actively support respect for diversity through multicultural alternatives to death rituals (Husni et al., 2020).
Significant challenges and risks when dealing with death and grief could be expected at both the individual and societal level, as COVID-19 may cause not only severe physical damage, mental distress, disorder, and dysfunction within society, but also disrupt the lives of individuals and society (Bigelow & Hollinger, 1996;Holst-Warhaft, 2000). In the context of PCC, just as healthcare delivery to the living is culturally adapted, religious death rituals may also be culturally adapted through flexible regulations that enable various cultural and religious core death practices to be performed within the constraints of the pandemic. Practice Implications Health authorities have an important role in providing for the cultural needs of dying religious Muslims and their loved ones. In an increasingly diverse global religious landscape, the honoring of the deceased should be fundamental, particularly since much knowledge has been developed regarding the virus (Mortazavi et al., 2021). Health authorities are called upon to develop a dialogue between Imams, undertakers, funeral directors, and family members, as essential to devising respectful, safe alternatives to traditional rituals of death and effective bereavement. It is of utmost importance that families feel able to honor their dead to prevent their experiencing considerable guilt because they were unable to perform the proper religious rituals. Communities should be able to support and express solidarity with the bereaved family (Aly, 2010). Bereavements in their community should be facilitated while still maintaining physical distancing measures. Prayers for the deceased may be organized via technology platforms. A collaborative effort should aim at establishing rituals that adhere to Sharia regulations during the pandemic. Successful modifications will acknowledge the symbolic act of the practices and enable respectful bereavement. Modifications of the washing, shrouding, burial, and funeral have an important role in supporting PCC. Congruent with PCC, governmental bodies and professional organizations are called upon to adopt more flexible approaches toward traditional rituals to support the bereaved (Albuquerque et al., 2021). Potential interventions are to (a). Avoid feelings among the bereaved that the loss is not acknowledged by the community, validating emotions of the bereaved and gaining awareness of potential coping skills that accord with their religious values and beliefs. (b). Brainstorm for strategies of maintaining closeness and communication among a close-knit small Muslim support network. (c). Help members access resources to help them plan for practical needs after the death. (d). Provide access to grief support. (e). Enhance self-care of elders in this challenging time. (f). Promote contact with other people going through the same experience to provide comfort, create a sense of belonging, and reduce isolation. The identification with other mourners of COVID-19 victims could also help them enfranchise their grief, further contributing to developing mutual understanding and a sense of belonging in the face of meaninglessness and isolation. (g). Community members may join together to create platforms for members of all age groups to express their sorrow for collective loss and individual deaths, restoring their social religious identity.
The development of community-based support for all age and gender groups may provide an invaluable model for mutual understanding and support among bereaved people with similar backgrounds and experiences. Directions for Future Studies Future studies may explore the experience of Muslim religious clergy in COVID-related deaths. Also, future studies may test how the suggested interventions affect the grief process of family members and of communities. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article.
Acute and subchronic toxicity studies of the original drug FS-1 Interest in iodine complexes has increased significantly in recent years because of their wide spectrum of biological activity. FS-1 is an ion nanostructured complex formed by proteins and/or polypeptides, carbohydrates, salts of alkali and alkaline earth metals with intercalated iodine. Patented in 2014, it is intended for the treatment of infectious diseases of bacterial origin including nosocomial infections and multidrug resistant tuberculosis. The aim of the study was to determine its acute and subchronic toxicity. The study of acute and subchronic toxicity was performed on adult Wistar rats according to OECD guidelines. The data on acute toxicity showed LD50 > 2,000 mg/kg after a single intragastric administration. Twenty-eight days of FS-1 administration at a dose of 500 mg/kg resulted in toxic effects. At a dose of 250 mg/kg, the toxic effects were temporary and a return to normal followed after the recovery period. Doses of 100 mg/kg had no adverse effects on the rats. Keywords: Antibacterial agent, lethal dose, iodine complexes, prolonged administration, rats. Most often used in medical practice, iodine complexes containing the ligands polyvinylpyrrolidone (PVP-I), polyvinyl alcohol and polysaccharides are called iodophors (Gottardi 1991). Iodine coordinated with organic macromolecular ligands exhibits more stable characteristics and lower toxicity than solutions of molecular iodine with potassium iodide (Gottardi 1991; Navikaite et al. 2013). Despite their diverse biological activity, the use of iodine and its complexes is limited by their relative instability and resulting toxicity (Glick et al. 1985). Based on data from clinical cases of iodine poisoning, lethal doses range between 12 and 120 mg/kg (WHO 2009). Furthermore, the presence of the internal environment of proteins reduces the biocidal activity of PVP-I (Zamora et al. 1985). Interest in iodine complexes has increased significantly in recent years because of their wide spectrum of biological activity. Studies have been conducted on obtained samples of complexes of iodine with thioamides, selenoamides, and amides (Hadjikakou and Hadjiliadis 2006), cefotaxime sodium and iodine (El-Dien et al. 2009), atenolol and iodine (Pandeeswaran and Elango 2009), organic hyaluronan with inorganic iodine (Brenes et al. 2011), β-carotene, stearic acid, tripalmitin, lysozyme, folic acid, cytochrome C, valinomycin, and gramicidin with inorganic iodine (Solanki et al. 2008), and iodine-lithium-alpha-dextrin (Yuldasheva et al. 2012). The substance FS-1 was patented in 2014 (Ilin and Kulmanov 2014). It is an original antibacterial agent intended for the treatment of infectious bacterial diseases, including nosocomial infections and multidrug resistant tuberculosis. FS-1 is an ion nanostructured complex formed by proteins and/or polypeptides, carbohydrates, salts of alkali and alkaline earth metals with intercalated iodine. The proteins and/or polypeptides nanostructured ion complex contains at least one terminal amino acid with electron-donating functional groups (Ilin and Kulmanov 2014).
The aim of the study was to determine acute and subchronic toxicity of the original FS-1 substance, and to determine the possibility and place of reversibility of its toxic effects in rats. Materials and Methods Test solution An FS-1 working solution was prepared by dissolving FS-1 in distilled water immediately prior to its administration to animals. The amount depended on the body mass of animals involved in the experimental groups. The working solution was administered orally by gavage at a volume of 1 ml. The solvent for the test substance (distilled water) was used as a negative control. Animals A total of 92 animals of both sexes, weighing 180–200 g, were used. According to the OECD Guideline 2002 for the Testing of Chemicals No. 423 "Acute Oral Toxicity – Acute Toxic Class Method", the preferred rodent species is the rat, and 6 animals should be used for each dose. Females are generally slightly more sensitive than males. Thus, 12 healthy adult female Wistar rats were used in the acute toxicity experiment. According to the OECD Guideline 2008 for the Testing of Chemicals No. 407 "Repeated Dose 28-Day Oral Toxicity Study in Rodents", the preferred rodent species is the rat. A total of 40 males and 40 females of adult Wistar rats were used in the subchronic toxicity experiment for testing of 4 groups (20 animals per group, 10 females and 10 males). All the animals were kept in individually ventilated cages (IVCs, Tecniplast, Italy). Room temperature and humidity were maintained at 22 °C (± 3 °C) and 45–60%, respectively, with a light-dark cycle of 12 h (light from 07:00 h to 19:00 h). The animals were fed commercially available standard pellet chow (Ssniff) and water was supplied ad libitum. Animals were sacrificed as per rules of humane treatment of laboratory animals in a CO2 chamber, containing 70% CO2 at a flow rate of 30 litres per min. Animal experiments were approved by the local Animal Ethics Committee of the Scientific Center for Anti-Infectious Drugs, in accordance with the Kazakhstani law (PHT-007/1). Acute toxicity study The initial FS-1 dose of 300 mg/kg was selected on the basis of the US Environmental Protection Agency value for iodine LD50 (315 mg/kg) in rats when administered intragastrically (US Environmental Protection Agency 2006). Dose searching continued until a dose was found at which marked signs of toxicity in several animals were observed or the loss of no more than one animal in the group occurred. Animals were kept under observation for 14 days, for the first day every 2 h, and each following day every 12 h. Animal body weight was measured weekly, starting with the first day of the study. Macroscopic organ analysis was carried out upon completion of the study. The toxic effects of the drug were evaluated by the general state of the animals and their survival (LD50). Subchronic toxicity study Based on the OECD, the following doses were selected for this study: the highest dose that causes observable, non-fatal toxic effects, 1/4 LD50 (500 mg/kg); the mean dose, 1/8 LD50 (250 mg/kg); and the lowest dose (100 mg/kg). Experiments were carried out on four groups of rats comprised of 20 animals (10 males and 10 females). The drug was administered orally (by gavage). The volume of administration of the test substances was calculated for each animal according to its body mass. The administration was carried out once a day in the morning, six days per week.
The total period of administration of the drug was 28 days, followed by a 28-day recovery period. Animal observation and body mass measurements were performed once a week. Animals were sacrificed after 28 and 56 days. The following indicators were selected as a test to characterize the state of the animals under the influence of the test substance: 1) animal death; 2) evaluation of the general condition of the animals and appetite; 3) the nature of motor activity; 4) the occurrence and nature of seizures; 5) the state of hair and skin; 6) the state and colour of the mucous membranes; 7) a change in body weight. Biochemical variables (total bilirubin, alkaline phosphatase, albumin, total protein, urea, creatinine, aspartate aminotransferase, and alanine aminotransferase) were measured to characterize the functional state of internal organs using the BioSystem A 25 biochemical analyzer (Spain). Haematological indicators were determined using the HumaCount (Germany). Histopathological examination was conducted in all animals included in the experiment. Statistical analysis The mean value (x) and standard deviation (SD) were calculated for each variable measured and analyzed statistically by analysis of variance (ANOVA) to determine significant differences between groups at P < 0.05. Calculation of the body weight gain (BWG) was produced by the following formula (1), where P1 is the mean weight of the animal at the end of the experiment and P2 is the mean weight of the animal at the beginning of the experiment.
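The body of formula (1) was lost in extraction; only the definitions of P1 and P2 survive. A minimal reconstruction, assuming the standard percentage body-weight-gain expression consistent with those definitions (an assumption, not a formula confirmed by the source), is:

$$\mathrm{BWG}\,(\%) = \frac{P_1 - P_2}{P_2} \times 100$$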
423 "Acute Oral Toxicity -Acute Toxic Class Method", the preferred rodent species is the rat, and 6 animals should be used for each dose.Females are generally slightly more sensitive than males.Thus, 12 healthy adult female Wistar rats were used in the acute toxicity experiment. According to the OECD Guideline 2008 for the Testing of Chemicals No. 407 "Repeated Dose 28-Day Oral Toxicity Study in Rodents", the preferred rodent species is the rat.A total of 40 males and 40 females of adult Wistar rats were used in the subchronic toxicity experiment for testing of 4 groups (20 animals per group, 10 females and 10 males). All the animals were kept in individually ventilated cages (IVCs, Tecniplast, Italy).Room temperature and humidity were maintained at 22 °C (± 3 °C) and 45-60%, respectively, with a light-dark cycle of 12 h (light from 07:00 h to 19:00 h).The animals were fed commercially available standard pellet chow (Ssniff) and water was supplied ad libitum.Animals were sacrificed as per rules of humane treatment of laboratory animals in a CO 2 chamber, containing 70% CO 2 at a flow rate of 30 litres per min.Animal experiments were approved by the local Animal Ethics Committee of the Scientific Center for Anti-Infectious Drugs, in accordance with the Kazakhstani law (PHT-007/1). Acute toxicity study The initial dose of FS-1 300 mg/kg was selected on the basis of the US Environmental Protection Agency for iodine LD 50 (315 mg/kg) in rats when administered intragastrically (US Environmental Protection Agency 2006).Dose searching continued until a dose was found at which marked signs of toxicity in several animals were observed or the loss of no more than one animal in the group occurred.Animals were kept under observation for 14 days, for the first day every 2 h, and each following day every 12 h.Animal body weight was measured weekly, starting with the first day of the study.Macroscopic organ analysis was carried out upon completion of the study.The toxic effects of the drug were evaluated by the general state of the animals and their survival, LD 50 . Subchronic toxicity study Based on the OECD, the following doses were selected for this study: the highest dose that causes observable, non-fatal toxic effects 1/4 LD 50 (500 mg/kg), the mean dose 1/8 LD 50 (250 mg/kg), and the lowest dose (100 mg/kg). Experiments were carried out on four groups of rats comprised of 20 animals (10 males and 10 females).The drug was administered orally (by gavage).The volume of administration of the test substances was calculated for each animal according to its body mass.The administration was carried out once a day in the morning, six days per week.The total period of administration of the drug was 28 days, followed by a 28-day recovery period.Animal observation and body mass measurements were performed once a week.Animals were sacrificed after 28 and 56 days.The following indicators were selected as a test to characterize the state of the animals under the influence of the test substance: 1) animal death; 2) evaluation of the general condition of the animals and appetite; 3) the nature of motor activity; 4) the occurrence and nature of seizures; 5) the state of hair and skin; 6) the state and colour of the mucous membranes; 6) a change in body weight. 
Acute oral toxicity Animals treated with FS-1 at a dose of 300 mg/kg did not show any toxicity symptoms or mortality (n = 6). Therefore, the next dose selected was 2,000 mg/kg. After oral administration of FS-1 at a dose of 2,000 mg/kg, death was not observed in the rats (n = 6). Toxic symptoms in the first hours of the experiment were observed in the form of animals huddled in groups and a dramatically increased response to external stimuli (noise). All symptoms disappeared completely after 6 h. The study of the dynamics of body weight after FS-1 administration showed no weight loss. Macroscopic examination of the internal organs of experimental animals after necropsy did not reveal any abnormalities. Under the OECD guideline No. 423, the study was stopped at the maximum dose of 2,000 mg/kg. Subchronic oral toxicity In accordance with the recommendations of OECD No. 407, the study was conducted on laboratory rats for 28 days, followed by a 28-day recovery period. No death was observed during the 28-day administration of FS-1. During observation of the animals, a satisfactory state was found in the animals treated with the test substance at a dose of 500 mg/kg (in 4 of the 20 animals the physical activity was low, there was confusion, and the animals huddled in a group). After discontinuing administration of the test substance, the appearance of experimental animals returned to that of the control group. In both groups where animals were given a dose of 250 mg/kg and 100 mg/kg, respectively, animal behaviour did not differ from the control group and no clinical signs of iodine poisoning were observed. While analyzing the results of changes in the body weight, a slowing of the weight gain in rats treated with FS-1 at a dose of 500 mg/kg was observed. Animals in this group exhibited a significant decrease in the body weight relative to the control group (P ≤ 0.05). The body weight gain remained unchanged in rats treated with FS-1 at doses of 250 and 100 mg/kg. After discontinuing the FS-1 administration, the rate of weight gain remained the same and matched that of the control group of animals (Table 1). At 29 and 56 days from the start of the experiment, blood sampling was conducted on the animals for further haematological and biochemical research.
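To make the body-weight comparison concrete, the sketch below shows how per-group body-weight-gain values could be compared by one-way ANOVA at the P < 0.05 level, as described in the Statistical analysis section. The per-animal weights and the helper function bwg_percent are hypothetical illustrations, not data or code from the study, and the BWG formula used is the assumed reconstruction of formula (1).

```python
# Minimal sketch: body-weight-gain (BWG) calculation and one-way ANOVA
# comparing dose groups with the control. All numbers are hypothetical
# illustrative values, NOT data from the study.
import numpy as np
from scipy.stats import f_oneway

def bwg_percent(initial, final):
    """Percentage body weight gain, assuming formula (1) is (P1 - P2) / P2 * 100."""
    initial = np.asarray(initial, dtype=float)
    final = np.asarray(final, dtype=float)
    return (final - initial) / initial * 100.0

# Hypothetical per-animal weights (g) at day 0 and day 28 for each group
control  = bwg_percent([182, 190, 185, 188], [262, 270, 258, 266])
dose_100 = bwg_percent([184, 187, 191, 180], [260, 265, 269, 255])
dose_250 = bwg_percent([186, 183, 189, 192], [255, 250, 262, 264])
dose_500 = bwg_percent([181, 188, 184, 190], [228, 235, 230, 240])

# One-way ANOVA across groups at the alpha = 0.05 level
f_stat, p_value = f_oneway(control, dose_100, dose_250, dose_500)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
print("Significant difference between groups" if p_value < 0.05
      else "No significant difference between groups")
```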
Haematology indicators in the control group of rats did not exceed the physiological range (Giknis and Clifford 2008) for this animal species (Tables 2 and 3). In male rats treated with FS-1 at a dose of 500 mg/kg for 28 days, there was a marked increase in the level of white blood cells and lymphocytes, 9.92 ± 1.32 and 8.35 ± 1.25, respectively (P ≤ 0.05 in both cases). This change was temporary and returned to normal after the recovery period. These changes were not observed in the female group. Peripheral blood examination revealed no significant effect in rats treated with FS-1 at the doses of 250 mg/kg and 100 mg/kg. A decrease in haemoglobin (P ≤ 0.05) only occurred among male subjects who received doses of 250 mg/kg, but these changes in the composition of the peripheral blood were within the physiological range (Giknis and Clifford 2008). Thus, FS-1 at both doses of 250 mg/kg and 100 mg/kg did not change the qualitative and quantitative composition of the peripheral blood. Blood biochemical indicators were investigated to detect metabolic abnormalities in rats under the influence of FS-1 (Tables 4 and 5). A significant increase in hepatic indicator profiles, such as alanine aminotransferase, aspartate aminotransferase, and total bilirubin, was observed under the influence of FS-1 at a dose of 500 mg/kg (P ≤ 0.05). This rise was observed in both male and female groups. Based on the biochemical indicators of this group of animals, it can be concluded that females were more sensitive than males to the substance at the test dose. This conclusion was confirmed by the data obtained in the evaluation of renal and hepatic function in females. The renal load in females at a dose of 500 mg/kg was evaluated based on increases in creatinine and urea. A slight decrease in albumin in the group of females treated with FS-1 at a dose of 250 mg/kg was observed. This indicator was significantly different from the control but was within the physiological range for this animal species. The changes were temporary and no significant deviations were observed in relation to the same biochemical blood indicators in the negative control group of animals after the recovery period. Biochemical indicators of the blood serum of animals treated with FS-1 at a dose of … Two types of follicles, round or oval, and irregular shapes were observed under microscopic examination of histological sections of the thyroid gland of the control group animals (Plate III, Fig. 1). The colloid was weakly eosinophilic or partially basophilic. Epithelial cells were observed with a flattened shape or slightly increased in volume. The follicle size increased in the rats treated with FS-1 at a dose of 500 mg/kg (Plate III, Fig. 2). An irregular, reduced form of follicles was observed, along with a rounded shape to the follicle. Thyrocytes freely positioned in the cavity of the follicle lost connection with the basement membrane. There was an apical part of thyrocytes with cytoplasmic outgrowths, facing into the cavity of the follicle. Proliferative activity of thyrocytes was observed. Nuclei of thyrocytes were reduced in size, and chromatin was condensed. Some follicles showed atrophic changes. Colloid resorption was also present. Vessels of capsules and partitions were extended. Large mast cells and interstitial oedema were observed. The elements of the liver triad, central veins and radial liver beams were not broken in histological sections of the liver controls (Plate IV, Fig. 3).
At higher magnification, focal expansion of the lumen of Disse, mainly in the periportal zone, was marked. Cell nuclei were round, and some cells had two nuclei. There was a focal activation of Kupffer cells. In the group treated with FS-1 at a dose of 500 mg/kg, two types of hepatocyte nuclei were observed: one group of light-coloured nuclei with small nucleoli, while other nuclei were dark-coloured. Hepatocytes with two nuclei were also observed. Hepatocytes were changed dystrophically, including fatty dystrophy in the shape of a small droplet, and also a focal large droplet (Plate IV, Fig. 4). There was an enlargement of the space of Disse, sinusoid congestion, and activation of Kupffer cells indicating the appearance of lymphohistiocytic infiltration of focal perivascular oedema. Histological sections of the kidneys of rats in the control group showed that the cortical substance was represented by renal corpuscles, and the urinary space was clearly visible (Plate V, Fig. 5). Dark-coloured proximal tubular epithelial cells, with somewhat muddy cytoplasm and basally located nuclei, constitute the main share of the cortical substance. (Table abbreviations: WBC - white blood cell count; LYM - lymphocytes; MID - mid-range absolute count; GRA - granulocytes; RBC - red blood cell count; HGB - haemoglobin; HCT - haematocrit; PLT - platelet count; significant at P < 0.05.) Light-coloured distal tubules with a wider space were also observed. In the medulla, the thick parts of the nephron and collecting ducts were visible. Figure 6 (Plate V) shows the kidneys of rats treated with FS-1 at a dose of 500 mg/kg. Vascular glomeruli were unevenly congested, thickened by plasmatic impregnation. The mesangial matrix was increased, and individual glomeruli were hypercellular. The palmate structure of glomeruli appeared in the cavity of the capsule of fibrin masses. Renal tubules exhibited signs of granular dystrophy, and proximal tubules were in most cases constricted. The focal necrosis of epithelial cells was marked. The main share of epithelial tubules was swollen, and the space of the tubule was very constricted as a result of it. There were some vessels with signs of stasis. Histological examination of animals treated with FS-1 at doses of 250 mg/kg and 100 mg/kg showed no difference from the control group of animals. Discussion Data on the acute toxicity of FS-1 on laboratory rats showed that LD50 > 2,000 mg/kg after a single intragastric administration. Thus, FS-1 can be attributed to Class 5 toxicity, i.e. non-toxic substances. Results of the subchronic toxicity study of FS-1 showed that prolonged exposure to the pharmacological agent at a dose of 500 mg/kg can cause toxic effects. The obtained data showed a slowdown in the body weight gain in the studied rats. Apparently, this effect can be explained by the thyroid-stimulating properties of iodine. It is known that thyroid hormones affect metabolic processes. Excessive administration of iodine in the diet leads to hyperthyroidism (Aakre et al. 2015). Inflammation, anaemia, metabolic acidosis, hyperchloraemia and hyperkaliaemia, acute fibrinolysis, and increased cytolytic enzymes (LDH) are often observed in iodine poisoning. These changes are associated with iodaemia (Kataoka et al. 2006;Lakhal et al. 2011). Apparently, the observed changes in biochemical and haematological blood indicators are associated with high levels of iodine in blood (Glick et al. 1985).
Since the changes in the biochemical and haematological blood indicators in males and females were very small and temporary, it can be concluded that no distinct sex differences were traced after FS-1 administration at the studied doses. High doses of iodine have a direct cytolytic effect on the cells of the gastrointestinal tract, as well as the liver and kidneys. Lower doses do not cause clinical signs of poisoning, nor have an effect on the body through the thyroid system (Sherer et al. 1991;Kataoka et al. 2006;Tsurumaru et al. 2010). On the basis of biochemical and histological studies, it can be argued that the liver, kidneys, and the thyroid gland were the target organs of toxic destruction at a dose of 500 mg/kg. At a dose of 250 mg/kg, the toxic effects were temporary and returned to normal after the recovery period. The dose of 100 mg/kg had no adverse effects on the rats based on the results of clinical, haematological and biochemical studies and necropsy. In conclusion, for FS-1 the dose of 100 mg/kg body weight in both male and female animals is the NOAEL (the highest concentration where no adverse treatment-related findings are observed) and the dose of 250 mg/kg is the LOAEL (the lowest concentration of a chemical used in a toxicity test that has a significant adverse effect on the exposed population of test organisms compared with the controls). Fig. 1. Histological structure of the thyroid gland of rats in the control group. Haematoxylin-eosin stain, × 200 magnification. Table 4. Biochemical indicators of the blood of males after FS-1 administration.
Electrical Conductivity Adjustment for Interface Capacitive-Like Storage in Sodium-Ion Battery The sodium-ion battery (SIB) is significant for grid-scale energy storage. However, the large radius of Na ions increases the difficulty of ion intercalation, hindering electrochemical performance during fast charge/discharge. Conventional strategies to promote rate performance focus on the optimization of ion diffusion. Improving interface capacitive-like storage by tuning the electrical conductivity of electrodes is also expected to combine the features of the high energy density of batteries and the high power density of capacitors. Inspired by this concept, an oxide-metal sandwich 3D-ordered macroporous architecture (3DOM) stands out as a superior anode candidate for high-rate SIBs. Taking Ni-TiO2 sandwich 3DOM as a proof-of-concept, anatase TiO2 delivers a reversible capacity of 233.3 mAh g−1 in half-cells and 210.1 mAh g−1 in full-cells after 100 cycles at 50 mA g−1. At the high charge/discharge rate of 5000 mA g−1, 104.4 mAh g−1 in half-cells and 68 mAh g−1 in full-cells can also be obtained with satisfying stability. In-depth analysis of the electrochemical kinetics evidences that the dominant interface capacitive-like storage enables ultrafast uptake and release of Na ions. This understanding of the relationship between electrical conductivity and rate performance of SIBs is expected to guide future designs for effective energy storage. Introduction To solve the problems resulting from fossil fuel burning, the conversion and utilization of clean renewable energy are of high significance. Owing to the intermittent and unpredictable features of such resources, energy storage technologies have attracted intensive attention for both basic research and practical industry. [1] Among them, the lithium-ion battery stands out as a prominent energy storage technology with high potential for large-scale off-grid applications. [2] However, issues like lithium availability, safety, and cost inspire us to find other alternatives. [3] Since sodium not only shares similar electrochemical properties with lithium, but also possesses several advantages like abundant resources, high safety, and low cost, the sodium-ion battery (SIB) has been considered a promising candidate for energy storage. [4] Many researchers have devoted themselves to corresponding studies of electrode materials, especially in pursuit of satisfactory rate performance, which is all the more important given the demands of fast charge/discharge. [5] Promising methods that do not sacrifice reversibility remain a challenge for Na-free transition metal oxide anode materials with an intercalation mechanism. [6] In general, ion intercalation contains three steps during cycles: solvated Na+ diffusion within the electrolyte; charge-transfer reactions at or near the interface between active materials and electrolyte; and Na+ diffusion in the bulk materials. Hence, the overall sodium-storage capability is attributed to both Na-ion intercalation in the bulk and capacitive storage at or near the interface. The larger radius of Na+ (1.02 Å) versus Li+ (0.76 Å) imposes more complex requirements on electrode materials' crystalline structures during bulk intercalation, [6] which largely hinders performance at high charge/discharge rates. Interface capacitive-like storage occurs on the timescale of seconds to minutes, and thus increasing interface capacitive-like storage is expected to balance high capacity and high rate performance in the same material. [7]
[7] Moreover, interface capacitive-like storage will not result in a huge change of crystalline structures, which would also contribute to the stability during long-term sodium storage. Several methods, like defects or amorphization engineering of materials' crystalline structures [8] and morphology optimization, [9] have been used to enhance the rate performance from improved interface capacitive-like storage and obtained some preliminary results. A deep understanding of interface capacitive-like storage nature would be helpful for developing clear criteria for intercalation-based electrodes. Since the contributions at or near the interface are mainly controlled by surface instead of diffusion, and always accompanied with electron transfer or hopping, [10] sufficient electrical conductivity for fast electron movement might be also significant to add extra capacitive-like sodium storage at or near the interface. [11] This enhancement would be expected to benefit for high-rate charge/discharge in short time. Inspired by this understanding, an active material-metal current collectors and sandwich architecture is an effective candidate to shorten electron transport paths throughout the entire electrode, originated from the high electrical conductivity of the metal skeleton. To evidence this concept more clearly, we chose anatase TiO 2 , a typical intercalation-based material with unsatisfactory rate performance, as an example of active materials. It is facile to deposit TiO 2 uniformly on complex conductive metal skeletons with uniform thickness through suitable approaches, such as atomic layer deposition (ALD). [12] Considering the requirements of electrolyte infiltration and volume expansion accommodation of active materials, a 3D-ordered macroporous architecture (3DOM) is selected as a skeleton candidate. First, interconnected periodic macroporous architectures can provide quasi-1D long-range ordered paths for electron transport throughout the entire electrode, and further enhance the electric conductivity and long-term stability of overall electrodes at the same time. The resulting sandwich architecture consists of uniform anatase TiO 2 and 3DOM Ni skeleton without any additives and binders. A clean interface of active material can efficiently exclude the influences of other phases. Second, 3DOM is incorporated with dual porosity, resulting from controllable templates with plasticity of the diameter and the interconnected area between neighboring spheres, respectively. It is expected to increase the area of electrode/electrolyte junction and electrode/current collector junction. Third, with the help of ALD techniques, it is facile to control the thickness of TiO 2 in samples with and without Ni skeleton. So, both architectures are quite similar through concise control of ALD, resulting in the same mass loading and ion diffusion length, but different electrical conductivity. It is helpful to focus on the relationship between electric conductivity, interface capacitive-like storage, and rate performance. As expected, the Ni-TiO 2 sandwich 3DOM delivers great rate performance during sodium storage of both half-cells and fullcells. A reversible capacity of 233.3 mAh g −1 in half cells and 210.1 mAh g −1 in full cells (coupled with P2-Na 2/3 Ni 1/3 Mn 2/3 O 2 cathode) can be obtained after 100 cycles at 50 mA g −1 . 
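As a point of reference for the capacities quoted here, the theoretical capacity of anatase TiO2 for a one-electron Ti4+/Ti3+ process can be computed directly from the Faraday constant and the molar mass. This back-of-the-envelope check is not part of the original analysis.

```python
# Theoretical gravimetric capacity of TiO2 assuming a one-electron
# Ti4+/Ti3+ reduction per formula unit (Na_xTiO2 with x = 1).
F = 96485.0           # Faraday constant, C/mol
M_TiO2 = 79.87        # molar mass of TiO2, g/mol

q_theory = F / (3.6 * M_TiO2)      # mAh/g (3.6 converts C/g to mAh/g)
print(f"theoretical capacity: {q_theory:.1f} mAh/g")

# Fraction of the theoretical value reached by the sandwich 3DOM electrode
# after 100 cycles at 50 mA/g (233.3 mAh/g reported in half-cells).
q_measured = 233.3
print(f"utilization: {q_measured / q_theory:.2f} (x ~ {q_measured / q_theory:.2f} in Na_xTiO2)")
```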
Moreover, at the high charge/discharge rate of 5000 mA g −1 , TiO 2 can deliver specific capacities of 104.4 mAhg −1 in half cells and 68 mAh g −1 in full cells with satisfying stability. Such enhancement originates primarily from the 3DOM Ni skeleton, which can not only serve as the supporter for anatase TiO 2 , but also provide direct and reduced pathways for electron transport, leading to boosted interface capacitive-like storage. Our results confirm the possibilities of employing electrical conductivity adjustment of electrodes to improve interface capacitive-like storage for effective high-rate energy storage. This observation is expected for a universal design to realize satisfied energy storage using other large transport ions. Fabrication and Characterizations of Ni-TiO 2 Sandwich 3DOMs To realize this tentative idea of the active material-current collector sandwich 3DOM architecture, it is of great significance to find feasible and straightforward procedures. Being a technologically facile approach, the colloidal crystal template (CCT) method assisted by ALD is a promising candidate to fabricate the sandwich 3DOMs. The schematic fabrication process is shown in Scheme 1. Polystyrene (PS) spheres with the same diameter were first assembled on a conductive substrate to construct a large-scaled CCT. Then the interstices in CCT were filled with metal to generate an in situ 3DOM conductive skeleton after removing PSs. Finally, the active materials, like oxides, were uniformly coated on the metal skeleton by ALD to obtain the final sandwich 3DOMs. To confirm the aforementioned steps, we choose Ni as the metal skeleton and anatase TiO 2 as the active material. Figure 1a,b shows the representative scanning electron microscope (SEM) images of 3DOM Ni skeleton and Ni-TiO 2 sandwich 3DOM, which are the replica translated from the periodic CCTs based on PSs with ≈500 nm diameter. The Ni skeleton with the periodic macroporous structure was obtained on Ti foil after the electrochemical deposition and template removal by organic solvents. And then ≈20 nm TiO 2 was deposited on the Ni skeleton to form an oxide-metal sandwich 3DOM structure, followed by annealing to obtain the anatase phase. As shown in Figure 1b, morphologies of both surface layer and inner layers (highlighted in the circle of Figure 1b) confirm the successful realization of Ni-TiO 2 sandwich 3DOM, the phase Scheme 1. Schematic illustration of the fabrication of sandwich 3DOMs by ALD assisted CCT method. www.advancedsciencenews.com and purity of which were further examined by X-ray diffraction (XRD) in Figure 1c and Raman spectroscopy in Figure 1d. Besides the diffraction peaks from Ni (JCPDS No. 65-2865) and Ti (JCPDS No. 44-1294) substrates, all the other peaks can be indexed to the diffractions of tetragonal anatase TiO 2 (JCPDS No. 21-1272). No peaks of other phases can be detected, indicating the high purity of the as-prepared Ni-TiO 2 sandwich 3DOMs. The Raman scattering spectra of as-fabricated samples further evidences the purity of anatase TiO 2 based on vibration modes of A 1g , B 1g , E g . [13] It is worth mentioning that there is no peak in Raman spectra between 1200 and 1800 cm −1 , where the D band and the G band of carbonaceous materials appear. Two symmetrical peaks in X-ray photoelectron spectroscopy (XPS) are attributed to +4 state of Ti 4+ in the lattice as shown in Figure S1, Supporting Information. [1c,8a] Overall, we can infer that no carbon is contained in the Ni-TiO 2 sandwich 3DOMs. 
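The XRD assignment above can be cross-checked against Bragg's law. In the sketch below, the d-spacings are approximate values for the strongest anatase reflections of the reference card (JCPDS No. 21-1272) and a Cu Kα wavelength is assumed; it is a generic consistency check rather than a reproduction of the authors' diffraction analysis.

```python
import math

# Expected 2-theta positions (Cu K-alpha, lambda = 1.5406 A) for the main
# anatase TiO2 reflections, using approximate d-spacings from the standard
# reference card (JCPDS No. 21-1272).
wavelength = 1.5406                                           # Angstrom
d_spacings = {"(101)": 3.52, "(004)": 2.38, "(200)": 1.89}    # Angstrom, approximate

for hkl, d in d_spacings.items():
    two_theta = 2 * math.degrees(math.asin(wavelength / (2 * d)))
    print(f"anatase {hkl}: expected 2-theta ~ {two_theta:.1f} deg")
```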
[14] To clearly visualize the sandwich 3DOM features on a large scale, transmission electron microscopy (TEM) analysis and energy-dispersive X-ray spectroscopy (EDX) scan were used here to verify the distributions of Ni, Ti, and O ( www.afm-journal.de www.advancedsciencenews.com macroporous features of the entire Ni-TiO 2 sample. The elemental distribution of Ni ( Figure 2b) exhibits similar 3DOM morphology compared with that of Ni-TiO 2 sample, confirming the core of the 3DOM Ni skeleton, while Ti and O are distributed around the Ni skeleton uniformly like a sandwich construction. With the help of color code, the clean interface between Ni (orange) and TiO 2 (blue) can be clearly found in high-resolution TEM (HRTEM) image (Figure 2e), verifying the direct contact between the active materials and current collectors. No other impurities like NiO, carbon exist in the sandwich interface. Summarizing 3DOM, where a 3D macro porous conductive network is formed by the metal Ni with a uniform coating of anatase TiO 2 acted as active materials on both sides. To better illuminate the structure information, enlarged HRTEM images of Ni and TiO 2 are shown in Figure S2, Supporting Information. Electrochemical Characteristics of Ni-TiO 2 Sandwich 3DOMs To quantify the sodium storage of Ni-TiO 2 sandwich 3DOM, we first employ a half-cell configuration with a TiO 2 -based working electrode and Na disk counter electrode to assemble two-electrode coin cells. In order to demonstrate our hypothesis of the influences of electrical conductivity of electrodes, anatase TiO 2 3DOM without Ni skeleton was fabricated as reference samples with the same thickness of TiO 2 (2 × 20 nm) and mass loading. Corresponding characterizations can be found in Figure S3, Supporting Information. Since both kinds of 3DOMs were fabricated directly on the conductive Ti substrates, we didn't use any conductive additive and polymeric binder to ensure sufficient conductivity of entire electrodes. Hence, all the exhibited sodium storage abilities only originate from electrochemical behaves of anatase TiO 2 with a comparatively clean surface. Cyclic voltammetry (CV) measurements were first carried out to elucidate the redox processes in the host matrix with different constructions. To avoid the disturbance from the solid electrolyte interphase (SEI) layer and the other irreversible reactions, the second cycle of CV curves is chosen as diagrammed in Figure 3a. Looking carefully into the appearances of both CV curves, the pair of redox peaks located at approximately 0.67 V (cathodic) and 0.85 V (anodic) are observed in both Ni-TiO 2 sandwich 3DOM and TiO 2 3DOM. This redox pair well matches the potential of reversible Ti 4+ /Ti 3+ redox coupled with Na + insertion and extraction into anatase TiO 2 . [15] Considering similar mass loading of Ni-TiO 2 sandwich 3DOM and TiO 2 3DOM without Ni skeleton, it is reasonable to exclude the influence of the mass of the active material without normalization of Y-axis. We can further assert that the electrochemical behaves during Faradaic bulk storage of sodium is quite similar during low charge/discharge by similar location of redox pairs. Gradually broadening curves of Ni-TiO 2 sandwich 3DOM may indicate larger contributions from interface capacitive-like storage. 
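The capacity associated with a single CV sweep can be estimated by integrating the current over the potential window and dividing by the scan rate and the active mass. The snippet below only illustrates that bookkeeping: the Gaussian-shaped cathodic trace centred near the 0.67 V reduction peak, the scan rate, and the mass loading are placeholders, not measured values from this study.

```python
import numpy as np

# Synthetic cathodic current trace (placeholder): a broad peak near 0.67 V,
# roughly where the Ti4+/Ti3+ reduction is observed for anatase TiO2.
V = np.linspace(0.1, 2.5, 500)                    # potential window, V
i = 0.05e-3 * np.exp(-((V - 0.67) / 0.15) ** 2)   # current, A (assumed shape)

scan_rate = 0.1e-3      # 0.1 mV/s expressed in V/s (assumed)
mass = 0.2e-3           # active mass in g (assumed loading)

# Q = (1 / scan_rate) * integral(|i| dV)  ->  coulombs; /3.6 -> mAh; /mass -> mAh/g
charge_C = np.sum(0.5 * (np.abs(i[1:]) + np.abs(i[:-1])) * np.diff(V)) / scan_rate
capacity_mAh_g = charge_C / 3.6 / mass
print(f"capacity from this sweep: {capacity_mAh_g:.0f} mAh/g")
```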
[9d,16] Hence, Ni-TiO 2 sandwich 3DOM exhibits larger capacities during cycles, the value of which keeps 233 mAh g −1 even after 100 cycles at the low charge/discharge rate of 50 mA g −1 , while TiO 2 3DOM only contributes a capacity of 175 mAh g −1 (Figure 3b). The results can be further verified by charge/ discharge profiles of Ni-TiO 2 sandwich 3DOM and TiO 2 www.afm-journal.de www.advancedsciencenews.com 3DOM at the 1st, 2nd, 20 th , and 50th cycle at the current density of 50 mA g −1 in Figure 3c,d. In accordance with CV data, the appearances of discharge curves also infer that the electrochemical contributions related to the intercalation reactions in bulk together with interface capacitive-like storage. The former storage mainly contributes to the capacities within the discharge plateau, while the other capacities are mainly attributed to capacitive-like storage. As mentioned before, one of the key challenges within SIBs is to enable high capacity and high power at the same time. To examine this potential, the rate capabilities were recorded in Figure 4a together with corresponding charge/discharge profiles in Figure S4, Supporting Information, to investigate the feasibility of fast charge/discharge. Ni-TiO 2 sandwich 3DOM shows discharge capacities of 280, 193, 135, 120, and 105 mAh g −1 at the current densities of 50, 200, 1000, 2000, and 5000 mA g −1 , while the values of TiO 2 3DOM are 216, 139, 86, 63, and 47 mAh g −1 , respectively. Besides the high capacities at high rates, the retention reaches 91% from the 2nd cycles to 1000th cycles at the current density as high as 2000 mA g −1 (Figure 4b). The specific capacity after 1000 cycles can still reach 109 mAh g −1 , near onefold larger than that of TiO 2 3DOM. It implies that high-rate sodium storage of Ni-TiO 2 sandwich 3DOM is feasible, reversible, and stable. Comparison with other current collectors with different geometry, like highly ordered Ni core-shell nanowire array, [17c] randomly oriented Ni nanowire array, [17c] planar morphology as shown in Figures S5 and S6, Supporting Information, Ni 3DOM skeleton serves as an excellent candidate for 3D current collector. Ascended a survey of literatures, our results of SIBs stand out among various pure TiO 2 -based anodes. Related comparison with previous TiO 2 -based anodes can be seen in Figure S7, Supporting Information, for rate capacities and Table S1, Supporting Information, for retention in details. In-Depth Discussion of Electrochemical Behaves To deeply understand the origins of the rate performance gap between Ni-TiO 2 sandwich 3DOM and TiO 2 3DOM, here, four important points should be first predeclared considering the electrochemical behaves. i) According to Raman spectra in Figure 1d, there is no peak found in the range from 1200 to 1800 cm −1 , which might be assigned to the signals of carbonaceous materials. We also cannot find any peak belonged to various oxidation products of Ni, which is further convinced by XRD and HRTEM results in Figures 1c and 2e. Meanwhile, we haven't added any conductive additive and polymeric binder within all the electrodes. Hence, all the electrodes don't contain any carbonaceous material with the potential to change the long-term stability and rate performance. [18] The entire gap of sodium storage ability at different rates should be only attributed to the electrodes themselves. 
ii) The thickness of TiO 2 in both architectures are quite similar through concise control of ALD, resulting in the same mass loading and ion diffusion length as shown in Scheme 2. Related impacts from ion diffusion can be excluded. iii) Both kinds Scheme 2. Schematic illustration of the ion diffusion length and electron transport pathways in a) Ni-TiO 2 sandwich 3DOM and b) TiO 2 3DOM. Blue stands for TiO 2 , while orange represents Ni. The yellow arrays stand for ion diffusion. The white and red arrays are four major electron movement events occurring during ion-intercalation and deintercalation. 1) accumulation and/or reaction at the electrode/electrolyte interface; 2) electron transport throughout the active materials; 3) electron injection and extraction at the electrode-current collector interface; 4) electron transport in current collector. www.afm-journal.de www.advancedsciencenews.com of electrodes are fabricated by the topologic transformation from CCTs, so the morphology features are similar. We can fully exclude related influences from electrode architectures like facet, orientation, etc. [19] It is worth mentioning that TiO 2 sandwich 3DOM even possesses less contact area with electrolyte than TiO 2 3DOM because of the existence of Ni skeleton and the requirements of the same ion diffusion length, which would slightly decrease the ability of interface storage in a conventional sense. iv) Both kinds of electrodes are stored in N 2 before measurement. It is not necessary to take the detrimental influences from the surface state by absorption and/or adhesion into account. After such careful analysis of the two architectures, the major difference between the two kinds of electrodes is the existence of Ni 3DOM current collector instead of a planar current collector. Hence, we turn our attention to electron transport that might result in a huge change of rate performance of SIBs. Shown in Figure 5a are the current-voltage (I-V) characteristics of Ni-TiO 2 sandwich 3DOM and TiO 2 3DOM. The symmetrical and linear appearance of I-V curves indicates the Ohmic contacts between the electrode and conductive substrate. [17] Obviously, the existence of Ni 3DOM skeleton indeed largely facilitates electron transport and reduces the resistance throughout the whole matrix by 20-folds. Facile electron transport leads to decreased charge-transfer resistance (R ct ) at the surface. According to a modified Randles equivalent circuit in Figure S8, Supporting Information, analysis of Nyquist plots of electrochemical impedance spectroscopy (EIS) ranging from 100k to 0.1 Hz (Figure 5b,c) confirms decreased R ct after introducing the Ni 3DOM current collector (Figure 5d). And the gap of R ct becomes larger with increased cycles. Since R ct mainly depends on the charge transfer resistance related to the redox reactions across the electrode/electrolyte interface, [20] smaller R ct suggests much facile reactions and associated sodium storage near or at the interface. This understanding inspires us to think over a question: how can we associate the electrical conductivity of electrodes with SIB performance at high rates? Due to the same active materials, the accommodation ability of Na ions is the same in theory. So, we try to hunt for the origins based on the possible kinetic factors. CV is a powerful tool to illuminate the electrochemical kinetics of the electrodes towards Na + . Current response to an applied sweep rate will vary depending on whether the redox reaction is diffusion-controlled or not. 
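As an aside on the impedance analysis above: charge-transfer resistances such as those in Figure 5d are usually extracted by fitting Nyquist data to an equivalent circuit. The sketch below generates the impedance of a plain Randles-type circuit (series resistance plus R_ct in parallel with a double-layer capacitance) over the 100 kHz–0.1 Hz range quoted in the text; the parameter values are assumptions, and the authors' actual fit uses a modified Randles circuit (Figure S8).

```python
import numpy as np

# Simple Randles-type circuit: Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl)
# Parameter values are illustrative only.
Rs, Rct, Cdl = 5.0, 80.0, 2e-5           # ohm, ohm, farad
freqs = np.logspace(5, -1, 200)          # 100 kHz down to 0.1 Hz
omega = 2 * np.pi * freqs

Z = Rs + Rct / (1 + 1j * omega * Rct * Cdl)

# In a Nyquist plot (-Im(Z) vs Re(Z)) this traces a semicircle whose diameter
# equals Rct; a smaller semicircle therefore indicates faster charge transfer.
print(f"high-frequency intercept ~ {Z.real.min():.1f} ohm (Rs)")
print(f"low-frequency limit      ~ {Z.real.max():.1f} ohm (Rs + Rct)")
print(f"semicircle diameter      ~ {Z.real.max() - Z.real.min():.1f} ohm (Rct)")
```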
[10b,11a,21] Hence, we recorded the CV curves of electrodes before and after the introduction of Ni 3DOM current collector at various scan rates from 0.1 to 5 mV s −1 (Figure 6a,b). Similar redox pairs also match the potentials of reversible Ti 4+ /Ti 3+ redox, consistent with the conclusions from Figure 3a. In general, ion intercalation contains three steps: solvated Na + diffusion within the electrolyte; charge-transfer reactions at or near the interface between active materials and electrolyte; and Na-ion diffusion in the bulk materials. According to the features of these electrochemical behaves, there are two types of contributions towards the whole sodium storage: Na-ion intercalation in bulk related to diffusion-controlled mechanism and capacitance at or near the interface free of diffusion-controlled mechanism. To visualize the storage mechanism, we seek help www.afm-journal.de www.advancedsciencenews.com from the relationship of peak current (i) and scan rate (υ) by the following Equations (1,2) Here, a and b are adjustable constants. b-value can be obtained from the slopes by plotting log(i) against log(υ). b-value should be 0.5 provided the sodium storage is an ideal faradaic intercalation process controlled by semi-infinite linear diffusion, whereas the b-value of 1.0 stands for a pure capacitive contribution without diffusion control. [10b,21b] As depicted in Figure 6c,d, both kinds of electrodes display a good linear relationship. The b-values of Ni-TiO 2 sandwich 3DOM are 0.92 and 0.91 for anodic and cathodic processes, respectively, which are closer to 1 than those of TiO 2 3DOM (0.83 and 0.80 for anodic and cathodic processes). This value is closed to amorphous TiO 2 3DOM with mainly surface-controlled capacitive-like contributions during charge/discharge. [8a,15a] Such comparison implies that the sodium storage is less diffusion-controlled after the introduction of Ni 3DOM current collector. It is a strong evidence that facile electron transport resulted from Ni 3DOM current collector can change the electrochemical kinetics and promote the fast interface capacitance-like sodium storage free of diffusion control. Hence, the interface capacitance-like contribution at a certain scan rate (υ) can be further quantitatively differentiated by separating current response (i) at a fixed potential into bulk Na-ion intercalation contribution and interface capacitance-like contribution according to Equation (3): Here, k 1 and k 2 are adjustable constants. Solving for their values at each potential is helpful for the separation of the diffusion-controlled currents and capacitive-like currents, because the former one is proportional to ν, while the latter one is proportional to ν 1/2 . [10b,11b,21c] Figure 7a-f exhibit the typical CV curves for the capacities from interface capacitance-like contributions(green region) in comparison with the total capacities at the scan rates of 0.2, 1, and 4 mV s −1 , respectively. It is obvious to find the quantified results that the interface capacitance-like contribution gradually improves with the accelerated scan rates. At the scan rate of 0.2 mV s −1 , ≈63.7% of the total capacity is capacitive in nature. The ratio increases to 90.4% with the improved scan rate to 4 mV s −1 . More importantly, the Ni-TiO 2 sandwich 3DOM possesses much larger interface capacitance-like contributions than TiO 2 3DOM. The interface capacitance-like contribution is only 79.2% of the total sodium ability at 4 mV s −1 . 
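The kinetic analysis sketched here relies on the standard power-law relation between peak current and scan rate, i = a·ν^b (so that log i = log a + b·log ν), and on the decomposition i(V) = k1·ν + k2·ν^(1/2), with the capacitive-like part conventionally taken as the term linear in ν and the diffusion-controlled part as the ν^(1/2) term. The snippet below shows how both fits are typically carried out; the peak currents are synthetic placeholders, not values digitized from Figure 6.

```python
import numpy as np

# Scan rates (mV/s) and synthetic peak currents (mA); placeholders only.
nu = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
i_peak = np.array([0.021, 0.039, 0.091, 0.17, 0.32, 0.73])

# (1) b-value from log(i) = log(a) + b*log(nu); b -> 1 means capacitive-like,
#     b -> 0.5 means diffusion-controlled intercalation.
b, log_a = np.polyfit(np.log10(nu), np.log10(i_peak), 1)
print(f"b-value: {b:.2f}")

# (2) Dunn-type separation at a fixed potential: i = k1*nu + k2*sqrt(nu).
#     Dividing by sqrt(nu) gives a straight line: i/sqrt(nu) = k1*sqrt(nu) + k2.
k1, k2 = np.polyfit(np.sqrt(nu), i_peak / np.sqrt(nu), 1)
for v in (0.2, 1.0, 4.0):
    frac_cap = k1 * v / (k1 * v + k2 * np.sqrt(v))
    print(f"nu = {v:>3} mV/s: capacitive-like fraction ~ {frac_cap * 100:.0f}%")
```

With these two fits in hand, the capacitive fraction at each scan rate follows directly, which is how contribution plots such as Figure 7 are usually constructed.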
Considering the four prerequisites, all the results indicate that facile electron transport by introducing Ni 3DOM current collector can successfully promote the interface capacitance-like sodium storage. In general, there are four major electron movement events occurring during ion-intercalation and deintercalation as www.afm-journal.de www.advancedsciencenews.com depicted by the white arrays in Scheme 2: 1) accumulation and/ or reaction at the electrode/electrolyte interface; 2) electron transport throughout the electrode; 3) electron injection and extraction at the electrode-current collector interface; 4) electron transport in the current collector. According to the previous analysis, a comparatively small surface area may slightly weaken the electron utilization at the surface because of a little less accessibility of the active material for Na + . But the existence of Ni 3DOM current collector can obviously shorten electron transport pathways in step (2) (within active materials) and amplified the contact area of electrode-current collector interface as shown in Scheme 2. Since electron transport in the current collector is much faster than that in materials and across the interface, this amelioration in the latter two points can largely improve the electron transport and reduce the resistance throughout the entire electrodes, associated with facile interface charge transfer evidenced in Figure 5. As well known, electron movement is completed by a redox reaction in the materials, and thus is much fast than ion diffusion. Hence, optimization of ion diffusion dominates the previous studies. However, accelerated electron transport would also play a vital role in ion transfer at the interface between the material and electrolyte, which has been regarded as a slow process during electrochemical energy storage. [11b,15a] 3DOM metal skeleton can not only serve as a supporter for active materials, but also provides a direct and reduced way for electron transport. First, fast electron transport can avoid the charge accumulation at and/or near the interface, leading to accelerated interface kinetics and decreased potential of meaningless side reactions. Second, fast rearrangement of lattice at the interface can be realized by facile electron supplement or removal. Third, high electronic conductivity can reduce the electrostatic repulsive force, which would fasten the electron transfer and transport in particular on the interface. Overall, the processes of interface capacitive-like storage, like pseudo-capacitance or doublelayer capacitance, can both be optimized together with the ion intercalation/deintercalation of Na + . Interface capacitive-like storage always occurs on the order of seconds and minutes, and thus promoted interface capacitive-like storage is particularly expected to improve high-rate capacities in short time. [11a] Moreover, interface capacitive-like storage will not result in a huge change of crystalline structures that can be recovered fast. This would benefit for a well balance of high capacity and highrate performance at the same time. Hence, it is efficient and significant to employ the sandwich design to largely boost rate performances of SIBs by tuning the electrical conductivity of electrodes. Full-Cell Electrochemical Characteristics of Ni-TiO 2 Sandwich 3DOMs Such superiorities can also be used to improve the rate performance in full cells. 
To check this potential, layered P2-Na 2/3 Ni 1/3 Mn 2/3 O 2 is chosen as the cathode to assemble two-electrode full cells together with Ni-TiO 2 sandwich 3DOM and TiO 2 3DOM anodes like the scheme in Figure 8a. Corresponding characterizations and electrochemical performance of P2-Na 2/3 Ni 1/3 Mn 2/3 O 2 are listed in Figure S9, Supporting Information. Similar to the superior performance in half cells, Ni-TiO 2 sandwich 3DOM shows long-term stability and large sodium storage as shown in Figure 8b,c. With respect to the mass of anodes, a capacity of ≈210.1 mAh g −1 can be obtained even after 100 cycles at 50 mA g −1 , the value of which is much www.afm-journal.de www.advancedsciencenews.com larger than TiO 2 3DOM anode and amorphous TiO 2 3DOM anode [15a] in our previous reports. The stable capacity retentions and high-rate capabilities (Figure 8c) are also attractive in practical applications. The full-cell performance shows the good ability of sodium storage even at a current density as high as 5000 mA g −1 , and these capacities can be fully recovered when the current density is switched back to 50 mA g −1 . These results of full cells fully confirm that introducing a metal 3DOM current collector holds the potential to build high-rate energy storage devices. Given the analysis, our observations confirm the possibilities of improving SIB rate performance by tuning the electrical conductivity of electrodes. Although electron transport is magnitudes faster than that of Na ions, [7b,22] strategies to improve electron transport open a novel and efficient avenue to promote rate performance by optimizing fast interface capacitive-like storage rather than comparatively slow bulk storage. The present understandings are expected to not only extend to other sodium-based energy storage device, but also guide the development of functional materials for efficient energy storage at high rates. We are looking forward to more efforts in these open questions in the near future. Conclusions In summary, we have successfully developed an oxide-metal sandwich 3DOM as a superior anode candidate for high-rate SIB. Taking Ni-TiO 2 sandwich 3DOM as a proof-of-concept, anatase TiO 2 delivers a reversible capacity of 233.3 mAhg −1 in half cells and 210.1 mAh g −1 in full cells (coupled with P2-Na 2/3 Ni 1/3 Mn 2/3 O 2 cathode) after 100 cycles at 50 mA g −1 . At the high charge/discharge rate of 5000 mA g −1 , 104.4 mAh g −1 in half cells and 68 mAh g −1 in full cells can also be obtained with satisfying stability, indicating great rate performance of SIBs among various TiO 2 -based anodes. Indepth analysis of electrochemical kinetics fully evidences that the presented remarkable sodium storage ability, in particular at high charge/discharge rates, origins from dominated interface capacitive-like storage because of increased electrical conductivity by introducing of metal 3DOM current collector. The optimized electron transport can efficiently accelerate the interface kinetics of sodium storage, and thus improve the interface capacitive-like contribution ratio to the total sodium storage. Fast and stable interface capacitive-like storage efficiently facilitates sodium storage at high rates without huge fading off. Different from the optimized strategies focused on ion diffusion, our observations about employing a high electrical conductivity paradigm into electrode design enable a novel and efficient foundation for realizing high-rate SIBs. 
It is expected to realize a universal design for satisfying energy storage by extending this rule to the secondary ion batteries with other large transport ions. Further combination of ion diffusion optimization holds great promise to attract more research interests in the future development in electrochemical energy storage. Experimental Section Fabrication of Polystyrene Sphere Colloidal Crystal Templates: The wellordered CCTs comprised of polystyrene spheres (PSs) were fabricated to form a face-centered cubic packing arrangement using a vertical deposition approach according to the previous works. [23] Ti foils as the substrates have been cleaned under sonication by sequentially immersing in acetone (15 min), ethanol (15 min), and distilled water (15 min). The substrates were then immersed vertically in the 0.5 wt% PS latex at 60 °C. Drying of PS latex appeared near the Ti foil at very low speed in high moisture. Then, CCTs on Ti foil were heated at 90 °C for 5 min and 110 °C for 2 min to enhance the adhesion and avoid a crash during the subsequent steps. Here, the diameter of PSs was ≈500 nm after optimization of sodium storage ability. Fabrication of Ni 3DOM Skeletons: In a typical procedure, 3DOM Ni skeletons were fabricated by electrochemical deposition using an aqueous electrolyte of 1 m NiSO 4 ·6H 2 O, 0.1 m NiCl 2 ·6H 2 O, and 0.5 m H 3 BO 3 . The electrochemical deposition was realized by means of a standard three-electrode system consisting of CCTs as the working electrode, Pt plate as the counter electrode, and HgSO 4 /Hg as the reference electrode. Constant current mode with the current of −2.45 mA was used here for 30 min by a VSP electrochemical workstation (Bio-Logic, France), where Ni was solidified in situ at the interstice of CCTs to form a 3DOM architecture. All the depositions were carried out at room temperature. Then, the CCTs were removed by immersing into 10 mL tetrahydrofuran (THF) for 24 h. Ni 3DOM skeletons could be obtained after dying in N 2 flow without any oxidation and kept in N 2 . Fabrication of Ni-TiO 2 Sandwich 3DOM: In a typical procedure, TiO 2 was deposited on both sides of Ni 3DOM skeleton by ALD (PicoSun SUNALE R150 ALD system, PicoSun, Finland) to achieve the sandwich construction. Here, TiCl 4 and H 2 O were chosen as the precursors of Ti and O, respectively. TiCl 4 and H 2 O were alternatingly pulsed for 0.1 s with separation by 10 s purge of N 2 . The cycle numbers were 400 in order to reach a uniform thickness of 20 nm. The reaction chamber was 70 °C. Then the as-prepared sandwich architectures were annealed in air at 350 °C for 2 h with a ramp rate of 2 °C min −1 to obtain the anatase phase of TiO 2 . Figure 8. a) Scheme of a two-electrode full cell with TiO 2 sandwich 3DOM and TiO 2 3DOM anodes and P2-Na 2/3 Ni 1/3 Mn 2/3 O 2 cathode; b) Cycling performance at 50 mA g −1 ; c) Rate performance with current densities varying from 50 to 5000 mA g −1 . www.advancedsciencenews.com
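Two quick consistency checks can be made on the deposition steps just described: the mass of Ni implied by the galvanostatic step via Faraday's law (assuming 100% current efficiency, which is an idealization) and the ALD growth per cycle implied by 400 TiCl4/H2O cycles yielding roughly 20 nm. Both are rough estimates, not figures reported by the authors.

```python
# (1) Ni deposited during electrodeposition, assuming 100% current efficiency.
F = 96485.0            # Faraday constant, C/mol
I = 2.45e-3            # A (magnitude of the constant current)
t = 30 * 60            # s
M_Ni, n = 58.69, 2     # g/mol, electrons per Ni2+

mass_Ni = I * t * M_Ni / (n * F)        # grams
print(f"deposited Ni (upper bound): {mass_Ni * 1000:.2f} mg")

# (2) ALD growth per cycle implied by 400 TiCl4/H2O cycles -> ~20 nm TiO2.
thickness_nm, cycles = 20.0, 400
gpc = thickness_nm * 10 / cycles        # Angstrom per cycle
print(f"growth per cycle: ~{gpc:.2f} A/cycle")
```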
2021-05-04T22:06:05.177Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "8ef6c8295c0dc38bacca2d17504b887f83ac329b", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adfm.202101081", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "f9ed5a17a34cd359cabcfdbc201e31397741ffca", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
255982919
pes2o/s2orc
v3-fos-license
Patterns of genetic variation and life history traits of Zeuxapta seriolae infesting Seriola lalandi across the coastal and oceanic areas in the southeastern Pacific Ocean: potential implications for aquaculture The monogenean, Zeuxapta seriolae, is a host-specific parasite that has an extensive geographical distribution on its host, Seriola lalandi, and is considered highly pathogenic in farmed fish. In recent years, developing cultures of S. lalandi in different coastal localities in Southeastern Pacific Ocean (SEP) have been affected by moderate and heavy infections of this parasite, attributed to contagion from wild to farmed fish. Here, we evaluated the pattern of genetic variations and biological traits of Z. seriolae in a spatial and temporal scale across its geographical distribution in SEP to determine its genetic status and biological traits, which could affect its transmission dynamics from wild to farmed fish. Wild fish and their parasites were sampled from fisheries in the northern Chilean coast (NCC: 24°S-30°S) and Eastern islands (JFA: ca 33°S; 80°W) between 2012 and 2014. Fragments of 816 bp of the cytochrome c oxidase subunit I (COI) gene was sequenced for 112 individuals from NCC and 63 from JFA and compared using AMOVA. Prevalence and intensity of Z. seriolae were calculated for each area. The parasite body size, fecundity and size at sexual maturity were estimated for 177 parasites from NCC and 128 from JFA, and significant differences were evaluated using GLM. Geographical genetic structuring was detected for Z. seriolae across SEP, with a population in NCC and the other in JFA, both with the same high haplotype diversity. Neutrality tests and mismatch analyses indicated that both Z. seriolae populations are stable. Parasite biological traits such as fecundity, body size, and size at sexual maturity, and population parameters varied significantly between geographical areas. Two genetic groups of Z. seriolae were detected in wild fish across SEP. Because of the seasonal migration of wild host and temporal contact with farming, quantifying the genetic diversity and level of gene flow or isolation between parasite populations is useful for fish health management in farming. The smallest size of sexual maturity in parasites from NCC is predictive of shorter life cycles, and their high genetic diversity suggests high evolutionary potential and high transmission of this parasite to farmed hosts. Background Patterns of genetic diversity or variations among parasite populations can provide clues to the population life histories and degree of evolutionary isolation, which may have important applications in aquaculture and epidemiology [1,2]. Therefore, the population genetic structure of a parasite, which is associated with the amount of genetic exchange between subpopulations, allows for understanding the parasite's dispersal capabilities and identifying a source of infection on a spatial scale, which contributes to the control of diseases [2]. The extent of geographic variations in gene frequency in most species is the result of the geographic distance between subpopulations and is associated with variations in the oceanographical conditions [3][4][5]. Further, balances among mutation, genetic drift, selection and genetic flow can produce either local genetic differentiation or genetic homogeneity [6]. 
Of these aspects, gene flow, which includes all mechanisms that result in the movement of genes from one population to another, determines the extent to which each local population of a species is an independent evolutionary unit [7,8]. For parasites, genetic flow among populations also depends on the intrinsic host-parasite relationship [9,10]. Therefore, high or low genetic flow in parasite populations has important implications for evolutionary processes, such as host-race formation, adaptation to host defenses, and the evolution of drug resistance [11]. Moreover, it has been hypothesized that a pathogen population with high evolutionary potential, which is the potential to generate new variations in traits that determine host-parasite interactions and the evolution of disease dynamics [12], is associated with high mutation rates, high potential for gene flow and a large population effective size (N e ), each of which allows for the increase of genetic diversity and presents a higher risk of breaking down resistance genes [13]. The genetic structure of parasite populations is associated with host specificity, host mobility and environmental conditions [9]. The dispersal ability of parasites with low host specificity and in the absence of physical barriers greatly facilitates extensive gene exchange among different subpopulations [12,14,15]. Alternatively, host specific parasites are more likely to experience frequent local extinction and re-colonization events, particularly in small and fragmented wild host populations. These population processes may promote the loss of genetic diversity within a parasite population and generate genetic differences among populations through genetic drift [12]. Additionally, the potential spatial distribution of a parasite not only depends on the dispersal stages of parasites (the free living stages, e.g., eggs, oncomiracidium larvae, and nauplius larvae) but is also closely coupled with host mobility (sedentary or highly mobile), thereby facilitating the homogenization of a parasite population and allowing for the potential evolution and spread of drug resistance in parasites [16], as demonstrated for terrestrial nematodes [11] and marine hosts [17], freshwater trematodes [16] and marine monogeneans [15]. In marine systems, oceanographic conditions and the distance between populations can determine the geographic genetic patterns in parasites [18,19] and, similarly, differences in parasite biological traits. Environmental parameters, such as temperature and salinity in different latitudes, may affect the egg production rate, egg-hatching time and development to sexual maturity in marine ectoparasites because these occur more quickly at warmer temperatures and can trigger different infection dynamics [20][21][22]. Population genetic studies provide insight into parasite evolutionary histories and aid in the identification of the causal factors contributing to disease dynamics and distribution [12]. The monogenean, Zeuxapta seriolae, is reported in Seriola lalandi from Australia, New Zealand, Japan and California and in S. dumerili from the Mediterranean Sea [23][24][25][26]. The genus Zeuxapta is specific to the genus Seriola, and Z. seriolae, and shows a particularly wide geographic distribution in both farmed and wild fish. Additionally, Z. seriolae is considered highly pathogenic in farmed fish because at high intensities, it can kill its host by causing anaemia [25,[27][28][29][30]. The yellowtail kingfish S. 
lalandi is a pelagic fish and is highly migratory and widely distributed in temperate and subtropical waters of the world. In the SEP, S. lalandi present a permanent population in an archipelago approximately 700 km from continental Chile (80°W), where it is captured year-round. On the northern continental Chilean coast (NCC: 20°S to 30°S), this species arrives annually in the summer, most likely to feed, as occurs in the southwest Atlantic Ocean [31]. No spawning and nursery areas are known in SEP. In recent years, cultures of S. lalandi have been initiated at different localities in the NCC, where farmed fish are affected by moderate and heavy infection levels of this parasite, which has been attributed to contagion from wild to farmed fish. Because Z. seriolae is a host specific parasite that has an extensive geographic distribution (cosmopolitan) and affects the aquaculture of S. lalandi worldwide it is important to know its evolutionary potential, which can be useful for understanding the dynamic of the disease. Here, we evaluated the patterns of genetic variation of this monogenean on spatial and temporal scales using mitochondrial DNA markers. Additionally, we estimated several life history traits, such as fecundity and size at sexual maturity, beside population parameters of infection across the geographic distribution in the SEP. Taking into account that the host population migrates in the summer to the coast in the SEP (extending the fishery from 20°S to 30°S) but the host migratory route, fish origin and the effect of physical barriers on the dispersal of parasite population are not known, we can expect that parasite populations correspond to a single panmictic population. Alternatively, if physical barriers or host segregation affect the genetic flow of parasites, we can expect genetic structure of Z. seriolae across the SEP. For molecular analyses, only one parasite per fish was used in 123 fish to prevent the sampling of inbred offspring [14]. Two parasites per fish were collected from 23 fish to increase the number of parasites by locality and year. In only two cases, three parasites from one fish were used to prior check that they were not clones. Specimens of monogenean Z. seriolae were removed from the gills of fish, counted and identified using a stereomicroscope, and each parasite was placed individually into a 1.5 ml Eppendorf tube with absolute ethanol for DNA extraction. Genetic diversity and population structure Molecular diversity was estimated through the following indices: number of haplotypes (H), number of polymorphic sites (S), haplotype diversity (Hd: a measure of the frequencies and numbers of haplotypes among individuals [35]), nucleotide diversity (π: average weighted sequence divergence between haplotypes [35]) and mean number of pairwise differences (k) that were computed using Dnasp 5.0 [36] and Arlequin v3.1 [37]. Genetic distance within locations and between localities was estimated using Mega 6.0 [38]. Mann Whitney U test were used to evaluate differences in genetic diversity of Z. seriolae between geographical areas. Genetic population structures were examined through AMOVA to determine the amount of genetic variability using F-statistics in three temporally subdivided hierarchical levels: the proportion of variations among years (F ct ), among populations within years (F sc ) and within populations (F st ). The significance of the covariance component associated with different possible levels of genetic structure was permuted 10,000 times. 
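The diversity indices defined above follow standard formulas: haplotype diversity Hd = n/(n−1)·(1 − Σ p_i²) over haplotype frequencies, and nucleotide diversity π as the mean number of pairwise differences per site. The snippet below computes them for a toy alignment; the sequences are placeholders, not COI haplotypes from this study, and real analyses would use DnaSP or Arlequin as described.

```python
from itertools import combinations
from collections import Counter

# Toy aligned sequences (placeholders, not real COI haplotypes).
seqs = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGAAC", "ACTTACGTAC", "ACGTACGAAC"]
n, L = len(seqs), len(seqs[0])

# Haplotype diversity: Hd = n/(n-1) * (1 - sum(p_i^2)) over haplotype frequencies.
freqs = [c / n for c in Counter(seqs).values()]
Hd = n / (n - 1) * (1 - sum(p * p for p in freqs))

# Nucleotide diversity: mean pairwise differences per site over all sequence pairs.
def diffs(a, b):
    return sum(x != y for x, y in zip(a, b))

k = sum(diffs(a, b) for a, b in combinations(seqs, 2)) / (n * (n - 1) / 2)
pi = k / L

print(f"haplotypes: {len(set(seqs))}, Hd = {Hd:.3f}, k = {k:.2f}, pi = {pi:.4f}")
```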
Pairwise genetic differentiation between populations was estimated using the fixation index F st and statistical significance was Demographic analysis Episodes of population growth or decline show characteristic signatures in the distribution of nucleotides between pairs of individuals that in Z. seriolae were evaluated with mismatch distribution analysis in each locality and year and for total samples, with analysis performed using Dnasp 5.0. Unimodal distribution and multimodal distributions were thereby distinguished for historic expansion and population equilibrium, respectively [39]. In addition, two neutrality tests were performed in Arlequin v3.1 software: Tajima's D [40] and Fu F statistics [41]. Both neutrality tests provide predictions about evolution under mutation-drift equilibrium in the absence of systematic effects such as selection or demographic effects. Thus, significant deviations from neutrality can be a consequence of selection, population expansions, bottlenecks or demographic fluctuations [42]. Finally, a haplotype network was constructed using HaploViewer (http://www.cibiv.at/~greg/haploviewer, Center for Integrative Bioinformatics Vienna) previous construction of a neighbour-joining tree (TN93 + G model) in Mega v6. All sequences were deposited in GenBank under accession number: KP119183-KP119357. Population parameters, fecundity and body size Prevalence and intensity of Z. seriolae were calculated according to Bush et al. [43] for years, sites and geographical areas. The median intensity was used as descriptor of central tendency because data do not show normal distribution [44]. Three hundred and five Z. seriolae specimens were randomly selected to estimate parasite body length and fecundity by year, site and geographical area. Each parasite was individually examined in a slide with a drop of water and cover slip. Measurements were carried out using Micrometrics 5.0 software (New York Microscope Company, Inc.), which was connected to an Olympus camera. The total length (in millimeters) included the opisthaptor length. Parasite fecundity was measured as number of eggs per parasite. For this, the parasite uterus was dissected and an entire chain of eggs was removed and counted using a manual counter. To evaluate differences in fecundity, only those parasites whose uteri contained over 50 eggs were considered to exclude those parasites just starting their egg production. The size at first sexual maturity (= first size egg production) was measured as the parasite length when 50 % of the population of Z. seriolae contained at least one egg within the uterus [45] for each fishing area. For this estimation, all examined parasites (with and without eggs) were used (n NCC = 590; n JFA = 158). The parasite length was categorized in mm (1 mm to 25 mm). 2x2 contingency tables with Yates correction were used to evaluate prevalence between geographical areas and temporal variations for sites AF (coastal) and JF (oceanic). Mann Whitney U tests were used to evaluate differences in intensity of Z. seriolae between geographical areas and to compare the parasite total length between the NCC and JFA samples. The Spearman correlation was used to evaluate the association between parasite body length and number of eggs [44]. Generalized linear models were used to evaluate differences in intensity of infection and parasite fecundity between geographical areas. Fish size was used as co-variable for intensity, and total parasite length was used as co-variable for fecundity. 
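The intensity and fecundity models described here can be reproduced with any GLM implementation. Below is a minimal sketch of the intensity model in Python/statsmodels, with area as a factor, fish size as the covariate, and a Poisson family with log link as specified for both models; the data frame is a synthetic placeholder, and the original analyses were run in Statistica.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data (placeholder values, not the field data): parasite counts per fish,
# sampling area (NCC vs JFA) and fish length as a covariate.
df = pd.DataFrame({
    "intensity":   [12, 30, 7, 55, 3, 18, 41, 9, 22, 61],
    "area":        ["NCC", "NCC", "NCC", "NCC", "NCC", "JFA", "JFA", "JFA", "JFA", "JFA"],
    "fish_length": [62, 78, 55, 91, 48, 70, 88, 60, 74, 95],
})

# Poisson GLM; the Poisson family in statsmodels uses the log link by default,
# matching the model structure described in the text.
model = smf.glm("intensity ~ area + fish_length", data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())
```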
For both models, we used the poisson distribution for response variable and the log as the function link [46]. All analyses were performed using Statistica 7.0 software (Statsoft Inc., Tulsa, Oklahoma). Genetic diversity and population structure A 816 bp segment of the COI gene was obtained from 175 Z. seriolae individuals sampled from the two geographical areas: 112 sequences from the NCC and 63 sequences from the JFA (Table 1). Sequence variability between NCC and JFA was 0.5-0.9 %, while intralocality variability was 0.4-0.8 %. Genetic diversity indices of Z. seriolae within and between geographic areas are summarized in Table 1. Total haplotype diversity considering the entire region of study was 0.91 ± 0.01, and nucleotide diversity was 0.007 ± 0.0001. The haplotype diversity did not differ between geographical areas (U = 37.5; p = 0.42), but nucleotide diversity was significantly lower in JFA (U = 9; p = 0.002; Table 1). Hierarchical AMOVA analysis revealed significant genetic differentiation at the spatial scale (F st = 0.1286, p < 0.05; F sc = 0.1354, p < 0.05). Eighty-seven percent of the variance was explained by variability within populations and 13 % of the remaining genetic variation was attributed to variability among populations within years ( Table 2). Parasites from JFA were significantly different from those from the NCC (Table 3); therefore, JFA is considered as a different population from NCC. At temporal scale, there were no significant genetic differences for parasites from JFA between years, but parasites from NCC showed differences between site CH of year 2012 compared with AF and CQ of 2014 (Table 3). Demographic history The Z. seriolae haplotype network revealed the existence of two genetic groups: the first group showed one main ancestral haplotype (H5) occurring in each site and geographical area, but it was more predominant in JFA where H5 was surrounded by a large number of unique haplotypes. The group 2 included three highly frequent haplotypes (H1, H2, H3) that were predominant in the NCC (Fig. 2). The mismatch distribution for each site (graph not shown) and for entire geographical area of study deviated significantly from the expected distribution under expansion model, exhibiting a bimodal distribution of pairwise differences (Fig. 3). Neutrality test showed lack of significance of the Tajimas D test (D = −0.996, p > 0.1; Table 1), indicating that parasite populations are in equilibrium under the neutral model. Fu's test, however, was significantly negative (Table 1) for NCC and JFA parasite populations, suggesting the influence of some process affecting the demographical history of Z. seriolae. Parasites from the JFA were significantly longer than those from the NCC ( Table 4). The range of total Z. seriolae body length varied from 3.06 to 24.31 mm in the NCC and from 1.65 to 25.8 mm in the JFA. The first size at sexual maturity of Z. seriolae reached 11.8 mm in NCC, and at 14.4 mm in JFA (Fig. 4). Consequently, parasites from the JFA reached sexual maturity at higher body lengths. Discussion For the effective management and health maximization of cultured fish, it is essential to have a clear understanding of the levels of gene flow, the connectivity of parasite populations or metapopulations, the sources of larval infestations, and the relative fluxes of parasites between wild and farmed fish [1,2]. 
To study these factors, genetic markers, such as mtDNA, have been widely used to detect population genetic structures [47,48], which is useful for estimating the evolutionary potential of a parasite. Our results using the COI gene provide a geographical genetic structure for the monogenean Zeuxapta seriolae, with one population present in their host S. lalandi that approaches the coast in the summer and the other population in host fish from the oceanic area in the southeastern Pacific Ocean (SEP). This parasite population genetic structure may be explained by oceanographical barriers to parasite dispersion and/or by host population segregation. Population structure in space and time is the result of both present processes and past history [49]. Populations that are established from small numbers of individuals tend to lose genetic variation due to increased effects of genetic drift [50]. However, successful colonization occurring during a long time period can accumulate new mutations, and species with relatively quick generation turnover, high fecundity and short lifespan like parasites can favour a fast rate of molecular evolution [51]. In our study, the bimodal mismatch distribution, which depends on the evolutionary history of one population [39], suggest that Z. seriolae populations are stable and consistent with populations that are geographically subdivided and have limited or low migration [52]. Additionally, network haplotype also suggests stable populations and are consistent with the lack of significance showed for neutrality Tajima's test. This test is based on the allele frequency distribution of segregating nucleotide sites [40], while Fu's test uses the distribution of alleles or haplotypes [41]; this last being considered the most sensitive test to detect some selection (or expansion) process [53]. Thus, the significant values in Fu's test indicate an excess of rare haplotypes what would be expected under neutrality (H5 connected to a high number of unique haplotypes), suggesting that in the past some purifying selection process on Z. seriolae populations [41,54] has taken place, which could have occurred as a consequence of two possible scenarios: when host fish colonized oceanic Eastern island, therefore, introducing the parasites [55] or when parasites infecting migrant fish colonized fish from JFA. Likewise, the lower genetic diversity in parasite populations from JFA might be associated with 'founder effect' [56,57] as has been suggested for the monogenean Mazocraeoides gonialosae along the coast of China [14]. Parasites infecting highly mobile hosts such as S. lalandi, could reach remote locations as well as its host. However, geographical barriers can affect either the host or parasite distribution [20,58]. JFA corresponds to islands of volcanic origin along hotspot lines of the Nazca Plate and emerged in the Plio-Pleistocene period, approximately 4 million years ago, [59]. These islands are located between the coastal and oceanic branches of the sub-Antarctic Peru or Humboldt Current, which is split by the subtropical Peruvian countercurrent [60]. Studies of the zoogeography of icthyofauna suggest that the JFA and other groups of oceanic islands in the SEP should be considered a biogeographic unit [61,62]. Until now, previous studies of other carangids, such as Trachurus murphyi [63], have not demonstrated that oceanographic barriers limit their dispersion and distribution in SEP. 
However, factors such as the Chile-Peru Current (Humboldt Current), surface gradients of temperature and salinity, and the great depths of the Chile-Peru Trench (more than 4000 m in the region) [64,65] may impose barriers to the dispersion of parasites and may thus explain the genetic structure detected in the Z. seriolae population there. The dispersal ability and genetic flow of ectoparasites is influenced by host specificity [66]. Host specificity is a key property of parasites because it is a determinant of their local extinction risk and their likelihood for successful establishment following introduction to a new region, with generalist species less prone to local extinction and better invaders than specialists [67]. The monogeneans M. gonialosae and Gotocotyla sawara, which are considered generalist parasites that infect more than one host fish species, did not show a geographical genetic structure based on COI mtDNA in the western Pacific Ocean. The absence of different genetic structures between the populations of these parasite species was attributed to a high gene flow favoured by their host range [14,15]. The dispersion stages of Z. seriolae consist of a string of eggs that can entangle in the gills of fish or join together to form light masses of numerous eggs that have a wide surface area, which allows the eggs to stay in the water column for a short time [28], with passive dispersion and free-swimming larvae that live approximately 24 h (personal obs.). The short lifetime of these developmental stages in the water column, their passive mode of dispersion and their high host specificity may decrease the probability of encountering a suitable host that favours their dispersal and genetic flow across an extensive geographical area. These parasite characteristics suggest a low degree of parasite migration between host populations, which may contribute to local adaptations [68]. Host behaviour is an important factor for parasite dispersal [9]. Therefore, hosts with an interrupted geographical distribution will affect the gene flow of their parasites [69]. S. lalandi is a cosmopolitan species distributed in the Pacific, Indian and Atlantic Oceans (www.fishbase.org). This species arrives at the NCC annually between December and April, where it is captured by artisanal fishermen, whereas S. lalandi is captured in the JFA year-round. In other geographical areas, S. lalandi shows a reproductive strategy with limited migration; specifically, only a subset of the reproductive fish migrates [70]. Additionally, tagging studies conducted in Australia suggest that S. lalandi can migrate considerable distances, but most juveniles are relatively sedentary [23,71]. However, spawning and nursery areas of S. lalandi are unknown in the SEP, the geographical genetic differences between Z. seriolae from JFA and NCC as well as the temporal genetic differentiation of Z. seriolae between localities from NCC could be indicative of different host subpopulations. Alternatively, the short generation time of the parasites combined with the subdivision of the host populations during prolonged time periods may contribute to the generation of structured parasite populations. Throughout a latitudinal gradient, variations in environmental parameters, such as temperature and salinity, can modify the biological traits of a species [72,73]. 
Monogenean biological traits vary according to environmental parameters, such as temperature and salinity, resulting in individuals achieving sexual maturity more quickly at warmer temperatures [20,[74][75][76]. Z. seriolae from the NCC area showed a smaller first size at sexual maturity and was significantly smaller and produced fewer eggs than did parasites from the JFA. Tubbs et al. [20] suggested an optimal temperature of 17.5°C for the in vitro fecundity of Z. seriolae because egg production decreases at other temperatures yet sexual maturity is reached more quickly at higher temperatures. The seawater in NCC varies between 18°C and 20°C in summer [77] and between 14°C (winter-autumn) and 20°C (spring-summer) in the waters of JFA [78]. Therefore, it is possible that the local environmental conditions can modify the biological traits of Z. seriolae populations across the SEP; however, it is also possible that genetic differences among the parasite populations are reflected in different biological traits. Regardless, the biological trait differences between populations of Z. seriolae may involve differential infestation dynamics (i.e. duration of parasite life cycles, infestation rates and parasite loads) on host populations from different geographical areas, as demonstrated for different ectoparasites in fish farming [20,79,80]. Similarly, the lower first size at sexual maturity in the NCC is predictive of shorter life cycles of this parasite in this area and probably quickly re-infests, which is associated with high densities of fish in captivity increasing the risk of outbreaks of this disease in farming [81,82]. Conclusions In this study, we detected two different populations of the parasite Z. seriolae infesting the wild fish S. lalandi across the SEP. In each area, the parasites showed different biological characteristics, such as fecundity, body size, and size at first sexual maturity, and population parameters suggesting different dynamics of infestation. Because of the seasonal migration of wild hosts in the SEP, and consequent temporal contact with farmed hosts in NCC, the quantification of diversity and genetic differentiation and level of gene flow (or isolation) between parasite populations, as well as potential parasite adaptations, is useful for fish health management in farming. The smallest sizes of sexual maturity observed in parasites from the NCC predict shorter life cycles, which along with the high genetic diversity suggest high evolutionary potential and high transmission of this parasite to farmed hosts.
Multifractal spectra and multifractal zeta-functions We introduce multifractal zeta-functions providing precise information of a very general class of multifractal spectra, including, for example, the multifractal spectra of self-conformal measures and the multifractal spectra of ergodic Birkhoff averages of continuous functions. More precisely, we prove that these and more general multifractal spectra equal the abscissae of convergence of the associated zetafunctions. Introduction. Measures with widely varying intensity are called multifractals and have during the past 20 years been the focus of enormous attention in the mathematical literature. Loosely speaking there are two main ingredients in multifractal analysis: the multifractal spectrum and the Renyi dimensions. One of the main goals in multifractal analysis is to understand these two ingredients and their relationship with each other. It is generally believed by experts that the multifractal spectrum and the Renyi dimensions of a measure encode important information about the measure, and it is therefore of considerable importance to find explicit formulas for these quantities. In Ol4,Ol5,Ol6] the authors used the zeta-function technique introduced and pioneered by M. Lapidus et al in the intriguing books in order to find explicit formulas for the Renyi dimensions of a self-similar measure. At this point we note that it is generally believed that analysing the multifractal spectrum of a measure is considerably more difficult and challenging than analysing its Renyi dimensions, and the main purpose of this paper is to address the substantially more difficult problem of finding explicit formulas for the multifractal spectrum of a self-similar measure similar to the explicit formulas for its Renyi dimensions found in Ol4,Ol5,Ol6]. In particular, and as a first step in this direction, we introduce multifractal zeta-functions providing precise information of very general classes of multifractal spectra, including, for example, the multifractal spectra of self-conformal measures and the multifractal spectra of ergodic Birkhoff averages of continuous functions. More precisely, we prove that these and more general multifractal spectra equal the abscissae of convergence of the associated zeta-functions. 1.1. The first ingredient in multifractal analysis: multifractal spectra. For a Borel measure µ on R d with support equal to K and a positive number α, let us consider the set ∆ µ (α) of those points x in R d for which the measure µ(B(x, δ)) of the ball B(x, δ) with center x and radius δ behaves like δ α for small δ, i.e. the set ∆ µ (α) = x ∈ K lim rց0 log µ(B(x, r)) log r = α . If the intensity of the measure µ varies very widely, it may happen that the sets ∆ µ (α) display a fractal-like character for a range of values of α. In this case it is natural to study the Hausdorff dimensions of the sets ∆ µ (α) as α varies. We therefore define the the multifractal spectrum of µ by where dim H denotes the Hausdorff dimension. Here and below we use the following convention, namely, we define the Hausdorff of the empty set to be −∞, i.e. we put One of the main problems in multifractal analysis is to study this and related functions. The function f µ (α) was first explicitly defined by the physicists Halsey et al. in 1986 in their seminal paper [HaJeKaPrSh]. The multifractal spectrum f µ is defined using the Hausdorff dimension. There is an alternative approach using "box-counting" arguments leading to the coarse multifractal spectrum. 
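For ease of reference, the Hausdorff multifractal spectrum f_µ defined above can be written in display form; the following simply restates the definitions given in the preceding paragraph.

\[
\Delta_{\mu}(\alpha) \;=\; \Bigl\{\, x \in K \;\Big|\; \lim_{r \searrow 0} \frac{\log \mu(B(x,r))}{\log r} = \alpha \,\Bigr\},
\qquad
f_{\mu}(\alpha) \;=\; \dim_{H} \Delta_{\mu}(\alpha),
\]
with the convention $\dim_{H}\emptyset = -\infty$.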
Namely, for a Borel probability measure µ on R d with support equal to K and a real number α, the coarse multifractal spectrum is defined as follows. For positive real numbers r > 0 and δ > 0, we write N µ,δ (α; r) = sup |I| (B(x i , δ)) i∈I is a finite family of balls such that: and define the r-approximate coarse multifractal spectrum f c µ (α; r) of µ by f c µ (α; r) = lim inf δց0 log N µ,δ (α; r) − log δ . (1.4) Finally, the coarse multifractal spectrum f c µ (α) of µ is defined by f c µ (α) = lim rց0 f c µ (α; r) (1.5) (it is clear that this limit exists since f c µ (α; r) is a monotone function of r). We note that it is easily seen that f µ (α) ≤ f c µ (α) , and that this inequality may be strict, see, for example, [Fa1]. We note that the q-moment M µ,δ (q; E) is closely related to the box dimension dim B E of E. Indeed, if we let M δ (E) denote the greatest number of pairwise disjoint balls of radii δ with centers in E, then it follows from the definition of the box dimension that dim B E = lim δ→0 log M δ (E) − log δ (provided the limit exists) and we clearly have ( 1.6) It is also possible to define an integral version of the q-moments M µ,δ (q; E). Namely, for E ⊆ R d , q ∈ R and δ > 0, we define the integral q-moment V µ,δ (q) of µ on E at scale δ by where B(E, δ) = {x ∈ R d | dist(x, E) ≤ δ} and L d denotes the Lebesgue measure in R d . We now define the lower and upper integral Renyi spectra T µ (·; E), T µ (·; E) : R → [−∞, ∞] of µ by T µ (q; E) = lim inf δց0 log V µ,δ (q; E) − log δ , T µ (q; E) = lim sup δց0 log V µ,δ (q; E) − log δ . As above, we note that the integral q-moment V µ,δ (q; E) is also closely related to the Minkowski volume of E and the box dimension dim B E of E. Namely, if we let V δ (E) denote the δ approximate Minkowski volume of E, i.e. V δ (E) = L d ( B(E, δ) ), then it is well-known that dim B E = lim δ→0 log( 1 r d V δ (E)) − log δ (provided the limit exists) and we clearly have (1.7) 1.3. The Multifractal Formalism. Based on a remarkable insight together with a clever heuristic argument, it was suggested by theoretical physicists Halsey et al. [HaJeKaPrSh] that the multifractal spectra f µ and f c µ can be computed using the Renyi dimensions. This result is known as the "Multifractal Formalism" in the physics literature. More precisely, the "Multifractal Formalism" says that the multifractal spectra equal the Legendre transform of the Renyi dimensions. Recall that if ϕ : R → R is a real valued function, then the Legendre transform ϕ * : R → [−∞, ∞] of ϕ is defined by ϕ * (x) = inf y (xy + ϕ(y)) . (1.8) We can now state the "Multifractal Formalism". for all α. During the past 20 years there has been an enormous interest in verifying the Multifractal Formalism and computing the multifractal spectra of measures in the mathematical literature. [Fa2,Pe] and the references therein. Summarizing the previous paragraph somewhat more succinctly, previous work has almost entirely concentrated on the following problem: Previous work: Previous work has concentrated on finding the limiting behaviour of the following ratios, namely, log M µ,δ (q) − log δ and log N µ,δ (α; r) − log δ . 
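These ratios are exactly the ones appearing in the Renyi dimensions and in the coarse spectrum. For reference, the coarse multifractal spectrum described above (with $N_{\mu,\delta}(\alpha;r)$ the ball-counting function introduced there) is

\[
f^{c}_{\mu}(\alpha;r) \;=\; \liminf_{\delta \searrow 0} \frac{\log N_{\mu,\delta}(\alpha;r)}{-\log \delta},
\qquad
f^{c}_{\mu}(\alpha) \;=\; \lim_{r \searrow 0} f^{c}_{\mu}(\alpha;r),
\]
and the Legendre transform of a function $\varphi : \mathbb{R} \to \mathbb{R}$ is
\[
\varphi^{*}(x) \;=\; \inf_{y}\bigl(xy + \varphi(y)\bigr).
\]
In general $f_{\mu}(\alpha) \le f^{c}_{\mu}(\alpha)$, and the Multifractal Formalism asserts that both spectra equal the Legendre transform of the Renyi dimensions.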
Indeed, computing the Renyi dimensions τ µ (q) and τ µ (q) involves analysing the limiting behaviour of log Iµ,r (q) − log r , and computing the coarse multifractal spectrum f c µ (α; r) involves analysing the limiting behaviour of Due to the importance of the quantities M µ,δ (q) and N µ,δ (α; r) it is clearly desirable not only to find expressions for the limiting behaviour of can be computed directly from these expressions. We will now describe our strategy for analysing the quantities M µ,δ (q) and N µ,δ (α; r). Very loosely speaking, the quantities M µ,δ (q) and N µ,δ (α; r) "count" the number of balls B(x, δ) satisfying certain conditions. There are two distinct and widely used techniques for analysing the asymptotic behaviour of such (and similar) "counting functions", namely, (1) using ideas from renewal theory or (2) using the Mellin transform and the residue theorem to express the "counting functions" as sums involving the residues of suitably defined zeta-functions. Indeed, renewal theory techniques were introduced and pioneered by Lalley [La1,La2,La2] in the 1980's, and later investigated further by Gatzouras [Ga], Winter [Wi] and most recently Kesseböhmer & Kombrink [KeKo], in order to analyse the asymptotic behaviour of the "counting function" M δ (E) = M µ,δ (0, E) = M µ,δ (0) for self-similar sets E (see (1.6)) and similar "counting functions" from fractal geometry. However, while renewal theory techniques are powerful tools for analysing the asymptotic behaviour of "counting functions", they do not yield "explicit" formulas. This is clearly unsatisfactory and it would be desirable if "explicit" expressions could be found. However, despite, or perhaps in spite, of the difficulties, the problem of finding "explicit" formulas of "counting functions" in fractal geometry has recently attracted considerable interest. In particular, Lapidus and collaborators [LapPea1,LapPea2,LapPeaWi, have with spectacular success during the past 20 years pioneered the use of applying the Mellin transform to suitably defined zeta-functions in order to obtain explicit formulas for the It would clearly be desirable if similar formulas could be found for the multifractal quantities M µ,δ (q) and N µ,δ (α; r) of self-similar (and more general) multifractal measures µ. In multifractal analysis it is generally believed that analysing the the q-moments M µ,δ (q) and the associated Renyi dimenions τ * µ (α) and τ * µ (α) is less difficult than analysing the "counting function" N µ,δ (α; r) and the associated multifractal spectra f µ and f c µ . Indeed, in [Le-VeMe,Ol4] (see also the surveys [Ol5,Ol6]) the authors introduced a one-parameter family of multifractal zeta-functions and established explicit formulas for the integral q-moments V µ,δ (q) expressing V µ,δ (q) as a sum involving the residues of these zeta-functions, and in [Ol1] the asymptotic behaviour of the q-moments M µ,δ (q) were analysed using techniques from renewal theory. In addition, we note that Lapidus and collaborators have introduced various intriguing multifractal zeta-functions [LapRo,LapLe-VeRo]. However, the multifractal zetafunctions in [LapRo,LapLe-VeRo] serve very different purposes and are significantly different from the multifractal zeta-functions introduced in [Le-VeMe, Ol2,Ol4]. The purpose of this paper is to address the significantly more difficult and challenging problem of performing a similar analysis of the multifractal spectrum "counting function" N µ,δ (α; r). 
In particular, the final aim is to introduce a class of multifractal zeta-functions allowing us to derive explicit formulas for the "counting function" N µ,δ (α; r) expressing N µ,δ (α; r) as a sum involving the residues of these zeta-functions. As a first step in this direction, in this work we introduce multifractal zeta-functions providing precise information of very general classes of multifractal spectra, including, for example, the spectra f µ and f c µ of selfsimilar multifractal measures µ. More precisely, we prove that the multifractal spectra equal the abscissae of convergence of the associated zeta-functions. It is our hope that a more careful analysis of these zeta-functions will provide explicit formulas for the "counting function" N µ,δ (α; r) allowing us to express N µ,δ (α; r) as a sum involving the residues of these zeta-functions; this will be explored in [MiOl]. In order to illustrate the ideas involved we now consider a simple example. 1.4. An example illustrating the ideas: self-similar measures. To illustrate the above ideas in a simple setting, we consider the following example involving self-similar measures. Recall, that self-similar measures are defined as follows. Let (S 1 , . . . , S N ) be a list of contracting similarities S i : R d → R d and let r i denote the similarity ratio of S i . Also, let (p 1 , . . . , p N ) be a probability vector. Then there is a unique Borel probability measure µ on R d such that see [Fa1,Hu]. The measure µ is called the self-similar measure associated with the list (S 1 , . . . , S N , p 1 , . . . , p N ). If the so-called Open Set Condition (OSC) is satisfied, then the multifractal spectra f µ and f c µ are given by the following formula. Namely, if if the OSC is satisfied and if we define for all α ∈ R where β * denotes the Legendre transform of β (recall, that the definition of the Legendre transform is given in (1.8)). For α ∈ R, we are now attempting to introduce a "natural" self-similar multifractal zeta-function ζ sim α whose abscissa of convergence equals f µ (α). To do this we first introduce the following notation. Write Σ * = {i = i 1 . . . i n | n ∈ N , i j ∈ {1, . . . , N } } i.e. Σ * is the set of all finite strings i = i 1 . . . i n with n ∈ N and i j ∈ {1, . . . , N }. For a finite string i = i 1 . . . i n ∈ Σ * of length n, we write |i| = n, and we write r i = r i1 · · · r in and p i = p i1 · · · p in . With this notation, we can now motivate the introduction of a "natural" multifractal zeta-function as follows. Namely, since f µ (α) measures the size of the set of points x for which lim δց0 log µ(B(x,δ)) log δ = α and since log µ(B(x,δ)) log δ has the same form as log p i log r i , it is natural to define the self-similar multifractal zeta-function ζ sim α by for those complex numbers s for which the series converges absolutely. An easy and straight forward calculation (which we present below) shows that the abscissa of convergence σ ab ( ζ sim log ri ], then it is easily seen that that for all i ∈ Σ * , we have log p i log r i = α, whence σ ab ( ζ sim α ) = −∞, and inequality (1.12) is therefore trivially satisfied. On the other hand, if α ∈ [min i log pi log ri , max i log pi log ri ], then it follows from [CaMa, Fa1,Pa] that there is a (unique) q ∈ R with f µ (α) = f c µ (α) = αq + β(q). Hence, for each ε > 0, we have (using the fact that i p q i r However, it is also clear that we, in general, do not have equality in (1.12). 
Indeed, the set { log p i log r i | i ∈ Σ * } is clearly countable (because Σ * is countable) and if α ∈ R \ { log p i log r i | i ∈ Σ * }, then σ ab ( ζ α ) = −∞ (because the series (1.11) that defines ζ sim α (s) is obtained by summing over the empty set). Since it also follows from [CaMa, Fa1,Pa] log ri ), we therefore conclude that: for all except at most countably many α ∈ (min (1.13) It follows from the above discussion that while the definition of ζ sim α (s) is "natural", it is not does not encode sufficient information allowing us to recover the multifractal spectra f µ (α) and f c µ (α). The reason for the strict inequality in (1.13) is, of course, clear: even though there are no strings i ∈ Σ * for which the ratio log p there are nevertheless many sequences (i n ) n of strings i n ∈ Σ * for which the ratios log p in log r in converges to α. In order to capture this, it is necessary to ensure that those strings i for which the ratio log p i log r i is "close" to α are also included in the series defining the multifractal zeta-function. For this reason, we modify the definition of ζ sim α and introduce a self-similar multifractal zeta-function obtained by replacing the original small "target" set {α} by a larger "target" set I (for example, we may choose the enlarged "target" set I to be a non-degenerate interval). In order to make this idea precise we proceed as follows. For a closed interval I, we define the self-similar multifractal zeta-function ζ sim for those complex numbers s for which the series converges absolutely. Observe that if I = {α}, then ζ sim I (s) = ζ sim α (s) . We can now proceed in two equally natural ways. Either, we can consider a family of enlarged "target" sets shrinking to the original main "target" {α}; this approach will be referred to as the shrinking target approach. Or, alternatively, we can consider a fixed enlarge "target" set and regard this as our original main "target"; this approach will be referred to as the fixed target approach. We now discuss these approaches in more detail. (1) The shrinking target approach. For a given (small) "target" {α}, we consider the following family [α − r, α + r] r>0 of enlarged "target" sets [α − r, α + r] shrinking to the original main "target" {α} as r ց 0, and attempt to relate the limiting behaviour of the abscissa convergence of ζ sim [α−r,α+r] to the multifractal spectrum f µ (α) at α. In order to make this idea formal we proceed as follows. For each α ∈ R and for each r > 0, we define the zeta-function ζ sim α (·; r) by ζ sim α (s; r) = ζ sim [α−r,α+r] (s) The next result, which is an application of one of our main results (see Theorem 3.6), shows that the multifractal zeta-functions ζ sim α (·; r) encode sufficient information allowing us to recover the multifractal spectra f µ (α) and f c µ (α) by letting r ց 0. (2) The fixed target approach Alternatively we can keep the enlarged "target" set I fixed and attempt to relate the abscissa of convergence of the multifractal zeta-function ζ sim I associated with the enlarger "target" set I to the values of the multifractal spectrum f µ (α) for α ∈ I. Of course, inequality(1.13) shows that if the "target" set I is "too small", then this is not possible. However, if the enlarger "target" set I satisfies a mild non-degeneracy condition, namely condition (1.16), guaranteeing that I is sufficiently "big", then the next result, which is also an application of one of our main results (see Theorem 3.6), shows that this is possible. 
More precisely the result shows that if the enlarger "target" set I satisfies condition (1.16), then the multifractal zeta-function ζ sim I associated with the enlarger "target" set I encode sufficient information allowing us to recover the suprema sup α∈I f µ (α) and sup α∈I f c µ (α) of the multifractal spectra f µ (α) and f c µ (α) for α ∈ I. Theorem 1.2. Fixed targets. Assume that the list (S 1 , . . . S N ) satisfies the OSC and let µ be the self-similar measure defined by (1.9). For a closed interval I, let ζ sim I be defined by (1.14). If where σ ab ζ sim I denotes the abscissa of convergence of the zeta-function ζ sim I . We emphasise that Theorem 1.1 and Theorem 1.2 are presented in order to motive this work and are special cases of the substantially more general and abstract theory of multifractal zeta-function developed in this paper. The next section, i.e. Section 2, describes the general framework developed in this paper and list our main results. In Section 3 we will discuss a number of examples, including, mixed and non-mixed multifractal spectra of self-similar and self-conformal measures, and multifractal spectra of Birkhoff ergodic averages. Statements of main results. 2.1. Main definitions: the zeta-functions ζ U,Λ C (·) and ζ U,Λ C (·; r). In this section we describe the framework developed in this paper and list our main results. We first recall and introduce some useful notation. Fix a positive integer N . Let Σ = {1, . . . , N } and for a positive integer n, write i.e. Σ n is the family of all strings i = i 1 . . . i n of length n with i j ∈ {1, . . . , N } and Σ * is the family of all finite strings i = i 1 . . . i m with m ∈ N and i j ∈ {1, . . . , N }. Also write i.e. Σ N is the family of all infinite strings i = i 1 i 2 . . . with i j ∈ {1, . . . , N }. For an infinite string i = i 1 i 2 . . . ∈ Σ N and a positive integer n, we will write i|n = i 1 . . . i n . In addition, for a positive integer n and a finite string i = i 1 . . . i n ∈ Σ n with length equal to n, we will write |i| = n, and we let [i] denote the cylinder generated by i, i.e. Also, let S : Σ N → Σ N denote the shift map. Finally, we denote the family of Borel probability measures on Σ N by P(Σ N ) and we equip P(Σ N ) with the weak topology. The multifractal zeta-function framework developed in this paper depend on a space X and two maps U and Λ satisfying various conditions. We will now introduce the space X and the maps U and Λ. (1) First, we fix a metric space X. (3) Finally, we fix a function Λ : Σ N → R satisfying the following three conditions: (C1) The function Λ is continuous; (C2) There are constants c min and c max with −∞ < c min ≤ c max < 0 such that c min ≤ Λ ≤ c max ; (C3) There is a constant c with c ≥ 1 such that for all positive integers n and all i, j ∈ Σ N with i|n = j|n, we have Condition (C2) is clearly motivated by the hyperbolicity condition from dynamical systems, and Condition (C3) is equally clearly motivated the bounded distortion property from dynamical systems. Associated with the space X and the maps U and Λ, we now define the following multifractal zeta-functions. Definition. The zeta-functions ζ U,Λ C and ζ U,Λ C (·; r) associated with the space X and the maps U and Λ. 
For a finite string i ∈ Σ n , let and for a positive integer n and an infinite string i ∈ Σ N , let L n : Σ N → P(Σ N ) be defined by For C ⊆ X, we define the zeta-function ζ U,Λ C associated with the space X and the maps U and Λ by for those complex numbers s for which the series converges absolutely, and for r > 0 and C ⊆ X, we define the zeta-function ζ U,Λ C (·; r) associated with the space X and the maps U and Λ by for those complex numbers s for which the series converges absolutely and where B(C, r) = {x ∈ X | dist(x, C) ≤ r} denotes the closed r neighborhood of C. Next, we formally define the abscissa of convergence (of a zeta-function). Definition. Abscissa of convergence. Let ( a i ) i∈Σ * be a family of positive numbers and define the (zeta-)function ζ by ζ(s) = i a s i for those complex numbers s for which the series converges. The abscissa of convergence of ζ is defined by Our main results, i.e. Theorem 2.1 and Theorem 2.2 below, relate the abscissa of converge of the zeta-functions ζ U,Λ C (·; r) and ζ U,Λ C to various multifractal quantities, including, the coarse multifractal spectrum associated with the space X and the maps U and Λ. In order to state Theorem 2.1 and Theorem 2.2 we will now define the coarse multifractal spectra. Definition. The coarse multifractal spectra associated with the space X and the maps U and Λ. For i = i 1 . . . i n ∈ Σ * , we let i = i 1 . . . i n−1 ∈ Σ * denote the "parent" of i. Next, for i ∈ Σ * and δ > 0, we write and . We define the lower and upper r-approximate coarse multifractal spectrum associated with the space X and the maps U and Λ by and we define the lower and upper coarse multifractal spectrum associated with the space X and the maps U and Λ by Below we state our main results. As suggested by the discussion in Section 1.4, we will attempt to relate the abscissae of convergence of the multifractal zeta-functions ζ U,Λ C and ζ U,Λ C (·; r) to various multifractal spectra using two different but equally natural approaches: the shrinking target approach or the fixed target approach. The shrinking target approach is discussed in Section 2.2 and the fixed target approach is discussed in Section 2.3. 2.2. First main result. The shrinking target approach: finding lim rց0 σ ab ζ U,Λ C (·; r) . For a given "target" C, we consider the following family B(C, r) r>0 of enlarged "target" sets B(C, r) shrinking to the original main "target" C as r ց 0, and attempt to relate the limiting behaviour of the abscissa convergence of the zeta-function ζ U,Λ C (·; r) = ζ U,Λ B(C,r) to the coarse multifractal spectrum f U,Λ (C) and other multifractal quatities. Our first main result, i.e. Theorem 2.1 below, shows that this is possible. More precisely, Theorem 2.1 shows that the abscissa of convergence of the zeta-function ζ U,Λ C (·; r) converges as r ց 0, and that this limit equals the coarse multifractal spectrum of C. We also show that the limit can be obtained by a variational principle involving the supremum of the entropy of all shift invariant Borel probability measures µ ∈ P(Σ N ) with U µ ∈ C. In Section 3 we show that in many important cases the limit lim rց0 σ ab ζ U,Λ C (·; r) equals the traditional multifractal spectra. Theorem 2.1. Shrinking targets. Let X be a metric space and let U : P(Σ N ) → X be continuous with respect to the weak topology. Let C ⊆ X be a closed subset of X. 
(1) The lower coarse multifractal spectrum associated with the space X and the maps U and Λ: we have (2) The variational principle: we have here P S (Σ N ) denotes the family of shift invariant Borel probability measures on Σ N and h(µ) denotes the entropy of µ ∈ P S (Σ N ). In order to prove Theorem 2.1 it suffices to prove the following three inequalities: Inequality (2.1) is proven in Section 5 using techniques from the theory of large deviations. Inequality (2.2) is proven in Section 6 using techniques from ergodic theory. Finally, inequality (2.3) follows directly from the definitions and is proved in Section 7. 2.3. Second main result. The fixed target approach: finding σ ab ζ U,Λ C . Alternatively, instead of choosing a family of "target" sets that shrinks to the given "target" C, we can keep the given "target" set C fixed and attempt to relate the abscissa of convergence of the multifractal zetafunction ζ U,Λ C associated with the "target" set C to the values of the multifractal spectrum coarse multifractal spectrum f U,Λ (C). Of course, the example in Section 1.4 shows that if the "target" set C is "too small", then this is not possible. However, if the coarse multifractal spectrum f U,Λ satisfies a continuity condition at C guaranteeing that the interior of C is "sufficiently big", then our second main result, i.e. Theorem 2.2 below, shows that this is possible. More precisely, Theorem 2.2 shows that if the coarse multifractal spectrum f U,Λ is inner continuous at C (the definition of inner continuity will be given below), then the abscissa of convergence of the zeta-function ζ U,Λ C equals the coarse multifractal spectrum of C. In analogy with Theorem 2.1, we also show that the abscissa of convergence of ζ U,Λ C can be obtained by a variational principle involving the supremum of the entropy of all shift invariant Borel probability measures µ ∈ P(Σ N ) with U µ ∈ C. However, before stating Theorem 2.2, we first define the continuity condition that the coarse multifractal spectrum f U,Λ is required to satisfy. Definition. Inner continuity. Let P (X) denote the family of subsets of X and for C ⊆ X and r > 0, write We say that a function Φ : We can now state Theorem 2.2. Theorem 2.2. Fixed targets. Fix a positive integer M . Let U : P(Σ N ) → R M be continuous with respect to the weak topology. Let C ⊆ R M be a closed subset of R M and assume that f U,Λ is inner continuous at C. (1) The lower coarse multifractal spectrum associated with R M and the maps U and Λ: we have (2) The variational principle: we have here P S (Σ N ) denotes the family of shift invariant Borel probability measures on Σ N and h(µ) denotes the entropy of µ ∈ P S (Σ N ). Theorem 2.2 follows easily from Theorem 2.1 and is proved in Section 8. Euler product. We will now prove that the multifractal zeta-function ζ U,Λ C has a natural Euler product. We begin with a definition. Definition. Composite and prime. A finite string i ∈ Σ * is called composite (or peiodic) if there is u ∈ Σ * and a positive integer n > 1 such that i = u . . . u where u is repeated n times. A finite string i ∈ Σ * is called prime if it is not composite. Theorem 2.3 shows that ζ U,Λ C has an Euler product. In Theorem 2.3 we use the following notation, namely, if f is a holomorphic function that does not attain the value 0, then we let Lf denote the logarithmic derivative of f , i.e. Lf = f ′ f . We can now state Theorem 2.3. Theorem 2.3. Euler product. 
Let X be a metric space and let U : P(Σ N ) → X be continuous with respect to the weak topology. Assume that (1) For complex numbers s with Re(s) > σ ab ( ζ U,Λ C ), the product Theorem 2.3 is proved in Section 9. 3. Applications: multifractal spectra of measures and multifractal spectra of ergodic Birkhoff averages We will now consider several of applications of Theorem 2.1 and Theorem 2.2 to multifractal spectra of measures and ergodic averages. In particular, we consider the following examples: • Section 3.1: Multifractal spectra of self-conformal measures. 3.1. Multifractal spectra of self-conformal measures. Since our examples are formulated in the setting of self-conformal (or self-similar) measures we begin be recalling the definition of selfconformal (and self-similar) measures. A conformal iterated function system with probabilities is a list It follows from [Hu] that there exists a unique non-empty compact set K with K ⊆ X such that The set K is called the self-conformal set associated with the list V , X , (S i ) i=1,... ,N ; in particular, if each map S i is a contracting similarity, then the set K is called the self-similar set associated with the list V , X , .. ,N is a probability vector then it follows from [Hu] that there is a unique probability measure µ with supp µ = K such that The measure µ is called the self-conformal measure associated with the list V , X , .. ,N . We will frequently assume that the list V , X , (S i ) i=1,... ,N satisfies the Open Set Condition defined below. Namely, the list Next, we define the natural projection map π : Σ N → K. However, we first make the follwing definitions. Namely, for i = i 1 . . . i n ∈ Σ * , write The natural projection map π : Σ N → K is now defined by Finally, we collect the definitions and results from multifractal analysis of self-conformal measures that we need in order to state our main results. We first recall, that the Hausdorff multifractal spectrum f µ of µ is defined by . ∈ Σ N , and for q ∈ R, let β(q) be the unique real number such that 0 = P β(q)Λ + qΦ ; here, and below, we use the following standard notation, namely if ϕ : Σ N → R is a Hölder continuous function, then P (ϕ) denotes the pressure of ϕ. Also, recall that the Legendre transform is defined in (1.8). We can now state Patzschke's result. Theorem A [P]. Let µ be defined by (3.2) and α ∈ R. If the OSC is satisfied, then we have Of course, in general, the limit lim rց0 [Mo]) have shown that the set of divergence points, i.e. the set of points x for which the limit lim rց0 log µB(x,r) log r does not exist, typically is highly "visible" and "observable", namely it has full Hausdorff dimension. More precisely, it follows from [BaSc] that if the OSC is satisfied and t denotes the Hausdorff dimension of K, then Hausdorff measure restricted to K. This suggests that the set ∆ µ has a surprising rich and complex fractal structure, and in order to explore this more carefully Olsen & Winter [OlWi1,OlWi2] introduced various generalised multifractal spectra functions designed to "see" different sets of divergence points. In order to define these spectra we introduce the following notation. If M is a metric space and ϕ : (0, ∞) → M is a function, then we write acc rց0 f (r) for the set of accumulation points of f as r ց 0, i.e. acc rց0 ϕ(r) = x ∈ M x is an accumulation point of f as r ց 0 . [Ca,Vo] for earlier but related results in a slightly different setting). 
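With the accumulation-point notation just introduced, the generalised Hausdorff multifractal spectrum investigated by Olsen & Winter, and discussed in the next paragraph, is the dimension of the set of points whose accumulation set of local scaling ratios is contained in a prescribed closed set $C$. The display below gives this spectrum in the form in which it reappears later in the text (for example in Theorem 3.6) and should be read as a paraphrase rather than a verbatim quotation of [OlWi1]:

\[
F_{\mu}(C) \;=\; \dim_{H} \Bigl\{\, x \in \operatorname{supp}\mu \;\Big|\; \operatorname{acc}_{r \searrow 0} \frac{\log \mu(B(x,r))}{\log r} \subseteq C \,\Bigr\}.
\]
In particular, if $C = \{\alpha\}$ is a singleton, then $F_{\mu}(\{\alpha\}) = f_{\mu}(\alpha)$, so this spectrum genuinely extends the traditional one.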
In [OlWi1] Olsen & Winter introduced and investigated the generalised Hausdorff multifractal spec Theorem B [LiWuXi, Mo,OlWi1]. Let µ be defined by (3.2) and let C be a closed subset of R. If the OSC is satisfied, then we have As a first application of Theorem 2.1 and Theorem 2.2 we obtain a zeta-function whose abscissa of convergence equals the generalised multifractal spectrum F µ (C) of a self-conformal measure µ. The is the content of the next theorem. Theorem 3.1. Multifractal zeta-functinons for multifractal spectra of self-conformal measures. Let (p 1 , . . . , p N ) be a probability vector, and let µ denote the self-conformal measure associated with the list V , X , For a closed set C ⊆ R, we define the self-conformal multifractal zeta-function by For a closed set C ⊆ R and r > 0, we define the self-conformal multifractal zeta-function by and if α ∈ R and C = {α} is the singleton consisting of α, then we write ζ con for q ∈ R. Let C be a closed subset of R. Then the following hold: In particular, if α ∈ R, then we have lim rց0 σ ab ζ con α (·; r) = β * (α) . (1.2) If the OSC is satisfied, then we have In particular, if the OSC is satisfied and α ∈ R, then we have (2.1) If C is an interval and (2.2) If C is an interval and • C ∩ − β ′ (R) = ∅ and the OSC is satisfied, then we have Proof This follows immediately from the more general Theorem 3.2 in Section 3.2 by putting M = 1. 3.2. Mixed multifractal spectra of self-conformal measures. Recently mixed (or simultaneous) multifractal spectra have generated an enormous interest in the mathematical literature, see [BaSa,Mo,Ol2,Ol3]. Indeed, previous result (Theorem A and Theorem B) only considered the scaling behaviour of a single measure. Mixed multifractal analysis investigates the simultaneous scaling behaviour of finitely many measures. Mixed multifractal analysis thus combines local characteristics which depend simultaneously on various different aspects of the underlying dynamical system, and provides the basis for a significantly better understanding of the underlying dynamics. We will now make these ideas precise. For m = 1, . . . , M , let (p m,1 , . . . , p m,N ) be a probability vector, and let µ m denote the self-conformal measure associated with the list V , X , The mixed multifractal spectrum f µ µ µ of the list µ µ µ = (µ 1 , . . . , µ M ) is defined by log µ 1 (B(x, r)) log r , . . . , log µ M (B(x, r)) log r = α α α for α α α ∈ R M . Of course, it is also possible to define generalised mixed multifractal spectra designed to "see" different sets of divergence points. Namely, we define the generalised mixed Hausdorff multifractal spectrum F µ µ µ of the list µ µ µ = (µ 1 , . . . , µ M ) by Again we note that the generalised mixed multifractal spectrum is a genuine extensions of the traditional mixed multifractal spectrum F µ (α α α), namely, if C = {α α α} is a singleton consisting of the point α α α, then clearly F µ (C) = f µ (α α α). Assuming the OSC, the generalised mixed multifractal spectrum F µ (C) can be computed [Mo,Ol2]. In order to state the result from [Mo,Ol2], we introduce the following definitions. Define Λ, Φ m : Σ N → R for m = 1, . . . , M by Λ(i) = log |DS i1 (πSi)| and Φ m (i) = log p m,i1 for i = i 1 i 2 . . . ∈ Σ N , and write Φ Φ Φ = (Φ 1 , . . . , Φ M ). Define β : R M → R by 0 = P β(q)Λ + q|Φ Φ Φ for q ∈ R M ; recall that if ϕ : Σ N → R is a Hölder continuous map, then P (ϕ) denotes the pressure of ϕ. 
Also, for x, y ∈ R M , we let x|y denote the usual inner product of x and y, and if ϕ : R M → R is a function, we define the Legendre transform ϕ * : The generalised mixed multifractal spectra f µ µ µ and F µ µ µ are now given by the following theorem. Theorem C. [Mo,Ol2]. Let µ 1 , . . . , µ M be defined by (3.3) and let C ⊆ R M be a closed set. Put µ µ µ = (µ 1 , . . . , µ M ). If the OSC is satisfied, then we have In particular, if the OSC is satisfied and α α α ∈ R M , then we have As a second application of Theorem 2.1 and Theorem 2.2 we obtain a zeta-function whose abscissa of convergence equals the generalised mixed multifractal spectrum F µ µ µ (C) of a list µ µ µ of self-conformal measures. The is the content of the next theorem. For a closed set C ⊆ R M , we define the self-conformal multifractal zeta-function by For a closed set C ⊆ R M and r > 0, we define the self-conformal multifractal zeta-function by for q ∈ R M . Let C be a closed subset of R M . Then the following hold: (1.1) We have lim rց0 σ ab ζ con C (·; r) = sup α α α∈C β * (α α α) . (1.2) If the OSC is satisfied, then we have lim rց0 σ ab ζ con C (·; r) = sup (2.2) If C is convex and • C ∩ − ∇β(R M ) = ∅ and the OSC is satisfied, then we have We will now prove Theorem 3.2. Recall that the function Λ : Σ N → R is defined by It is well-known that Λ satisfies Conditions (C1)-(C3) in Section 2.1. Also, a straight forward calculation shows that sup k∈ and note that if i ∈ Σ * , then Hence, for C ⊆ R M we have In order to prove Theorem 3.2, we first prove the following three auxiliary results, namely, Propositions 3.3-3.5. Proof This result is folklore for M = 1. The proof of Proposition 3.3 for an arbitrary positive integer can (with some modifications) be modelled on the argument for M = 1. However, for the sake of brevity we have decided to omit the proof. (4) Let C be a closed subset of R M . If C is convex and Proof (1) It is well-known that there is a constant c 0 > 0 such that for all i ∈ Σ * and all u ∈ Σ N , we have 1 c0 ≤ diam K i |DS i (πu)| ≤ c 0 , see, for example, [Fa2] or [Pa]. It is not difficult to see that the desired result follows from this and the fact that the function Λ : Σ N → R defined by Λ(i) = log |DS i1 (πSi)| for i = i 1 i 2 . . . ∈ Σ N satisfies Conditions (C1)-(C3) in Section 2.1. (2) Fix r > 0. Let (∆ n ) n be the sequence from (1). Since ∆ n → 0, we can find a positive integer N r such that if n ≥ N r , then ∆ n ≤ r. Consequently, using (3.10) in Part (1), for s ∈ R, we have (3.14) The desired results follow immediately from inequalities (3.13) and (3.14). We can now prove Theorem 3.2. 3.3. Multifractal spectra of self-similar measures. Due to important role self-similar measures play in fractal geometry, it is instructive to note the following special case of Theorem 3.1. Theorem 3.6. Multifractal zeta-functinons for multifractal spectra of self-similar measures. Assume that the maps S 1 , . . . , S N are contracting similarities and let r i denote the contraction ratio of S i . For i = i 1 . . . i n ∈ Σ * , let r i = r i1 · · · r in . Let (p 1 , . . . , p N ) be a probability vector, and let µ denote the self-conformal measure associated with the list V , X , For a closed set C ⊆ R, we define the self-similar multifractal zeta-function by For a closed set C ⊆ R and r > 0, we define the self-similar multifractal zeta-function by and if α ∈ R and C = {α} is the singleton consisting of α, then we write ζ C (s; r) = ζ α (s; r), i.e. we write for q ∈ R. Let C be a closed subset of R. 
Then the following hold: In particular, if α ∈ R, then we have lim rց0 σ ab ζ sim α (·; r) = β * (α) . (1.2) If the OSC is satisfied, then we have In particular, if the OSC is satisfied and α ∈ R, then we have (2.1) If C is an interval and log ri = ∅ and the OSC is satisfied, then we have log µ(B(x, r)) log r ⊆ C . Proof Theorem 3.6 follows immediately from Theorem 3.1. It is, of course, also possible to formulate a version of Theorem 3.2 for a finite list self-similar measures. However, for sake of brevity we have decided not to do this. 3.4. Multifractal spectra of ergodic Birkhoff averages. We first fix γ ∈ (0, 1) and define the metric d γ on Σ N by d γ (i, j) = γ max{n | i|n=j|n} ; throughout this section, we equip Σ N with the metric d γ and continuity and Lipschitz properties of functions f : Σ N → R from Σ N to R will always refer to the metric d γ . Multifractal analysis of Birkhoff averages has received significant interest during the past 10 years, see, for example, [BaMe,FaFe,FaFeWu,FeLaWu,Oli,Ol3,OlWi2]. The multifractal spectrum F erg f of ergodic Birkhoff averages of a continuous function f : Σ N → R is defined by for α ∈ R; recall that the projection map π : Σ N → R d is defined in Section 3.1 and that S : Σ N → Σ N denotes the shift map. One of the main problems in multifractal analysis of Birkhoff averages is the detailed study of the multifractal spectrum F erg f . For example, Theorem D below is proved in different settings and at various levels of generality in [FaFe,FaFeWu,FeLaWu,Oli,Ol3,OlWi2]. Before we can state we introduce the following notation. If (x n ) n is a sequence of real numbers, then we write acc n x n for the set of accumulation points of (x n ) n , i.e. acc n x n = x ∈ R x is an accumulation point of (x n ) n . Also, recall that P S (Σ N ) denotes the family of shift invariant Borel probability measures on Σ N and that h(µ) denotes the entropy of µ ∈ P S (Σ N ). We can now state Theorem D. Theorem D. [FaFe,FaFeWu,FeLaWu,Oli,Ol3,OlWi2]. Let f : Σ N → R be a Lipschitz function. Define Λ : Σ N → R by Λ(i) = log |DS i1 (πSi)| for i = i 1 i 2 . . . ∈ Σ N . Let C be a closed subset of R. If the OSC is satisfied, then In particular, if the OSC is satisfied and α ∈ R, then we have As a third application of Theorem 2.1 we obtain a zeta-function whose abscissa of convergence equals the multifractal spectrum F erg f of ergodic Birkhoff averages of a Lipschitz function f . This is the content of the next theorem. Theorem 3.7. Multifractal zeta-functinons for multifractal spectra of of ergodic Birkhoff averages. Let f : Σ N → R be a Lipschitz function. For i ∈ Σ * , let and write i = iii . . . ∈ Σ N . For a closed set C ⊆ R M , we define the self-similar multifractal zetafunction of f by ζ erg and if α ∈ R and C = {α} is the singleton consisting of α, then we write ζ C (s; r) = ζ α (s; r), i.e. we write Then the following hold: (1) We have In particular, if α ∈ R, then we have (2) If the OSC is satisfied, then we have In particular, if the OSC is satisfied and α ∈ R, then we have We will now prove Theorem 3.7. Recall that the function Λ : Σ N → R is defined by ( 3.21) and note that if i ∈ Σ * , then Hence, for C ⊆ R we have In order to prove Theorem 3.7, we first prove the following auxiliary result, namely, Proposition 3.8. (1) There is a sequence (∆ n ) n with ∆ n > 0 for all n and ∆ n → 0 such that for all closed subsets C of R and for all n ∈ N, i ∈ Σ n and u ∈ Σ N , we have (2) We have lim rց0 σ ab ζ erg C (·; r) = lim rց0 σ ab ζ U,Λ C (·; r) . 
Proof (1) Let Lip(f ) denote the Lipschitz constant of f . It is clear that for all n ∈ N, i ∈ Σ n and u ∈ Σ N , we have . (3.23) It is not difficult to see that the desired result follows from (3.23). (2) This statement follows from Part (1) by an argument very similar to the proof of Part (2) and Part (3) in Proposition 3.5, and the proof is therefore omitted. We can now prove Theorem 3.7. Proof of Theorem 3.7 (1) This statement follows immediately from Theorem 2.1 and Proposition 3.8. (2) This statement follows immediately from Part (1) using Theorem 2.2 and Theorem D. Preliminary results The purpose of this short section is to prove Proposition 4.1 establishing various auxiliary results needed for the proof of Theorem 2.1. Let c min and c max be the constants from the Condition (C2) in Section 2.1 and write s min = e cmin , s max = e cmax . (4.1) we can now state and prove Proposition 4.1. Recall, that for i ∈ Σ n , the number s i is defined by Proposition 4.1. Let c be the constant from Condition (C3) in Section 2.1. Let i, j ∈ Σ * . ( (4) For k ∈ Σ N and a positive integer n, we have exp (5) For k ∈ Σ N and a real number α, the following two statements are equivalent: (ii) 1 n log s k|n → α. Proof of inequality (2.1) The purpose of this section is to prove Theorem 5.5 providing a proof of inequality (2.1). The proof of (2.1) is based on results from large deviation theory. In particular, we need Varadhan's [Va] large deviation theorem (Theorem 5.1.(i) below), and a non-trivial application of this (namely Theorem 5.1.(ii) below) providing first order asymptotics of certain "Boltzmann distributions". Definition. Let X be a complete separable metric space and let (P n ) n be a sequence of probability measures on X. Let (a n ) n be a sequence of positive numbers with a n → ∞ and let I : X → [0, ∞] be a lower semicontinuous function with compact level sets. The sequence (P n ) n is said to have the large deviation property with constants (a n ) n and rate function I if the following two condistions hold: (i) For each closed subset K of X, we have lim sup n 1 a n log P n (K) ≤ − inf x∈K I(x) ; (ii) For each open subset G of X, we have lim inf n 1 a n log P n (G) ≥ − inf x∈G I(x) . Theorem 5.1. Let X be a complete separable metric space and let (P n ) n be a sequence of probability measures on X. Assume that the sequence (P n ) n has the large deviation property with constants (a n ) n and rate function I. Let F : X → R be a continuous function satisfying the following two conditions: (i) For all n, we have exp(a n F ) dP n < ∞ . (ii) We have lim M→∞ lim sup n 1 a n log {M≤F } exp(a n F ) dP n = −∞ . (Observe that the Conditions (i)-(ii) are satisfied if F is bounded.) Then the following statements hold. (1) We have lim n 1 a n log exp(a n F ) dP n = − inf x∈X (I(x) − F (x)) . (2) For each n define a probability measure Q n on X by Q n (E) = E exp(a n F ) dP n exp(a n F ) dP n . Then the sequence (Q n ) n has the large deviation property with constants (a n ) n and rate function ( Proof We start by introducing some notation. If i ∈ Σ * , then we define i ∈ Σ N by i = ii . . . . We also define M n : Σ N → P S (Σ N ) by for i ∈ Σ N ; recall, that the map L n : Σ N → P(Σ N ) is defined in Section 2. Furthermore, note that if i ∈ Σ N , then M n i is shift invariant, i.e. M n maps Σ N into P S (Σ N ) as claimed. Next, let P denote the probability measure on Σ N given by Finally, we define F : Observe that since Λ is bounded, i.e. Λ ∞ < ∞, we conclude that F ∞ = |t| Λ ∞ < ∞. 
Also, for a positive integer n, define probability measures P n , Q n ∈ P(P S (Σ N )) by We now prove the following two claims. This completes the proof of Claim 2. Combining Claim 1 and Claim 2 shows that Let c be the constant from Condition (C3) in Section 2.1, and notice that it follows from Proposition 4.1 that if i ∈ Σ N and n is a positive integer, then we have s t i|n ≤ c |t| exp( t n−1 k=0 ΛS k ( i|n ) ). We conclude from this and (5.2) that Next, we observe that it follows from [El] that the sequence (P n = P • M −1 n ) n ⊆ P P S (Σ N ) has the large deviation property with respect to the sequence (n) n and rate function I : P S (Σ N ) → R given by I(µ) = log N − h(µ). We therefore conclude from Part (1) of Theorem 5.1 that lim sup Also, since the sequence (P n = P • M −1 n ) n ⊆ P P S (Σ N ) has the large deviation property with respect to the sequence (n) n and rate function I : P S (Σ N ) → R given by I(µ) = log N − h(µ), we conclude from Part (2) of Theorem 5.1 that the sequence (Q n ) n has the large deviation property with respect to the sequence (n) n and rate function (I − F ) − inf ν∈PS (Σ N ) (I(ν) − F (ν)). As the set {U ∈ B(C, r)} = U −1 (B(C, r)) is closed, it therefore follows from the large deviation property that lim sup This completes the proof. We will now use Theorem 5.2 to prove Theorem 5.5 providing a proof of inequality (2.1). However, we first prove two small lemmas. Lemma 5.3. Let X be a metric space and let f, g : X → R be upper semi-continuous functions with f, g ≥ 0. Then f g is upper semi-continuous. Proof Since f and g are upper semi-continuous with f, g ≥ 0, this result follows easily from the definition of upper semi-continuity, and the proof is therefore omitted. Lemma 5.4. Let X be a metric space and let Φ : X → R be an upper semi-continuous function. Let K 1 , K 2 , . . . ⊆ X be non-empty compact subsets of X with K 1 ⊇ K 2 ⊇ . . . . Then Proof First note that it is clear that inf n sup x∈Kn Φ(x) ≥ sup x∈∩nKn Φ(x). We will now prove the reverse inequality, namely, inf n sup x∈Kn Φ(x) ≤ sup x∈∩nKn Φ(x). Let ε > 0. For each n, we can choose x n ∈ K n such that Φ(x n ) ≥ sup x∈Kn Φ(x) − ε. Next, since K n is compact for all n and K 1 ⊇ K 2 ⊇ . . . , we can find a subsequence (x n k ) k and a point x 0 ∈ ∩ n K n such that Finally, letting ε ց 0 gives the desired result. We can now state and prove Theorem 5.5. Theorem 5.5. Let X be a metric space and let U : P(Σ N ) → X be continuous with respect to the weak topology. Let C ⊆ X be a closed subset of X and r > 0. Proof (1) For brevity write We must now prove that if t > u, then Let t > u and write ε = t−u 3 > 0. It follows from the definition of u that if µ ∈ P S (Σ N ) with U µ ∈ B(C, r), then we have − h(µ) where we have used the fact that Λ dµ < 0 because Λ < 0. This implies that if µ ∈ P S (Σ N ) with U µ ∈ B(C, r), then We deduce from this inequality and Theorem 5.2 that lim sup This completes the proof of (1). (2) It follows immediately from Part (1) that (5.10) Also, the function r → sup µ∈PS (Σ N ) , Uµ∈B(C,r) − h(µ) Λ dµ is clearly increasing, and it therefore follows that lim sup Next, since the function U : P(Σ N ) → X is continuous, we conclude that the set U −1 B(C, 1 k ) is closed, and it therefore follows that the set K k = P S (Σ N ) ∩ U −1 B(C, 1 k ) is compact. 
Also, since the entropy function h : P S (Σ N ) → R is upper semi-continuous (see [Wa,Theorem 8.2]) with h ≥ 0 and the function f : continuous) with f ≥ 0, we conclude from Lemma 5.3 that the function Φ : is upper semi-continuous. Lemma 5.4 applied to Φ therefore implies Combining (5.12) and (5.13) gives (5.14) Finally, the desired result follows by combining (5.10), (5.11) and (5.14). Proof of inequality (2.2) The purpose of this section is to prove Theorem 6.6 providing a proof of inequality (2.2). We first state and prove a number of auxiliary results. For i, j ∈ Σ N with with i = j, we will write i ∧ j for the longest common prefix of i and j (i.e. i ∧ j = u where u is the unique element in Σ * for which there are k, l ∈ Σ N with k = k 1 k 2 . . . and l = l 1 l 2 . . . such that k 1 = l 1 , i = uk and j = ul). We will always equip Σ N with the metric d Σ N defined by for i, j ∈ Σ N . In the results below, we will always compute the Hausdorff dimension of a subset of Σ N with respect to the metric d Σ N . Note that when Σ N is equipped with the metric d Σ N , then Lemma 6.1. Let (X, d) be a metric space and let U : P(Σ N ) → X be continuous with respect to the weak topology. Let C be a closed subset of X and r > 0. (1) There is a positive integer M r such that if k ≥ M r , u ∈ Σ k and k, l ∈ Σ N , then (2) There is a positive integer M r such that if m ≥ M r , then Proof (1) For a function f : and define the metric L in P(Σ N ) by we note that it is well-known that L is a metric and that L induces the weak topology. Since U : P(Σ N ) → X is continuous and P(Σ N ) is compact, we conclude that U : P(Σ N ) → X is uniformly continuous. This implies that we can choose δ > 0 such that all measures µ, ν ∈ P(Σ N ) satisfy the following implication: Next, choose a positive integer M r such that 1 M r (1 − s max ) < δ ; (6.4) recall, that s max is defined in (4.1). If k ≥ M r , u ∈ Σ k and k, l ∈ Σ N , then it follows from (6.4) that and we therefore conclude from (6.3) that d( U L k (uk) , U L k (ul) ) ≤ r 2 . (2) It follows from (1) that there is a positive integer M r such that if k ≥ M r , u ∈ Σ k and k, l ∈ Σ N , then d( U L k (uk) , U L k (ul) ) ≤ r 2 . We now claim that if m ≥ M r , then In order to prove this inclusion, we fix m ≥ M r and i ∈ Σ N with U L k i ∈ B(C, r 2 ) for all k ≥ m. We must now prove that U L k [i|k] ⊆ B(C, r) for all k ≥ m. We therefore fix k ≥ m and j ∈ [i|k]. We must now prove that U L k j ∈ B(C, r). For brevity write u = i|k. Since j ∈ [i|k] = [u], we can now find (unique) k, l ∈ Σ N such that i = uk and j = ul. We now have (6.5) However, since k ≥ m ≥ M r and u ∈ Σ k , we conclude that d( U L k (uk) , U L k (ul) ) ≤ r 2 . Also, since k ≥ m, we deduce that U L k i ∈ B(C, r 2 ), whence dist( U L k i , C ) ≤ r 2 . It therefore follows from (6.5) that This completes the proof. Lemma 6.2. Let X be a metric space and let U : P(Σ N ) → X be continuous with respect to the weak topology. Let C ⊆ X be a closed subset of X. Then recall that dim H denotes the Hausdorff dimension. Proof For a subset Ξ of Σ N , we let dim B Ξ denote the lower box dimension of Ξ; the reader is referred to [Fa1] for the definition of the lower box dimension. We will use the fact that dim H Ξ ≤ dim B Ξ for all Ξ ⊆ Σ N , see, for example, [Ed]. We now introduce the following notation. For brevity write Also, for a positive integer m and a positive real number r > 0, write Observe that if M is any positive integer, then we clearly have for all r > 0. 
We also observe that it follows from Lemma 6.1 that for each positive number r > 0 there is a positive integer M r such that for all m ≥ M r . It follows from (6.6) and (6.7) that for all r > 0. Fix a positive integer m. We now prove that ,r) [i] (6.9) for all 0 < δ < s m min and all r > 0. Indeed, fix j ∈ ∆ m (r). Now, let k 0 denote the unique positive integer such that if we write j 0 = j|k 0 , then s j0 ≤ δ < s j0 , i.e. s j0 ≈ δ. Since it follows from Proposition 4.1 that s k0 min = s |j0| min ≤ s j0 ≤ δ < s m min , we conclude that k 0 ≥ m, and the fact that j ∈ ∆ m (r) therefore implies that U L |j0| [j 0 ] = U L k0 [j|k 0 ] ⊆ B(C, r) This shows that j 0 ∈ Π U,Λ δ (C, r), whence j ∈ [j|k 0 ] = [j 0 ] ⊆ ∪ i∈Π U,Λ δ (C,r) [i]. This proves (6.9). Inclusion (6.9) shows that for all 0 < δ < s m min , the family ( for all r > 0. Since (6.10) holds for all m, we conclude that for all r > 0. Combining (6.8) and (6.11) now shows that for all r > 0. Finally, letting r ց 0 in (6.12) completes the proof. In order to statement and prove the next lemma we introduce the following notation. Namely, for a Hölder continuous function ϕ : Σ N → R, we will write P (ϕ) for the topological pressure of ϕ. We can now state and prove Lemma 6.3. Lemma 6.3. Let µ ∈ P S (Σ N ) with supp µ = Σ N . (Here supp µ denotes the topological support of µ.) Then there exists a sequence (µ n ) n of probability measures on Σ N satisfying the following three conditions. (2) For each n, the measure µ n is ergodic. ( Proof Fix a positive integer n. Since supp µ = Σ N , we deduce that µ[i] > 0 for all i ∈ Σ * . Hence, for m ∈ N and i 1 . . . i m ∈ Σ m , we can define p n,i1...im by for n < m. (6.13) Since clearly i p n,i = 1 and i p n,i1...imi = p n,i1...im for all m and all i 1 . . . i m ∈ Σ m , there exists a (unique) probability measure µ n on Σ N such that for all m and all i 1 . . . i m ∈ Σ m (cf. [Wa,p. 5]). Claim 1. We have µ n → µ weakly. Proof of Claim 1. It follows from definition (6.13) that µ n [i] = µ[i] for all i ∈ Σ n . This clearly implies that µ n → µ weakly. This completes the proof of Claim 1. Claim 2. For each n, there is a Hölder continuous function ϕ n : Σ N → R such that the following conditions hold. (1) P (ϕ n ) = 0 , (2) The measure µ n is a Gibbs state of ϕ n . Proof of Claim 2. We first note that µ n is shift invariant. Indeed, since µ is shift invariant, a small calculation shows that i µ n [ii] = µ n [i] for all i ∈ Σ * . This implies that µ n (S −1 [i]) = µ n [i] for all i ∈ Σ * , whence µ n (S −1 B) = µ n (B) for all Borel sets B. Next we show that µ n is a Gibbs state for a Hölder continuous function. Define ϕ n : Σ N → R by The map ϕ n is clearly Hölder continuous, and it follows from the definition of µ n that for all i ∈ Σ N and all m > n. This shows that µ n is the Gibbs state of ϕ n , and that the pressure P (ϕ n ) of ϕ n equals 0, i.e. P (ϕ n ) = 0; cf. [Bo]. This completes the proof of Claim 2. Claim 3. For each n, the measure µ n is ergodic. Proof of Claim 3. It follows from Claim 2 that µ n is the a Gibbs state of a Hölder continuous function. This implies that µ n is ergodic. This completes the proof of Claim 3. Claim 4. We have h(µ n ) → h(µ). Proof of Claim 4. For measurable partitions A, B of Σ, let h(µ; A) and h(µ; A|B) denote the entropy of A with respect to µ, and the conditional entropy of A given B with respect to µ, respectively. This completes the proof of Claim 4. The proof now follows from Claim 1, Claim 3 and Claim 4. 
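One standard way to realise the approximating measures $\mu_n$ of Lemma 6.3, offered here only as an illustration of the construction (the lemma itself fixes the precise formula (6.13)), is the $n$-step Markov approximation of $\mu$:

\[
\mu_{n}[i_{1}\ldots i_{m}] \;=\;
\begin{cases}
\mu[i_{1}\ldots i_{m}], & m \le n,\\[4pt]
\mu[i_{1}\ldots i_{n}] \displaystyle\prod_{j=1}^{m-n} \frac{\mu[i_{j+1}\ldots i_{j+n}]}{\mu[i_{j+1}\ldots i_{j+n-1}]}, & m > n.
\end{cases}
\]
A measure of this form agrees with $\mu$ on cylinders of length $n$, is shift invariant, and is the Gibbs state of the locally constant (hence Hölder continuous) potential $\varphi_{n}(\mathbf{i}) = \log\bigl(\mu[i_{1}\ldots i_{n}]/\mu[i_{1}\ldots i_{n-1}]\bigr)$ with $P(\varphi_{n}) = 0$, which are exactly the properties used in Claims 1-3 above.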
The next auxiliary result provides a formula for the upper Hausdorff dimension of is a probability measure. If µ is a probability measure on Σ N , we define the upper Hausdorff dimension of µ by (Recall that dim H denotes the Hausdorff dimension.) The next result provides a formula for the upper Hausdorff dimension of an ergodic probability measure on Σ N . This result is folklore and follows from the Shannon-MacMillan-Breiman theorem and the ergodic theorem. However, for sake of completeness we have decided to include the short proof. Proposition 6.4. Let µ be an ergodic probability measure on Σ N . Proof Since µ is ergodic, it follows from the Shannon-MacMillan-Breiman theorem that Next, for each i ∈ Σ N and r > 0, let n i,r denote the unique integer such that s i|n i,r < r ≤ s i|n i,r . It follows from the definition of the metric d Σ N on Σ N (see (6.1) and (6.2)) that B(i, r) = [i|n i,r ]. Also, if we let c denote the constant from Condition (C3) in Section 2.1, then it follows from Proposition 4.1 that s i|n i,r < r ≤ s i|n i,r ≤ c smin s i|n i,r . Combining these facts, we now deduce from (6.18) that where µ-ess sup denotes the µ essential supremum. Finally, we note that it is well-known that dim H µ = µ-ess sup i lim inf rց0 log µ(B(i,r)) log r (see, for example, [Fa2]), and it therefore follows immediately from (6.19) that dim H µ = µ-ess sup i lim inf rց0 The final auxiliary result says that the map C → f U,Λ (C) is upper semi-continuous. In order to state this result we introduce the following notation. For a metric space X, we write F (X) = F ⊆ X F is closed and non-empty (6.20) and we equip F (X) with the Hausdorff metric D; recall, that since X may be unbounded, the Hausdorff distance D is defined as follows, namely, for E, F ∈ F (X), write ∆(E, F ) = min sup x∈E dist(x, F ) , sup y∈F dist(y, E) (6.21) and define D by D = min(1, ∆) . (6.22) Lemma 6.5. Let X be a metric space and let U : P(Σ N ) → X be continuous with respect to the weak topology. Equip F (X) with the Hausdorff metric D. Then the function f U,Λ : F (X) → R is upper semicontinuous, i.e. for each C ∈ F (X) and each ε > 0, there exists a real number ρ > 0 such that if F ∈ F (X) and D(F, C) < ρ, then We now claim that if F ∈ F (X) and D(F, C) < ρ, then To prove this, let F ∈ F (X) with D(F, C) < ρ. It follows from Claim 1 and (6.23) that if 0 < r < ρ, then Since this inequality holds for all 0 < r < ρ, we finally conclude that f U, We can now state and prove the main result in this section, namely, Theorem 6.6 providing a proof of inequality (2.2). Theorem 6.6. Let X be a metric space and let U : P(Σ N ) → X be continuous with respect to the weak topology. Let C ⊆ X be a closed subset of X. We have Let ε > 0. Next, fix µ ∈ P S (Σ N ) with U µ ∈ C. We will now prove that Let F (X) be denied as in (6.20), i.e. F (X) = {F ⊆ X | F is closed and non-empty}, and and equip F (X) with the Hausdorff metric D, see (6.21) and (6.22). It follows from Lemma 6.5 that the function f U,Λ : F (X) → R is upper semi-continuous, and we can therefore choose ρ ε > 0 such that: if F ∈ F (X) and D(F, C) < ρ ε , then (6.26) Next, observe that we can choose an S-invariant probability measure γ on Σ N such that supp γ = Σ N . For t ∈ (0, 1), we now write µ t = (1 − t)µ + tγ ∈ P S (Σ N ). As U is continuous with U µ ∈ C and µ t → µ weakly as t ց 0, there exists 0 < t ε < 1 such that for all 0 < t < t ε , we have dist(U µ t , C) < ρ ε . Claim 1. For all 0 < t < t ε , we have Proof of Claim 1. 
Using the fact that the entropy map h : P S (Σ) → R is affine (cf. [Wa]) we conclude that However, since Λ is continuous and µ t,n → µ t weakly (by (6.28)), we conclude that Λ dµ t,n → Λ dµ t . We deduce from this and the fact that h(µ t,n ) → h(µ t ) (by (6.30)) that (6.34) Combining (6.33) and (6.34) now yields Also, since µ t,n is ergodic (by (6.29)), it follows from Proposition 6.4 that dim H µ t,n = − h(µt,n) log N , and we therefore conclude from (6.35) that This completes the proof of Claim 1. Proof of Claim 2. It follows immediately from the ergodicity of µ t,n and the ergodic theorem that µ t,n ({i ∈ Σ N | lim m L m i = µ t,n }) = 1. Hence This completes the proof of Claim 2. Since µ ∈ P S (X) with U µ ∈ C was arbitrary, it follows immediately from (6.25) that Finally, letting ε ց 0 gives the desired result. Proof of inequality (2.3) The purpose of this section is to prove Theorem 7.1 providing a proof of inequality (2.3). Theorem 7.1. Let X be a metric space and let U : P(Σ N ) → X be continuous with respect to the weak topology. Let C ⊆ X be a closed subset of X and r > 0. Proof of Claim 1. Indeed, if i = i 1 . . . i m ∈ Σ m with s i ≈ ρ n , then s i ≤ ρ n < sî, whence s i ≤ ρ n . It also follows from Proposition 4.1 that s i = sî im ≥ 1 c sîs im > 1 c ρ n s min = smin cρ ρ n+1 ≥ ρ n+1 where the last inequality is due to the fact that smin cρ ≥ 1 because ρ < min( smin c , δ ε ) ≤ smin c . This completes the proof of Claim 1. Also, for n ∈ N and i ∈ Σ * , the following implication follows from Claim 1: We conclude immediately from (7.3) that However, if i ∈ Π U s,ρ n (C, r), then s i ≈ ρ n , and it therefore follows from Claim 1 that ρ n+1 < s i ≤ ρ n , whence s i ≥ ρ nt ρ |t| . We conclude from this and (7.5) that Finally, since ρ n ≤ ρ < min( smin c , δ ε ) ≤ δ ε , we deduce from (7.1) that ρ −nt = (ρ n ) −t ≤ N U s,ρ n (C, r). This and (7.6) now implies that This completes the proof of Claim 2. Proof Let y ∈ B I(C, ε) , r . We must now prove that y ∈ C. Assume, in order to reach a contradiction, that y ∈ C. Since I(C, ε) is a closed, it follows that we can find x ∈ I(C, ε) such that |y − x| = dist y , I(C, ε) . Also, since x ∈ I(C, ε) ⊆ C and y ∈ C, it follows from Lemma 8.2 that there is v ∈ [[x, y]] ∩ ∂C. We now conclude that r ≥ dist y , I(C, ε) [since y ∈ B I(C, ε) , r ] = |y − x| ≥ ε . Proof of Theorem 2.2 We first note that it follows from Theorem 2.1 that Hence it suffices to prove that σ ab ζ U,Λ C = f U,Λ (C) . Proof of Theorem 2.3 The purpose of this section is to prove Theorem 2.3. Proof of Theorem 2.3 For brevity write G = {s ∈ C | Re(s) > σ ab ( ζ U,Λ C )}. Since sup |i|=n 1 log s i → 0 as n → ∞ (because sup |i|=n s i → 0 as n → ∞), we conclude that the series Z It follows from the calculations involved in establishing (9.1) that the product Q U,Λ C (s) converges and that Q U,Λ C (s) = 0 for all s ∈ G. In addition, we deduce from (9.1) that for all s ∈ G, we have
Tejaas: reverse regression increases power for detecting trans-eQTLs

Trans-acting expression quantitative trait loci (trans-eQTLs) account for ≥70% expression heritability and could therefore facilitate uncovering mechanisms underlying the origination of complex diseases. Identifying trans-eQTLs is challenging because of small effect sizes, tissue specificity, and a severe multiple-testing burden. Tejaas predicts trans-eQTLs by performing L2-regularized “reverse” multiple regression of each SNP on all genes, aggregating evidence from many small trans-effects while being unaffected by the strong expression correlations. Combined with a novel unsupervised k-nearest neighbor method to remove confounders, Tejaas predicts 18851 unique trans-eQTLs across 49 tissues from GTEx. They are enriched in open chromatin, enhancers, and other regulatory regions. Many overlap with disease-associated SNPs, pointing to tissue-specific transcriptional regulation mechanisms.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13059-021-02361-8.

Contents, List of Tables: S1, Summary of number of trans-eQTLs before and after cross-mappability filter; S2, GWAS enrichment of trans-eQTLs for different tissues and disease categories.

Forward regression

A trans-eQTL is generally expected to influence the expression levels of tens to hundreds of genes, and we take advantage of this signature to increase the sensitivity to detect them. Brynedal et al. proposed a method called cross-phenotype meta-analysis (CPMA) to analyze the p-value distribution of pairwise associations of a candidate SNP with all measured genes [1]. We follow the same idea but use a different statistic for forward regression (FR) to find the trans-eQTLs.

Notation

We use bold capital letters for matrices. X is the I × N matrix of genotypes for I SNPs and N samples. Y is the G × N matrix of gene expression levels for G genes and N samples. For the FR analysis, both X and Y are centered and normalized. We use bold small letters for vectors. The rows of X and Y are denoted as x_i and y_g respectively, for every SNP i ∈ {1, . . . , I} and for every gene g ∈ {1, . . . , G}. The columns of X and Y are denoted as x_n and y_n respectively, for every sample n ∈ {1, . . . , N}. Both x_i and y_g are vectors of size N, while x_n and y_n are of sizes I and G respectively.

FR-score

For each SNP x_i, we calculated the p-values of association with y_g for all the g ∈ {1, . . . , G} genes independently. Under the null hypothesis that the SNP is not a trans-eQTL, these p-values will be independent and identically distributed (iid) with a uniform probability density function on (0, 1). If SNP i is a trans-eQTL, then more genes than expected by chance will have low p-values for association with the SNP, leading to a higher density of p-values near zero. We defined the FR-score (q_fwd) as a statistic that estimates the difference between the observed p-value distribution from the data and the uniform distribution expected by chance. We sort the p-values in increasing order; the j-th smallest value is called the j-th order statistic and is denoted as p_(j). If p is uniformly distributed (p ∈ (0, 1)), then p_(j) will be a Beta-distributed random variable, p_(j) ∼ Beta(j, G + 1 − j). For any random variable Z ∼ Beta(α, β), E[ln Z] = ψ(α) − ψ(α + β), where ψ denotes the digamma function. Using the identities in Eq.
3 and noting that = and = + 1 − , we obtained the expectation of ln ( ) as, If the candidate SNP is a trans-eQTL and there is an enrichment of p-values near zero, then the observed values of ln ( ) will be lower than that expected by chance when is small. Therefore, the cumulative sum of E ln ( ) − ln ( ) over will increase monotonically, pass through a maximum and then decrease to an asymptotic value of zero. Hence, we de ned the FR-score as, Intuitively, if the candidate SNP is not a trans-eQTL, =1 E ln ( ) − ln ( ) will uctuate around zero. Hence, q fwd will remain close to 0 for these SNPs. However, a trans-eQTL will be associated with many genes and the FR-score will be high (q fwd 0) because there will be many genes with a lower p-value than expected by chance. Of course, there will be other genes which are not associated with the trans-eQTL and the p-values for association with these genes will not contribute to the q fwd . Therefore, it would be su cient to calculate the q fwd from the rst genes with lowest p-values, instead of all genes: . Null model and p-values for FR-score We need to de ne a null distribution to evaluate the signi cance of any observed q fwd . Let q null fwd be an iid sample from the null distribution. Since it is analytically intractable to de ne a probability density of q null fwd , we de ne an empirical null distribution. The empirical cumulative distribution function (ECDF) of q null fwd can be used to ascertain signi cance of any observed q fwd to obtain a p-value, denoted as q fwd , One possible way to obtain a null model is by calculating q null fwd for a large number of null SNPs assuming they are not trans-eQTLs. Following our hypothesis in Eq. (1), the p-values for association of each of these null SNPs with the genes can be sampled each time from Unif (0, 1) distribution. However, the iid assumption of p-values breaks down in real data because the gene expressions are strongly correlated. Therefore, the p-values for association between the SNP and the genes are strongly correlated and cannot be sampled from the Unif (0, 1) distribution. To account for the correlation, we randomized the sample labels of X by permuting the columns of the real genotype matrix -thereby removing any association with the gene expression Y but retaining the correlation between the gene expression levels. We then calculated q null fwd for all SNPs using the 'permuted' genotype and the gene expression. We used this empirical null distribution to calculate q fwd from Eq. 7 . The estimate of q fwd gets increasingly noisy with higher values of q fwd (low p-values) because of lack of sampling points in that region. In that regime it is better to rely on a parametric t of the exponentially falling part of the cumulative distribution. We sorted the q null fwd in an increasing order. We set the limit beyond which we will use the exponential extrapolation at top = min{100, 0.1 }, where is the total number of SNPs being used for the calibration of the null model. This ensures that the extrapolation is applied at most to the top 10% of points, and, if more data are available, only to the top 100. Let us denote the q null fwd at this cut-o point as . For any observed q fwd ≥ , we use a maximum-likelihood estimate based on the top data points and is given by, where null jpa, is the th value in the ordered sequence of q null fwd . Note that for q fwd = this equation yields = top / , the same as the empirical value. . 
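As a concrete illustration of the FR-score, the following minimal sketch (not the Tejaas implementation; function and variable names are ours) computes q_fwd for one SNP from its per-gene association p-values, using the Beta order-statistic expectation E[ln p_(j)] = ψ(j) − ψ(G + 1) and taking the maximum of the cumulative sum, optionally restricted to the smallest p-values as in Eq. (6).

```python
import numpy as np
from scipy.special import digamma

def fr_score(pvals, n_top=None):
    """FR-score of one SNP from its per-gene association p-values.

    pvals : p-values, one per gene.
    n_top : optionally restrict the cumulative sum to the n_top smallest p-values.
    """
    p = np.sort(np.asarray(pvals))            # order statistics p_(1) <= ... <= p_(G)
    G = p.size
    j = np.arange(1, G + 1)
    # E[ln p_(j)] for p_(j) ~ Beta(j, G + 1 - j) equals psi(j) - psi(G + 1)
    expected_logp = digamma(j) - digamma(G + 1)
    terms = expected_logp - np.log(p)          # positive when p_(j) is smaller than expected
    if n_top is not None:
        terms = terms[:n_top]
    return float(np.cumsum(terms).max())       # the cumulative sum passes through a maximum

# toy usage: a SNP whose 50 "target genes" have deflated p-values
rng = np.random.default_rng(0)
null_p = rng.uniform(size=15000)
signal_p = rng.uniform(size=15000)
signal_p[:50] = rng.uniform(0.0, 1e-4, size=50)
print(fr_score(null_p), fr_score(signal_p))    # the second score is much larger
```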
Comparison with CPMA Brynedal et al.used a similar idea for nding trans-eQTLs by testing each SNP for enrichment of association − ln( ) values across all genes with a null hypothesis that − ln( ) should be exponentially distributed, with a decay parameter = 1. Under the joint alternative hypothesis, a subset of association statistics are non null and ≠ 1. They compared the evidence for these hypotheses as a likelihood ratio test for CPMA, with the statistic CPMA de ned as, whereˆ is the observed exponential parameter from the data. They accounted for the extensive correlation among the gene expression levels by testing the signi cance empirically using a simulated gene expression matrix with the same covariance as the real gene expression. To de ne high-con dence trans-eQTLs, they combined empirical CPMA statistic from multiple data sets using sample size weighted meta-analysis. FR measures the signi cance of the enrichment in the p-value distribution near zero, and CPMA measures the signi cance of the enrichment in the − ln( ) distribution. Theoretically, both should give similar ranking of SNPs for being trans-eQTLs. Since there are no currently available software for CPMA, we implemented FR-score in Tejaas and compared the performance with RR-score. Reverse regression In this section, we discuss the reverse regression (RR) model and introduce the RR-score, denoted as q rev for nding the trans-eQTLs. We also describe a null model and explain how to obtain signi cance p-values for the q rev of a candidate SNP, denoted as q rev . We also discuss the numerical implementation of RR-score in Tejaas and several other options used in Tejaas. . Notation We use the same notations from Sec. 1.1. The genotype vector x for every th SNP is centered but not normalized to allow for null model calculation. Y is centered and normalized. . Model description We model x with a univariate normal distribution whose mean depends linearly on the gene expression through a column vector of regression coe cients (∈ R ), where the variance of the th SNP is given as 2 . For the rest of the manuscript, we drop the subscript for ease of reading although we note that all calculations are done for the th SNP. The log likelihood of this regression task is where is the minor allele count of the th SNP in th sample. The number of samples will usually be on the order of a hundred to a few thousand, much smaller than the number of explanatory variables ≈ 20 000. Therefore, simple maximization of the likelihood would lead to a dramatically overtrained which would perfectly predict x on the training data but which would achieve very bad performance on unseen data. The solution is to de ne a prior on . We assume that every th element ( ∈ {1, . . . , }) of is sampled from a normal distribution with variance 2 , ∼ N | 0, 2 (13) and maximize the log posterior probability, . Bayesian model comparison Let H 1 be the trans-eQTL model which allows ≠ 0 and H 0 be the null model for which = 0. According to Bayes' theorem, The probability for the model H 1 is a monotonically increasing function of the likelihood ratio, where the second line is obtained by using the model de ned in Eq. 11 and the prior for de ned in Eq. 13. We then de ned := YY T + 2 / 2 I and used the technique of quadratic complementation to obtain, Using Eqs. 
15 and 17, we obtained the probability of the trans-eQTL model, Thus, it is possible to obtain the probability of each SNP being a trans-eQTL but the calculation is computationally expensive and requires the prior probabilities (H 0 ) and (H 1 ). These prior probabilities can be set to a constant. . RR-score: Definition Motivated by the probability obtained in Eq. 18, we de ned our test statistic RR-score, denoted q rev , as follows: Suppose we have obtained a score q rev for the th SNP and we would like to know how signi cant this score is. In order to rank it with respect to all the other SNPs in the genome, we need to calculate a null distribution of q null rev and from it the p-value of our observed score q rev . Note that the statistic q rev is not proportional to the probability given by Eq. 18 and hence cannot be directly used for ranking the SNPs. However, in Sec. 2.6, we derive a p-value to ascertain the signi cance of the q rev of each SNP. . Numerical calculation The RR-score de ned in Eq. (19) involves computing the inverse of a × matrix W. To reduce the complexity, we rst perform a singular value decomposition of Y T , with two orthogonal matrices, U ∈ R × and V ∈ R × , and a diagonal matrix S ∈ R × that whose all elements are zero except for the = min{ , } singular values on its diagonal. Substituting the SVD into W gives This allows us to expand the matrix W using the singular values and the eigenvectors u , Therefore, we can write the RR-score as .6 Null model To obtain a p-value for the q rev score of SNP with column x in the genotype matrix X, we need a null distribution of q rev scores. The p-value is then the probability mass of the null distribution with values higher than the actually obtained q rev score. We can obtain such null distribution by permuting the genotype entries in x while keeping the rows of the gene expression matrix Y unpermuted. The distribution of the resulting q null rev scores can di er between SNPs depending on their minor allele frequency (MAF) and the variance of the genotype ( 2 ). In principle we could derive the distribution of q null rev empirically by permuting the elements of x a large number of times. However, to obtain p-values of 5 × 10 −8 or below, corresponding to genome-wide signi cance, we would need to draw at least 2 × 10 7 permuted samples and compute their q rev scores. This would be too time consuming. Instead, we derive in Appendix 1 analytical expressions for the expectation value := q null rev and variance 2 := Var q null rev of q null rev = x T Wx under the permutation null model for any symmetric matrix W and any centered vector x. In Fig. S1a and b, we show that our analytical calculation of and match those obtained from the empirical permutation of x. With this empirical permutation, we could verify a p-value up to 1 × 10 −12 is accurately approximated by our normal approximation. In our subsequent analyses of GTEx, we found that 2781 association tests (out of 49 × 8048655 association tests) were beyond the empirically veri ed range. Normal approximation. Under the null model assumption that the SNP is not a trans-eQTL, the u T x are distributed as the projections of a unit vector with random direction onto Cartesian coordinates given by the eigenvectors u . The u T x will be normally distributed and u T x 2 will have a chi-square distribution. In the limit of 1, the sum over 2 / 2 + 2 / 2 u T x 2 will have a normal distribution according to the central limit theorem. Therefore, we approximate q null rev by N , 2 . 
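The following sketch illustrates one reading of the RR-score computation via the SVD of Y^T (Eqs. (21)–(24)); the component weights s_k²/(s_k² + σ²/γ²) follow from the expansion above, but the analytic null moments of Appendix 1 are not reproduced here, so the p-value below falls back to an explicit permutation null with a normal approximation. All names are ours, not the Tejaas API, and the exact normalization used in Tejaas may differ.

```python
import numpy as np
from scipy.stats import norm

def rr_components(Y):
    """Left singular vectors and singular values of Y^T (Eq. 21).
    Y: genes x samples, rows centered and normalized."""
    U, s, _ = np.linalg.svd(Y.T, full_matrices=False)   # Y^T = U S V^T
    return U, s

def rr_score(x, U, s, gamma):
    """One reading of the RR-score (Eqs. 22-24): shrink each squared projection
    (u_k^T x)^2 by s_k^2 / (s_k^2 + sigma^2/gamma^2), with sigma^2 = Var(x)."""
    sigma2 = x.var()
    proj = U.T @ x                                       # u_k^T x for every component
    weights = s**2 / (s**2 + sigma2 / gamma**2)
    return float(np.sum(weights * proj**2))

def rr_pvalue(x, Y, gamma, nperm=2000, seed=0):
    """Permutation null with a normal approximation (Sec. 2.6); Tejaas instead uses
    analytic expressions for the null mean and variance (Appendix 1)."""
    rng = np.random.default_rng(seed)
    U, s = rr_components(Y)
    obs = rr_score(x, U, s, gamma)
    null = np.array([rr_score(rng.permutation(x), U, s, gamma) for _ in range(nperm)])
    return float(norm.sf((obs - null.mean()) / null.std()))
```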
In Fig. S1c, we show the QQ plot which numerically validates the approximation. Finally, the p-value of q_rev for a candidate SNP is p_rev = 1 − Φ((q_rev − μ)/σ_q), where Φ(z) denotes the cumulative normal distribution for a random variable z.

Constraint on covariate correction. Generally, the gene expression matrix Y should have rank(Y) = min{G, N}. This means that the SVD in Eq. (21) should have that many non-zero singular values. To obtain q_rev, the sum in Eq. (24) runs over all of these components of (u_k^T x)². If known covariates are now corrected from Y using linear regression (see CCLM below, Sec. 3.1), then one singular value per corrected covariate becomes zero for the residual expression Y′. This is equivalent to subtracting the corresponding components from the sum in Eq. (24). When the subtracted part is not normally distributed, as would happen if the number of subtracted components is too small for the central limit theorem to be valid, then q_rev^null is not normally distributed and the p-values are not accurate. If that number is large, then the subtracted part, and consequently q_rev^null, remains normally distributed. In practical situations, only a few known covariates are used to correct the gene expression, and in this limit of few corrected covariates the null model is not Gaussian. In Fig. S2, we show that q_rev^null is normally distributed for a given SNP and a gene expression matrix with N = 581 (from the adipose subcutaneous tissue of GTEx). Setting the first seven singular values to zero (equivalent to correcting seven covariates) breaks the normal distribution of q_rev^null. However, if 200 singular values are set to zero, then the assumption of a normal distribution of q_rev^null becomes valid again. This illustrates the problem of using CCLM with few confounders in combination with Tejaas.

[Caption of Fig. S1(c): QQ plot of the empirical null distribution of q_rev for four representative SNPs with different minor allele frequencies, showing theoretical quantiles of the standard normal distribution against observed quantiles of (q_rev − μ)/σ_q; gene expression from the adipose subcutaneous tissue of GTEx was used for all plots in the figure.]

Comparison of RR-score with the sequence kernel association test (SKAT). The aggregation of weak signals from many covariates in the RR-score of Tejaas is reminiscent of methods developed for finding rare genetic variants associated with a trait that aggregate the effects of many rare variants in an entire locus or gene, such as the burden test [2] or the sequence kernel association test (SKAT) [3,4]. Their test statistics take a similar form to that of Tejaas: all three can be written as a quadratic form x^T Y^T R Y x with a kernel matrix R, where R is given by Eq. (20) for Tejaas, R = 11^T for the burden test and R = I for SKAT — or, in the weighted version with diagonal weight matrix Π = diag(π_1, . . . , π_G), R = Π 11^T Π and R = Π², respectively (where 1^T := (1, . . . , 1)). For γ → 0, the RR-score of Tejaas tends towards the unweighted SKAT statistic. However, burden and SKAT are classical hypothesis tests, testing whether to reject the null hypothesis β = 0. In contrast, Tejaas uses Bayesian model comparison, comparing the null model β = 0 with the alternative trans-eQTL model β ≠ 0 (Eq. (15)) while integrating out the unknown effect strengths (Eq. (17)). Since the trans-eQTL model includes the expected scale of effect sizes via its prior P(β_g) = N(β_g | 0, γ²), the test statistic of Tejaas depends on this scale. As can be seen in Eq.
(23), the multiplication with R leads to a saturation of the e ects of principle components of Y with singular values larger than / . Furthermore, whereas the burden test and SKAT commonly assume the response variable under the null hypothesis to be identically and independently normally distributed, this assumption would be unsuitable for centered minor allele frequencies . Instead, Tejaas assumes that patient indices are randomly permuted under the null model (Sec. 2.6). Whereas the SKAT statistic was devised because it was deemed useful to distinguish positives from negatives while being easy to compute, we used the Bayesian approach to derive a test statistic that should be reasonably good in distinguishing positives from negatives. Indeed, despite the similarity with the SKAT statistic, the equivalent TEJAAS kernel contains a correction R for the correlation between covariates, which does not appear in SKAT and related methods. This correction has its origin in the non-null model, for which regression coe cients deviating substantially from 0 are considered. Once derived, we strove to make the TEJAAS statistic fast to compute by means of analytically calculating the rst and second moments (Appendix 1). Its time complexity is dominated by the inversion of the matrix 2 / 2 I + Y T Y, which takes ( min{ , }), while p-values are computed in ( 2 ) (Appendix 1). In comparison, SKAT scores are computed in ( ), while the computation of the p-values require the inversion of an × matrix, taking time ( 3 ). .8 Choice of the regularizer The standard deviation of the normal prior is not learned from the data, but is chosen empirically. It is important to choose such that it avoids over tting and also restricting the regression coe cients too much.. In the limit of large , the normal approximation to q rev will break down. To see this, we rst observe that from Eq. (24) Therefore, as 2 / 2 becomes smaller than most squared singular values { 2 : 1 ≤ ≤ }, q rev will approach 1. The distribution of q rev under the permutation null model will therefore become extremely skewed with an abrupt drop towards 1. Such a distribution will not be approximated well by a Gaussian, but luckily this situation corresponds to the irrelevant regime of extreme over tting. This can be seen by noting that the prediction of x at the maximum likelihood solution which means that the regression results in a perfect prediction on the training samples, in other words, it results in complete over tting. At the other extreme, when gets so small that 2 / 2 max{ 2 : 1 ≤ ≤ }, we can approximate where is the × sample covariance matrix. Since our gene expression matrix Y is normalized, the diagonal elements of should be equal to 1, whereas the o -diagonal elements will be more or less randomly distributed around 0. Intuitively, we expect that in this regime q rev will be will be too restrictive for the model and lead to false signals even with chance correlations of a single gene with a genotype. Non-Gaussian parameter. From Eq. (25), the q null rev should have a normal distribution and the standardized q null rev should have a standard normal distribution, The kurtosis of a standard normal distribution is 3. We use this property to de ne a non-Gaussian parameter ( ) to measure the deviation of the distribution from a standard normal distribution, where ( ; ) = q null rev − / . Given a gene expression matrix, we simulated genotypes for 5000 SNPs and calculated ( ) for di erent values of . 
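A minimal sketch of the non-Gaussian parameter K(γ) = |kurtosis − 3| of the standardized null scores, evaluated on a grid of γ values with simulated null genotypes as described above. It reuses the hypothetical rr_score and rr_components helpers from the earlier sketch, and the MAF range of the simulated genotypes is our choice.

```python
import numpy as np
from scipy.stats import kurtosis

def non_gaussian_parameter(null_scores):
    """K = |kurtosis - 3| of the standardized null RR-scores; close to 0 when the
    permutation null is well approximated by a Gaussian."""
    z = (null_scores - null_scores.mean()) / null_scores.std()
    return abs(kurtosis(z, fisher=True))           # fisher=True returns kurtosis - 3

def scan_gamma(Y, gammas, n_null_snps=5000, maf_range=(0.05, 0.5), seed=0):
    """Evaluate K(gamma) on a grid using simulated null genotypes; the text
    recommends gamma_opt >= argmin K(gamma), preferring a broad null distribution."""
    rng = np.random.default_rng(seed)
    N = Y.shape[1]
    U, s = rr_components(Y)                        # SVD does not depend on gamma
    genotypes = []
    for _ in range(n_null_snps):
        maf = rng.uniform(*maf_range)
        x = rng.binomial(2, maf, size=N).astype(float)
        genotypes.append(x - x.mean())             # centered, not normalized (Sec. 2.1)
    return {g: non_gaussian_parameter(
                np.array([rr_score(x, U, s, g) for x in genotypes]))
            for g in gammas}
```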
We recommend choosing opt such that opt ≥ arg min ( ). Ideally, we also want a high value for the standard deviation of to ensure a broad distribution of q null rev . . Target genes There are two separate tasks in the analysis of trans-eQTLs: (i) identifying the trans-eQTL SNPs, and (ii) identifying their target genes. However, the second task is subordinate in the sense that if trans eQTLs cannot be predicted correctly there is no use in predicting target genes. Conversely, if the trans-eQTLs can be predicted correctly then some complementary method can be used for predicting their target genes. Reverse Regression o ers evidence for the presence of a trans-eQTL without identifying which genes are targeted. Hence, we do not get the set of target genes from Reverse Regression. In fact, multiple regression with 2 penalty (ridge regression) is not optimal for variable selection. Therefore, in Tejaas, we included a single SNP-gene pairwise regression to nd a set of candidate genes as targets for each predicted trans-eQTL. Under the hood, our software performs the following tasks: • perform reverse regression to rank all trans-eQTL SNPs and select the top trans-eQTLs below a cuto (default 0.001), can be speci ed by --psnpthres. • perform single SNP-gene linear regression for all preselected trans-eQTL SNPs and estimate the Benjamini-Hochberg (BH) false discovery rate for the association of all genes with the candidate trans-eQTL SNP. The output of Tejaas contains both the trans-eQTL SNPs ranked by p-values and all SNP-gene pairs below a p-value cuto ( --pgenethres, default 0.01) with their corresponding BH adjusted p-values at 50% FDR (--fdrgenethres). As discussed below (Sec. 3), reverse regression works better with KNN correction while pairwise regression (for target gene discovery) works better with conventional covariate-corrected gene expression. To get the best of both worlds, our software accepts two separate input options for gene expression les: (1) --gx to provide the raw gene expression le for KNN correction and trans-eQTL discovery, and (2) --gxcorr to provide the covariate-corrected gene expression le for single SNP-gene pair regression to discover target genes. . Cis-masking In Tejaas, we are interested to nd long-range SNP-gene interactions. However, the strong e ect size of the cis-eQTLs can sometimes lead to high q rev . To avoid wrongly identifying cis-eQTLs as trans-eQTLs, we introduced an option in our software to exclude all genes in the vicinity of each SNP. We call this procedure 'cis-masking'. It can be invoked with the ag --cismask. The width of the vicinity can be speci ed using the option --window. To implement cis-masking, we calculated the SVD of the expression matrix (see Eq. (21)) after removing (masking out) the genes that occur within ±1Mb (or any distance speci ed by --window) from the SNP position. Doing this for every SNP would be computationally expensive as it would involve ∼ 10 7 matrix inversions (once for each SVD). But since many SNPs generally have identical genes in the vicinity, we group them and calculate the SVD once for each group, which brings down the number of SVD calculations to ∼ 10 4 . . Computational requirements To determine run times and memory requirements of Tejaas, we used the adipose subcutaneous tissue expression from GTEx, which contains a total of 15673 genes for 581 samples. We used the GTEx genotype for chromosome 1. 
All tests were run on a node with 2× Intel Xeon CPU E5-2640 v3 @ 2.60GHz with 8 physical cores each and 128Gb of RAM. We subsampled the expression matrix as well as the number of SNPs used to run Tejaas to evaluate di erent possible scenarios. The most expensive step for Tejaas is the SVD decomposition of the gene expression matrix. The number of times the SVD decomposition is performed depends directly on the number of SNPs selected. While using --cis-masking (Sec. 2.10), di erent SNPs will have di erent cis-genes that are required to be masked out and hence, a new SVD decomposition is required. Run times increase with the number of SNPs (Fig. S3a, left panel), since the number of cismasks needed (top x-axis) is proportional to the number of SNPs (bottom x-axis). Memory usage increases with the number of SNPs and samples, since the dosage matrix is being held in memory. We implemented Tejaas with an option to run Tejaas on a multicore server using MPI parallelization. Fig. S3b shows how run times decrease with the number of cores used. Total memory usage, as expected, increases with the number of cores. In the designed parallelization scheme, each core will compute the SVD decomposition for a given cismask. With an increase in the number of cores, the expression matrix needs to be copied to each of them for processing and hence the memory usage scales proportionally. Confounder correction Gene expression measurements are notorious for being dominated by strong confounding e ects that can arise from technical details of RNA recovery, conservation, and sequencing or from environmental and biological factors such as age, gender, diseases, nutrition and lifestyle, drug regimes etc. The subtle e ects of trans-eQTLs are at risk of being drowned out by the strong systematic noise from confounding e ects. In this section, we rst discuss the standard confounder correction method for eQTL analyses. Then, we introduce our K-nearest neighbor correction that is specially suitable for use with Tejaas. . Residuals from multiple linear regression Given a matrix C ∈ R × of covariates and samples, a multiple linear regression is performed for the th gene expression y , where ∈ R is a vector of e ect sizes of the covariates for gene . The estimate of the e ect sizes from the multiple linear regression is then used to calculate the expression residuals y = y − C which are linearly uncorrelated with the covariates. The residual y is used as the corrected gene expression for downstream analyses. In this manuscript, we denote this correction as 'CCLM'. This is a simple and e ective method to correct gene expression confounders and is routinely used to discover biologically signi cant eQTLs. CCLM can be used for known covariates such as age, gender, etc. as well as other hidden covariates inferred using PEER [5] or other similar methods. However, if rank (Y) = , then it can be shown that for the residual matrix, rank (Y ) ≤ − . Therefore, as noted in Sec. 2.6, this may compromise the normality of the distribution of q null rev . Hence, the CCLM correction is problematic to use with Tejaas. . K-nearest neighbor (KNN) correction For the KNN correction, we assume that confounding e ects dominate the gene expression. We expect that the expression levels of the genes of nearest neighbors (NN ) of each sample will be a ected by the same dominant confounder variables. In other words, if the samples are close to one another in the expression space, we expect them to be close to one another in the confounder space. 
Hence, we can correct at least a good part of the confounding e ects by centering the expression y and genotype x of sample using the samples whose gene expressions are the most similar to that of sample , The nearest neighbors NN are calculated using the denoised Euclidean distances between gene expression vectors. To reduce noise in the distance calculation, we remove all but the leading principle components of the singular value decomposition of the expression matrix Y. After subtracting the mean gene expressions and genotype vectors of the nearest neighbors, we center the resulting corrected gene expression and genotype vectors again. Since KNN does not reduce the rank of the gene expression matrix, it works well with Tejaas. As shown in the QQ plots of Fig. S4, the empirical null distribution indeed follows the normal distribution. The choice of should be such that it captures the locally varying e ects of the confounders. As shown later in Fig. S10, too small a value of would lead to excessive statistical noise, while increasing too much will remove too little of the confounding e ects as these start to average out within the neighbors of each data point. In Fig. S5, we show how KNN correction removes the strong dissimilarity between samples in the data. Before KNN correction (Fig. S5a), the samples are strongly dissimilar. We could observe several sample groups which are neighbors in the expression space, probably due to external confounders. Although the samples were clustered in the expression space, we observed distinctive patches in the genotype space. This happens because several confounders (such as population substructure) a ect both genotype and gene expression, leading to unwanted correlation between the rst principal component of the genotype and the expression levels. We could correct for these confounders by centering both x and y in the KNN method (Fig. S5b). The KNN correction has several bene ts in comparison to CCLM and other linear corrections. First, it can correct out non-linear confounding e ects, so, in contrast to the CCLM correction, it should also work when the confounding e ects are not well approximated by linear, additive e ects. Second, since it is non-parametric (except for ), over tting does not typically occur [6,7]. Third and most importantly, it does not require the confounders to be known. Therefore, it can be used for removing hidden confounders, which is often the case with real data. Trans-eQTL simulation This section describes details of a simulation study that evaluates the power of our method to nd trans e ects in gene expression data. We followed the strategy of Hore et al. for the simulation, as described in the Supplementary section of their article [8]. We discuss their simulation details below, partly verbatim. Any di erence with their strategy is explicitly noted. We also compare the . Data simulation Simulated data consisted of genotype and gene expression for = 450 individuals to closely resemble the sample size of the Genotype Tissue Expression (GTEx) project [9][10][11]. We simulated the gene expression data for = 12 639 genes, containing non-genetic signals (background correlation estimated from the GTEx tissues and confounding factors) and genetic signals (cis and trans e ects). Following the strategy of Hore et al. [8], we assumed that each gene contained only one SNP; this simpli ed case is equivalent to assuming that there is at most one cis-eQTL for each gene. . . 
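Returning to the KNN correction of Sec. 3.2, the sketch below shows one way the centering of Eq. (32) could be implemented: neighbors are found in a denoised expression space spanned by the leading principal components, and both expression and genotype are centered by the neighbor means and then re-centered. The number of retained components and the exclusion of each sample from its own neighbor set are our assumptions, not necessarily the choices made in Tejaas.

```python
import numpy as np

def knn_correct(Y, X, n_neighbors=30, n_pcs=30):
    """Unsupervised KNN confounder correction (Sec. 3.2).
    Y: genes x samples expression; X: SNPs x samples genotype."""
    G, N = Y.shape
    # denoise: keep the leading left singular vectors of the (samples x genes) matrix
    U, s, _ = np.linalg.svd(Y.T - Y.T.mean(axis=0), full_matrices=False)
    coords = U[:, :n_pcs] * s[:n_pcs]                   # samples x n_pcs
    # pairwise Euclidean distances between samples in the denoised expression space
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)                        # exclude the sample itself
    nn = np.argsort(d2, axis=1)[:, :n_neighbors]        # indices of the K nearest neighbors
    # subtract the neighbor-mean expression and genotype from each sample
    Yc = Y - np.stack([Y[:, nn[n]].mean(axis=1) for n in range(N)], axis=1)
    Xc = X - np.stack([X[:, nn[n]].mean(axis=1) for n in range(N)], axis=1)
    # center again after the neighbor subtraction
    Yc -= Yc.mean(axis=1, keepdims=True)
    Xc -= Xc.mean(axis=1, keepdims=True)
    return Yc, Xc
```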
Genotypes We used the real genotype data from the GTEx individuals, while Hore et al.used simulated genotypes. After pre-ltering of the GTEx genotype, we randomly sampled = 12 639 SNPs (corresponding to = 12 639 genes) from the genotype data of 450 individuals. From the SNPs sampled, we randomly selected cis = 800 SNPs to be cis-eQTLs. From these cis-eQTLs, we selected a subset trans = 30 SNPs to be trans-eQTLs. These trans-eQTLs were originally cis-eQTLs associated with a nearby gene, which in turn regulated the expression of other distant genes. All SNPs were sampled independently from the genotype. Let X ∈ R × be the sampled matrix of genotypes. . . Gene expression Let Y ∈ R × be the matrix of simulated gene expression data. The data was simulated in stages. First, we generated the non-genetic component of all genes using background correlation (noise) and confounding factors. Second, we added the cis e ects and nally, we added the trans component on the target genes of the trans-eQTLs. Noise. Hore et al.used heteroscedastic background noise. However, to replicate the underlying correlation of the gene expressions in the GTEx samples, we created a Gaussian noise with a covariance matrix obtained from the gene expressions in the artery aorta tissue of GTEx. Let Y as be the observed gene expression in GTEx samples. We decomposed the covariance matrix of Y as such that Cov We de ned the noise as where Z is a × matrix, with every row sampled from a normal distribution with zero mean and unit variance. Note that Cov (Y noise ) = Cov (Y as ) and therefore Y noise retains the background correlation structure of the adipose subcutaneous tissue. Confounding factors. We generated = 10 confounding factors, of which pop = 3 were considered to be due to population substructure. Let W be the × matrix of confounding factors. The three population substructure confounders (W pop ) were set to be the rst, second and third principal components of the corresponding genotype matrix. The remaining ind = − pop independent confounders (W ind ) were sampled from a standard normal distribution, Each confounding factor was assumed to be a ecting only a few target genes. Hence, the e ect size for the th confounding factor on the th gene was sampled with a sparsity , to give us the coe cient matrix of size × . We used = 1.0 unless otherwise mentioned. Finally, we obtained the e ect of confounding factors on the gene expression using an additive model, Cis e ects. In this simulation framework, every gene spatially overlaps with a corresponding SNP in the genotype, i.e., we have (= ) SNP-gene pairs. If the SNP in the th position has a cis e ect ( ∈ cis ), then the expression of the gene is modi ed with a strength , where the direction of the e ect size is random, i.e., sgn ( ) is assigned a value of −1 or +1 randomly. The strength of the cis e ect, i.e., | | depends on whether the SNP also has a trans e ect or not and is given by if SNP also acts as a trans-eQTL (∈ trans ) Gamma (4, 0.1) , otherwise. For all the remaining SNP-gene pairs ( ∉ cis ) with no cis e ects, Y cis = 0. Combining noise, confounding factors and cis e ects. We obtained a temporary set of expression levels for all genes by additively combining the noise, the e ect of confounding factors and the e ect of cis-eQTLs, Note that the Y cis has non-zero values only for the targets of cis = 800 cis-eQTLs, and has zero values for all remaining genes. Trans e ects. 
For simulating the trans e ects, Hore et al.assumed that the trans-eQTLs regulated a nearby gene (via cis e ect) and this cis target gene was a transcription factor (TF) and regulated multiple target genes downstream (excluding other TF genes; could be any other gene including other cis target genes). This ensured that the trans-eQTLs were indirectly associated with the target genes with practically low e ect sizes. Let trans be the number of target genes regulated by the TF and the target genes of every TF were selected at random. Let be the set of TFs which regulate the expression of the th gene and the trans e ect on this gene was simulated as, ∼ Gamma trans , 0.02 . was the relative e ect of TF on the gene . If a gene was not a target for any of the trans = 30 TFs, then Y trans = 0. Combining all of the data. Finally, the contributions from the trans-eQTLs were incorporated to create a nal set of simulated expression levels, and was used for subsequent analyses. . . Choice of simulation parameters As described above, there are several parameters which can be tuned in the simulation to generate a wide range of confounding e ects, cis e ects and trans e ects. Three parameters determine the strength of a simulated trans-e ect: 1. E ect of the cis-eQTL on the TF ( for ∈ cis ). 2. E ect of the TF on the target genes. This is determined by trans and the mean e ect of the TFs on the target genes is given by = 0.02 trans 3. The number of target genes ( trans ). Hore et al. [8] selected these parameters with reference to the KLF14 trans signal [12] so that the signal strengths were similar to the real data. For our simulation, we chose the same values for these parameters, i.e., = 0.6 for ∈ cis , trans = 20 and trans = 150. We also explored several other choices of parameters by reducing the strength of trans e ects, particularly by tuning trans ( = 5, 10 and 20) and trans (= 50 and 100) as shown in Fig. 2 of the main manuscript. In order to get an idea of how realistic the simulations are, we looked at the e ect sizes of the trans-eQTLs and the estimated expression heritabilities contributed by the trans-eQTLs. E ect sizes. The expression of the th gene is given by Hence the e ect of an individual SNP on gene in an additive model, will be given by = and all other terms will be included in the error term . Simple linear regression (methods like MatrixEQTL) calculates an estimate for the coe cient. The standard error for the estimated coe cient will be given by where the residual can be obtained as 2 = Var Y − 2 Var X . For comparison across multiple SNPs and studies the e ect size is scaled by the standard error to obtain the score, Since we planted the trans-eQTLs in simulations, we know the "true" values of the coe cients, i.e., = , and can calculate the scores for all SNP-gene pairs where > 0. We show the distribution of the | | scores in Fig. S6a and compare it with the | | scores of the most signi cant eQTLs predicted by GTEx [13]. It should be noted that the predicted cis-eQTLs and trans-eQTLs from GTEx do not represent the background distribution because there could be many other cis or trans-eQTLs which might have not been predicted. However, the distributions give an idea of the e ect sizes that are required to be considered signi cant by simple regression. The distribution of cis-eQTL | | scores overlap with that of predicted cis-eQTLs in GTEx, indicating that the cis-eQTL e ect sizes are realistic and are expected to be predicted by standard eQTL mapping. 
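A small sketch of how the |z| scores of the planted effects (Fig. S6a) could be computed from the true coefficients; the usual simple-regression standard error SE(β̂) ≈ σ_ε / (sd(x)·√N) is our assumption here, as the exact expression is not reproduced above.

```python
import numpy as np

def planted_zscore(beta, x, y):
    """z = beta / SE(beta) for one planted SNP-gene pair with true effect beta.
    Uses sigma_eps^2 = Var(y) - beta^2 * Var(x) and the standard simple-regression
    standard error SE = sigma_eps / (sd(x) * sqrt(N)) (our assumption).
    x, y: genotype and expression vectors across samples."""
    n = len(x)
    sigma_eps2 = np.var(y) - beta**2 * np.var(x)
    se = np.sqrt(sigma_eps2) / (np.std(x) * np.sqrt(n))
    return beta / se

# the |z| of every planted cis or trans pair can then be histogrammed as in Fig. S6a
```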
The e ect size of the simulated trans-eQTLs are much weaker compared to the 156 signi cant trans-eQTL and target gene pairs discovered by the GTEx consortium. Consistent with our expectations, (i) the trans-eQTLs have weaker e ect sizes compared to cis-eQTLs and (ii) they cannot be discovered by standard eQTL mapping with the given sample size. However, it remains unknown how much weaker the trans-eQTLs really are. Heritability. The expression heritability (narrow-sense) and their cis and trans components for the th gene can be calculated as where the denominator Var Y depends non-trivially on the correlated noise and other confounding e ects: where we have used We can assume that the last term Cov Y cis , Y trans = 0 because if gene is targeted by both a cis-eQTL (X ) and a trans-eQTL (X , ∈ ), then the trans-eQTL must be distally separated from the cis-eQTL and therefore, probably not correlated so that Cov X , X = 0. It follows that the last term in the numerator of Eq. (53) can also be assumed to be zero. To obtain the cis and trans heritability, we empirically calculated Var Y from the simulated expression data and calculated the numerators in Eq. (52) and (53) from the input e ect sizes and sampled genotypes for the simulation. The mean cis and trans heritability over all genes can be obtained as, where cis are the genes targeted by the cis-eQTLs and trans are the genes targeted by the trans-eQTLs. The proportion of trans-heritability is given by The mean cis heritability across all simulations was found to be ℎ 2 cis = 3.8 × 10 −4 . The distribution of trans over the 20 simulation replicates for each set of simulation parameters used in Fig. 2 (main text) is shown in Fig. S6b. In real data, trans is expected to be ≥ 0.7 from recent estimates [14] and the proportion of trans-heritability in our simulations is lower than these recent estimates. Since the trans e ect sizes used in the simulations are close to our expectation in real data, the number of TFs and / or target genes used in our simulation are possibly lower than what would be expected in real data. With fewer target genes it is harder to predict the trans-eQTLs using FR and RR of Tejaas, while simple regression methods like MatrixEQTL does not depend on the number of target genes. Judging by the e ect sizes and the heritability estimates, the parameters in our simulations represent a conservative estimate of our expectations in reality. . ROC pAUC analysis We ranked the trans-eQTL predictions from each tool by descending score and computed the cumulative number of true and false positives (TP, FP) up to each score. We obtained the true positive rate (TPR) as the proportion of positives (with respect to total ground truth positives) correctly identi ed at that threshold; and the false positive rate (FPR) as the proportion of false positives (with respect to total ground truth negatives) identi ed at that threshold. This is often summarized as, where FN and TN denotes the false negative and the true negative respectively. The Receiver Operating Characteristic (ROC) curve shows the TPR against FPR at various thresholds (Fig. S7). Traditionally, ranking performance is measured by the full area under the ROC curve (AUC). However, AUC summarizes the entire ROC curve, including regions that are not relevant to predicting trans-eQTLs (e.g., regions with low levels of speci city). 
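A minimal sketch of the ROC construction described above, together with the truncated (partial) area used in the following paragraph; variable names are ours.

```python
import numpy as np

def roc_curve_from_scores(scores, is_true):
    """TPR and FPR at every threshold, from predictions ranked by descending score."""
    order = np.argsort(-np.asarray(scores))
    truth = np.asarray(is_true, dtype=bool)[order]
    tp = np.cumsum(truth)                      # true positives above each threshold
    fp = np.cumsum(~truth)                     # false positives above each threshold
    tpr = tp / truth.sum()                     # TP / (TP + FN)
    fpr = fp / (~truth).sum()                  # FP / (FP + TN)
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def partial_auc(fpr, tpr, max_fpr=0.1):
    """Area under the ROC curve restricted to FPR <= max_fpr."""
    keep = fpr <= max_fpr
    return float(np.trapz(tpr[keep], fpr[keep]))
```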
In order to alleviate this de ciency while bene ting from some of the advantageous properties of the AUC, we used a partial area under the ROC curve (pAUC) which summarizes a portion of the curve over the pre-speci ed range of interest, FPR ≤ 0.1 (Fig. S7). Specifying the range of interest as FPR ≤ 0.1 is arbitrary. To give a concrete example, we have 12639 SNPs of which 30 are trans-eQTLs. FPR = 0.1 corresponds to a threshold at which top SNPs are selected such that there are 1261 false positives (10% of total 12609 true negatives) among those selected SNPs. For the three methods compared here, suppose the number of predicted trans-eQTLs are Tejaas , FR and MEQTL for FPR=0.1. The higher the number of true positives at that speci city ( = 1 -FPR = 0.9), the better is the method. However, the threshold p-value or FDR required for choosing Tejaas , FR and MEQTL are di erent. . Methods compared We compared Tejaas RR-score with two methods: (1) MatrixEQTL [15] as a representative for current standard eQTL pipelines, and (2) Tejaas FR-score, as an alternative for CPMA [1] which uses the same underlying property of trans-eQTLs that they target multiple genes simultaneously. MatrixEQTL. We used the R package for MatrixEQTL with default options. The covariate correction was done separately but used the same linear model correction implemented in Matrix-EQTL. Since we were not interested in the cis-eQTLs, we used a cis-window of 0Kb. FR (∼CPMA). Since CPMA is not implemented as a software, we implemented the FR-score (q fwd ) within Tejaas as an alternative. Internally, the method runs over the data twice: (1) First, create an empirical null model from the data and (2) then calculate q rev as well as an empirical p-value using the previously generated null model. This is the same procedure as CPMA, as described in their manuscript [1]. . Supplementary simulation results Selecting the regularizer. We plotted the non-Gaussian parameter ( ) and the standard deviation of at di erent values of in Fig. S8a. As noted above in Sec. 2.8, the best choice of should be greater than arg min ( ). Ideally, we also want a high value for the standard deviation of to ensure a broad distribution of q null rev . Therefore, we chose = 0.2. To validate our choice, we looked at the ranking of the trans-eQTLs at di erent values of in Fig. S8b. Each ROC curve was obtained by averaging over 20 simulations using default simulation parameters (see above) and KNN correction with 30 neighbors. In the inset, we show the partial areas under the ROC curves (pAUC) where the false positive rate (FPR) ≤ 0.1. We found that Tejaas works best at this choice of . The rest of the simulations were performed using = 0.2. Note on inverse normal transform. Current GTEx pipeline uses a two-step approach for normalizing the read counts of the genes: (1) read counts are normalized using TMM (Trimmed Mean of M-values [16]) and ltered for thresholds, (2) expression values for each gene are then inverse normal transformed across samples. The gene expression generated in our simulations are not read counts, but equivalent to the TMM values. Hence we skipped the rst step. We did perform the second step of inverse normal transformation before the confounder corrections. As expected, we found that CCLM correction (Sec. 3.1) bene ts signi cantly by using the inverse normal transformation (not shown). For the KNN correction (Sec. 3.2), however, we found that the inverse normal transformation reduces the accuracy of Tejaas (Fig. S9). 
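For reference, a minimal sketch of a rank-based inverse normal transform applied per gene across samples; the rank offset used here is one common convention and may differ from the one in the GTEx pipeline.

```python
import numpy as np
from scipy.stats import rankdata, norm

def inverse_normal_transform(values, offset=0.5):
    """Map one gene's expression values across samples to Gaussian quantiles
    via their ranks: z = Phi^{-1}((rank - offset) / N)."""
    r = rankdata(values, method="average")
    return norm.ppf((r - offset) / len(values))

# applied per gene, i.e. to each row of the expression matrix
```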
It might be because the neighbor information gets skewed with this transformation. Hence, we performed KNN correction without the inverse normal transformation both for the simulations and the GTEx application. Number of neighbors for KNN correction. Another important aspect of Tejaas is to choose the number of neighbors for the unsupervised KNN correction. To ascertain the robustness of the KNN correction, we performed simulations with di erent number of confounding factors, = 10, 20 and 30. We then compared the ranking of Tejaas using di erent number of nearest neighbors, as shown in Fig. S10. The rankings were compared using the pAUC where the false positive rate (FPR) ≤ 0.1. We found that the choice of KNN neighbors does not depend on the number of confounding factors, and we obtained the best accuracy with 30 nearest neighbors. Ranking of trans-eQTLs. Complementary to the partial area under ROC curves (Fig. 2 of main text), we also show the proportion of true positives (on average) included in the top ranked trans-eQTL SNPs in Fig. S12. Target gene discovery. We investigated whether the increased accuracy of Tejaas to detect trans-eQTL SNPs also increases the accuracy of predicting trans target genes. As discussed in Sec. 2.9, Tejaas calculates single SNP-gene pairwise regression to nd the set of candidate target genes. However, unlike MatrixEQTL, the trans-eQTL SNPs are preselected using reverse regression. The simulations provide us the ground truth to compare the performance of the di erent methods in predicting correct SNP-gene pairs. The SNP-gene pairs reported by Tejaas, FR and MatrixEQTL were ranked by their p-values. Each true positive is a correctly predicted pair of trans-eQTL SNP and target gene, each false positive is a wrongly predicted pair. From the ROC curve, we calculated the pAUC up to a FPR of 0.1. The mean pAUC averaged over 20 simulations for each method and di erent simulation settings are shown in Fig. S13. For Tejaas and FR, we can limit the number of SNP-gene pairs to test by selecting trans-eQTL SNPs with RR or FR ≤ cuto . We tested three increasingly signi cant p-value cuto s, namely 1 × 10 −2 , 1 × 10 −3 and 1 × 10 −4 , and evaluated the sensitivity and speci city to discover the simulated true SNP-gene pairs. MatrixEQTL only tests single SNP-gene pair associations and therefore, we cannot perform the pre-ltering step. Methods Tejaas achieves higher accuracy for nding true SNP-gene pairs compared to FR and Matrix-EQTL over a wide range of simulation settings. All methods use the same association test for predicting SNP-gene pairs and the only di erence between the three methods is the preselection of trans-eQTL SNPs. Hence, the improvement of Tejaas over MatrixEQTL must be due to accurate preselection of trans-eQTL SNPs (see Fig. 2 of main text). MatrixEQTL needs to test a large number of comparisons ( × ), and cannot bene t from the preselection step. FR performs poorly due to its poor sensitivity to predict trans-eQTLs in the preselection step (see Fig. 2 of main text). This analysis quantitatively shows that an increased accuracy to detect trans-eQTL SNPs can improve the prediction of trans-eQTL target genes, even when using simple regression methods. GTEx analysis To illustrate Tejaas in a real data set, we analyzed trans-eQTLs across 49 human tissues using data from the Genotype Tissue Expression version 8 (GTEx v8) project [9][10][11]. 
In this section, we explain the preprocessing steps used for the GTEx analysis and report supporting results related to our analysis. RNA-Seq expression data We downloaded the phased RNA-seq read count expression matrix from the dbGaP portal ( lename: phASER_GTEx_v8_matrix.txt.gz). The phASER processed expression uses read-backed mapping for each RNA-seq sample, using the genotype from the same individual as reference to map and phase RNA-seq reads. For each gene, two read count values are reported, one for each expressed allele. We added up both count values and calculated TPMs (Transcripts Per Million). For quality control, we retained genes with expression values > 0.1 and more than 6 mapped reads in at least 20% of the samples. . Covariate correction Tejaas uses the TPMs (or TMMs) for KNN correction to remove confounders and subsequent trans-eQTL discovery. However, as discussed in Sec. 2.9, the explicit covariate-corrected gene expression is required for nding target genes of the trans-eQTLs. We downloaded the covariate les from the GTEx portal [17] ( lename: GTEx_Analysis_v8_eQTL_covariates.tar.gz). The covariates include the rst 5 principal components of the genotype (see Supplementary Material of [13] for details), donor sex, WGS sequencing platform (HiSeq 2000 or HiSeq X) and WGS library construction protocol (PCR-based or PCR-free). Additionally, from phenotype les available in dbGaP, we included donor age and post mortem interval in minutes ('TRISCHD') as covariates. We inverse normal transformed the TPMs (or TMMs) and used CCLM (Sec. 3.1) to remove the contribution of all the covariates and used the residuals for target gene discovery. One possible way to check the e cacy of the unsupervised KNN correction is to check whether it can be explained by the confounding e ects of known covariates and if so, to what extent. To test this, we checked the correlation of the 'KNN confounder' with known technical and biological covariates of GTEx samples and subjects. For each tissue , the KNN confounder (C knn ∈ R × ) is simply the term we subtracted from the expression values of all the genes in Eq. (32), Categorical covariates with ≤ 4 categories were binarized or converted to integers otherwise. Values with 'Not Reported' or 'Unknown' status were considered as missing values and were not considered in the analysis. For each covariate, we selected only those with at least 50 observations in any given tissue. We then performed a simple linear regression (SLR) to explain C knn with C gtex as predictors using Python's sklearn.linear_model.LinearRegression, From the tted models, we obtained the coe cient of determination 2 for every covariate in tissue . In other words, the 2 corresponds to the proportion of the variance of C knn that can be explained by the th known covariate in tissue . The resulting 2 values are reported in Fig. S14 (bottom panel). Indeed, the variance of KNN correction are explained to di erent extents by several known covariates. For example, the rst principal component of the genotype (PC1) and the sample's reported race both explain a signi cant proportion of the variance of C knn . The next four principal components, PC2 to PC5 do not contribute to explaining the variance of C knn . Interestingly, sample covariates related to library size and quality as well as various rates for mapped reads explain di erent proportions of the variance of C knn in di erent tissues, indicating technical confounders in many GTEx tissues. 
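A sketch of the per-covariate variance-explained calculation described above, assuming C_knn is stored as a genes × samples matrix and averaging the per-gene R² uniformly (the default behaviour of sklearn's LinearRegression.score for multi-output responses); the exact aggregation used for Fig. S14 may differ.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def covariate_r2(C_knn, covariate):
    """Proportion of the variance of the KNN confounder explained by one known
    covariate, via simple linear regression.
    C_knn: genes x samples matrix of the subtracted KNN term;
    covariate: length-N float vector (categorical covariates binarized or
    integer-coded beforehand; NaN marks missing values)."""
    mask = ~np.isnan(covariate)                 # drop samples with missing covariate
    X = covariate[mask].reshape(-1, 1)          # predictor: the known covariate
    Y = C_knn[:, mask].T                        # responses: samples x genes
    model = LinearRegression().fit(X, Y)
    return model.score(X, Y)                    # uniform average of per-gene R^2
```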
Next, we wanted to check the total proportion of the variance of C_knn explained by known covariates, which cannot be done using the SLR set-up described above. For each tissue t, we selected those covariates with R² ≥ 0.05 in the SLR set-up. We then performed a multiple linear regression (MLR) with all of the covariates retained for tissue t as joint predictors. The percentage of the total explained variance of C_knn for each tissue is shown in the top panel of Fig. S14. On average across all tissues, the known covariates that we used explained 39.9% of the variance of the KNN correction.

Tejaas regularizer selection for GTEx. As in the simulations (Sec. 4.4), we selected the regularizer γ for the GTEx tissues using the non-Gaussian parameter (Sec. 2.8). From Fig. S15, we found that we could use γ = 0.1 for most of the tissues in GTEx v8. However, there were four tissues for which K(γ) at γ = 0.1 deviated strongly from the minimum. Hence for these four tissues, namely heart atrial appendage, pancreas, spleen and whole blood, we chose γ = 0.006.

We used a binomial test to calculate the p-values for the enrichment. If n is the number of trans-eQTLs in the tissue, and k of them carry a feature found with background probability p, then the probability of finding exactly k of them with that feature is the binomial probability P(k; n, p), and P(X > k) gives us the p-value for the tissue-GWAS pair.

Effect of cis-masking. We applied Tejaas on 38 GTEx tissues (except brain tissues and bladder) with and without cis-masking (Sec. 2.10) to check the effect of the cis-masking option of Tejaas for discovering trans-eQTLs in the GTEx data. We found that ∼97% of the trans-eQTLs discovered with cis-masking (shown in green in Fig. S17a) are also discovered without cis-masking (shown in red in Fig. S17a). However, if no cis-masking is used, Tejaas discovers ∼11% more trans-eQTLs than with cis-masking. This increase in significant trans-eQTLs might be due to strong cis-eQTLs, as envisioned in Sec. 2.10. We can understand the effect of cis-masking by looking at how many of our trans-eQTLs are also reported as cis-eQTLs by the GTEx Consortium [13]. We defined the proportion of cis-eQTLs among the trans-eQTLs, relative to the proportion expected by chance, as the cis-eQTL enrichment. In Fig. S17b, we compare the log2 cis-eQTL enrichment for trans-eQTLs discovered with and without cis-masking in all 38 tissues. The mean log2 enrichment of cis-eQTLs across all tissues (shown in the inset of Fig. S17b) is 0.10 with cis-masking and increases to 0.77 without cis-masking. This increase comes from the extra ∼11% of SNPs discovered without cis-masking, indicating that cis-masking is crucial to avoid strong cis-eQTLs being falsely reported as trans-eQTLs by Tejaas.

Effect of cross-mappable genes. False trans-eQTL signals could arise from multi-mapped reads within genes with high sequence similarity, where reads from a given gene are mapped to one or more other genes elsewhere in the genome, producing an association that resembles a long-range regulatory effect. This problem was explored by Saha and Battle [18] and can be mitigated using cross-mappability scores. We obtained pre-computed cross-mappability scores from [18], with settings of k-mer length 75 for exons and 36 for UTRs. In Tejaas, we extended the cis-masking approach to exclude all genes that cross-map (cross-mappability score > 1) with any of the cis-genes listed in a given cis-masking group, and we call this the cross-mappability filter.
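A minimal sketch of how the cross-mappability filter could extend a cis-mask: all genes cross-mapping (score > 1) with any cis-gene of the group are removed from the expression matrix before that group's SVD. The dictionary format for the pre-computed scores and the function names are our simplifications.

```python
import numpy as np

def genes_to_mask(cis_genes, crossmap_scores, all_genes):
    """Genes removed for one cis-masking group: the cis genes themselves plus every
    gene whose cross-mappability score with any of those cis genes exceeds 1.
    crossmap_scores: {(gene_a, gene_b): score} (our assumed format)."""
    cis = set(cis_genes)
    masked = set(cis)
    for (ga, gb), score in crossmap_scores.items():
        if score > 1:
            if ga in cis:
                masked.add(gb)
            if gb in cis:
                masked.add(ga)
    return sorted(masked & set(all_genes))

def masked_expression(Y, gene_ids, genes_to_remove):
    """Drop the masked genes' rows before the per-group SVD used for the RR-score."""
    remove = set(genes_to_remove)
    keep = [i for i, g in enumerate(gene_ids) if g not in remove]
    return Y[keep, :], [gene_ids[i] for i in keep]
```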
We applied our trans-eQTL discovery pipeline on all non-brain tissues of GTEx with the cross-mappability filter.

Table S1 | Summary of the number of trans-eQTLs before and after the cross-mappability filter. We report two sets of tissues: one set contains the non-brain tissues for which we used a regularizer of 0.1 in our analysis, and the other contains the tissues for which we used 0.006. We report the total number of lead trans-eQTLs after LD pruning for the different analyses. Numbers in brackets are the numbers of unique trans-eQTLs in the given set.

The cross-mappability filter masks out a large number of genes, often more than a thousand. For example, the average number of cis-genes masked in whole blood is ∼ 9, while the average number of cross-mapped genes masked for whole blood becomes ∼ 3508. About 34% of trans-eQTLs were filtered out for tissues with a regularizer of 0.1 (Table S1 and Fig. S18a), indicating that they might be false signals from cross-mapped genes. However, we unexpectedly observed an increase in the number of predicted trans-eQTLs in the set of four tissues (whole blood, heart atrial appendage, spleen and pancreas) where we used a regularizer of 0.006. This is because removing the large number of cross-mapped genes breaks down the normality assumption of the null model for these tissues, which are very sensitive to the choice of the regularizer, as can be seen from their non-Gaussian parameter (Fig. S15). For the other tissues, the choice of the regularizer is relatively stable, and removing the cross-mapped genes does not affect the choice. For our assumptions to hold, we would need to optimize the regularizer for every SNP in these tissues, to account for every set of masked genes when using the cross-mappability filter. This optimization is time consuming. However, with an approximate choice of 0.008, obtained by optimizing the non-Gaussian parameter after removing a random set of 3000 genes from the expression data, the results are comparable to those found in other tissues (Table S1 and Fig. S18b).

We compared the enrichment in functional regions and cis-eQTLs before and after the cross-mappability filter. The enrichment of trans-eQTLs in DHS regions and cis-eQTLs does not change significantly (p > 0.1) with and without the cross-mappability filter (Fig. S19). Despite this, we do observe a slight decrease in the enrichment of cis-eQTLs for many tissues, consistent with the false positive trans-signals that the cross-mappability filter is meant to remove.

Effect of GTEx populations in predicted trans-eQTLs

To assess the effect of population differentiation in GTEx, we calculated fixation index (F_ST) values as in [19] on the GTEx genotype data. We compared the distribution of F_ST values for all GTEx SNPs with that of the set of trans-eQTLs predicted with Tejaas (Fig. S20, left panel). The average F_ST value for the predicted trans-eQTLs in each tissue is shown on the right panel of Fig. S20. High F_ST values are indicative of allele frequency differences between subpopulations present in the data. Population substructure could give spurious indications of eQTL associations [20]. To rule out confounding effects of population differentiation from our enrichment analysis, we sampled null sets with the same F_ST distribution as that of the predicted trans-eQTLs. We re-calculated DHS functional enrichments (Fig. S21) and GWAS enrichments (data not shown). However, we did not observe any significant deviation from the previously calculated enrichments.

Regulatory element enrichment in brain tissues

Brain tissues in GTEx have a generally lower number of RNA-seq samples compared to other tissues.
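One generic way to draw the F_ST-matched null sets mentioned above is to bin the F_ST values of all tested SNPs and sample background SNPs per bin in proportion to the predicted trans-eQTLs. The sketch below is such a matching recipe, not the authors' exact procedure; the array names, number of bins and sampling details are assumptions.

```python
# Illustrative Fst-matched null sampling (assumed procedure, not from the paper).
import numpy as np

def fst_matched_null(fst_all, fst_hits, n_bins=20, seed=0):
    """fst_all: Fst of all tested SNPs; fst_hits: Fst of the predicted trans-eQTLs.
    Returns indices into fst_all forming a null set with a matched Fst distribution."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(fst_all, np.linspace(0, 1, n_bins + 1))
    hit_bins = np.digitize(fst_hits, edges[1:-1])   # bin index of each trans-eQTL
    all_bins = np.digitize(fst_all, edges[1:-1])    # bin index of each tested SNP
    chosen = []
    for b in range(n_bins):
        n_wanted = int((hit_bins == b).sum())       # match the per-bin count of hits
        pool = np.flatnonzero(all_bins == b)
        if n_wanted and pool.size:
            chosen.append(rng.choice(pool, size=min(n_wanted, pool.size), replace=False))
    return np.concatenate(chosen) if chosen else np.array([], dtype=int)

# toy usage with simulated Fst distributions
rng = np.random.default_rng(1)
null_idx = fst_matched_null(rng.beta(0.5, 10, size=100_000), rng.beta(1.0, 8, size=500))
print(null_idx.size)
```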
This is one of the reasons why they display a lower number of predicted trans-eQTLs with Tejaas. Other unknown confounders in the expression measurements could also affect Tejaas predictions, as it has been reported that brain tissues have markedly distinct expression signatures in GTEx that separate them from the rest of the tissues. This could be due to the post mortem nature of the samples. We report DHS, raQTL and functional region enrichments for trans-eQTLs predicted in brain tissues in Fig. S22. Due to the low number of predictions, the reported enrichments are not reliable.

Performance of Tejaas on null data

On null data, Tejaas does not show any spurious association (Fig. S23). We used the same gene expression as used for trans-eQTL discovery to check whether the correlation structure of the gene expression can lead to false discoveries. We removed any possible trans-eQTL signal by permuting the sample labels of the genotype. We found no significant association in any GTEx tissue.

Replication in eQTLGen

We downloaded the list of significant trans-eQTLs discovered in the eQTLGen study from https://www.eqtlgen.org/trans-eqtls.html (version from 2018-09-04). It contains 59 786 SNP-gene pairs and a total of 3 853 unique trans-eQTLs in whole blood. In Appendix 3, we report the overlap between the trans-eQTLs from eQTLGen and the lead trans-eQTLs discovered with Tejaas in all GTEx tissues. Enrichments were calculated as in Sec. 5.7. Replication across all GTEx tissues is low, with whole blood showing the highest number of replicated variants. Although we do not expect to replicate other studies that used single SNP-gene pair association methods, it is reassuring to see that Tejaas can replicate trans-eQTLs discovered in the same tissue type. Nonetheless, we must highlight that the eQTLGen meta-analysis study includes GTEx whole blood expression.

GWAS analysis

We investigated the overlap of the novel trans-eQTLs discovered by Tejaas with GWAS variants of complex traits to find transcriptional regulatory mechanisms through which SNPs affect complex diseases. In this section, we discuss the details of our analyses and report supplementary results.

Data source for GWAS summary statistics

We used two different libraries of GWAS-associated SNPs:
1. GWAS Catalog. Data was obtained from the EBI website [22]. We used version 'e98_r2020-03-08', which contains 4493 studies with a total of 179 364 associations [23,24].
2. Complex trait GWAS. A set of 87 GWAS compiled by Barbeira et al. [25]. Summary statistics from these GWAS were harmonized and imputed to GTEx v8 variants with MAF > 0.01 using only European samples. Of these, 86 traits had at least one genome-wide significant SNP (p < 5 × 10⁻⁸) among the GTEx v8 variants, which were tested for trans-eQTLs. These 86 traits were broadly classified into 11 disease categories.

Calculation of GWAS enrichment

We used two different strategies for calculating the enrichment of trans-eQTL SNPs in GWAS for the two different datasets.

Complex trait GWAS. In this case, we wanted to compare the GWAS enrichment of trans-eQTLs with that of GTEx cis-eQTLs at varying cutoffs of GWAS p-values. Hence, we plotted the fraction of cis-eQTLs, the fraction of trans-eQTLs and the fraction of total tested SNPs that overlap with SNPs associated with a complex disease (or disease category), as shown in the representative example of Fig. S24a. We used different cutoff p-values for selecting the GWAS-associated SNPs. Each disease category is a collection of several disease phenotypes.
To obtain category-wise p-values for each SNP, we assigned the minimum GWAS p-value from the phenotypes that constituted the category. Finally, we calculated the GWAS enrichment of eQTLs (cis and trans, respectively) as

    enrichment_GWAS = (fraction of eQTLs overlapping with GWAS SNPs) / (fraction of tested SNPs overlapping with GWAS SNPs)   (73)

This is shown in the representative example of Fig. S24b. We used all reported cis-eQTLs and all predicted trans-eQTLs, without pruning for LD regions, to calculate the overlaps and enrichments (a short code sketch of this computation is given below).

P-value calculation. We used a binomial test to calculate the p-values for the enrichment of each tissue-GWAS pair. If n is the number of trans-eQTLs in the tissue, then the number of them found in a GWAS follows a binomial distribution under the null, and the upper-tail probability gives us the p-value for the tissue-GWAS pair.

Supplementary results

Complementary to Fig. 5b in our main text, we report all pairs of tissues and disease categories which showed significant enrichment (p < 0.05) for trans-eQTLs to overlap with GWAS-associated SNPs, selected at a nominal cutoff of p < 1 × 10⁻⁶ (Table S2). In Fig. S25, we report the GWAS enrichment for every tissue-study pair for the 87 disease phenotypes, at a nominal GWAS cutoff of p ≤ 1 × 10⁻⁶. Traits without any significant GWAS-associated SNPs are not shown. Similar to the disease categories in Fig. 5 in the main text, the GWAS enrichments of trans-eQTLs are specific to certain tissue-disease pairs. Some of these pairs suggest a biological relationship, such as the enrichment of thyroid trans-eQTLs in hypothyroidism or the enrichment of adrenal gland trans-eQTLs in hypertension. There are many other such interesting physiological links, which merit a thorough analysis in the future.

We rewrite q_rev in terms of simpler sums: the first sum can be expressed through moments of W, which we define for this purpose, and inserting these into Eq. (80) yields the required expression.

Variance of RR-score

We determine the variance from its defining equation and now derive expressions for the sums of elements of W that appear in equation (90).
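The fraction-based enrichment of Eq. (73) and the binomial tail p-value described under 'P-value calculation' can be written compactly as below. SNP identifiers and the toy numbers are purely illustrative; scipy's binom.sf gives the upper-tail probability used here.

```python
# Hedged sketch of Eq. (73) and the binomial enrichment p-value described above.
from scipy.stats import binom

def gwas_enrichment(eqtls, tested_snps, gwas_snps):
    """Fraction-based enrichment of eQTLs among GWAS-associated SNPs (Eq. 73).
    All arguments are sets of SNP identifiers."""
    frac_eqtl = len(eqtls & gwas_snps) / len(eqtls)
    frac_background = len(tested_snps & gwas_snps) / len(tested_snps)
    return frac_eqtl / frac_background

def enrichment_pvalue(n_trans, n_overlap, background_rate):
    """Binomial upper-tail p-value: probability of observing more than
    n_overlap of the n_trans trans-eQTLs in the GWAS set by chance."""
    return binom.sf(n_overlap, n_trans, background_rate)

# toy usage with simulated SNP sets
tested = set(range(10_000))
gwas = set(range(0, 10_000, 50))         # 2% of tested SNPs are GWAS hits
trans = set(range(0, 500, 5))            # 100 hypothetical trans-eQTLs
print(gwas_enrichment(trans, tested, gwas))
print(enrichment_pvalue(len(trans), len(trans & gwas), len(gwas) / len(tested)))
```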
Pharmacological Clinical and Experimental Studies on the Effect of Alginates in the Treatment of Obesity

Introduction: In the prevention and treatment of obesity, dietary regimens play a major role. Recently, the benefits of different food supplements have been pointed out. The aim of our study was to examine the effect of alginates on weight loss, body mass index (BMI) and some metabolic parameters in adults and rodents. Materials and methods: The first study was a clinical study. A total of 108 obese patients entered the trial. They received a diet with a 500 kcal daily deficit and alginates as a food supplement. Anthropometric measures were done at baseline and after 6 weeks. Lipid status was also measured. The second study was an experimental study on 48 male obese Wistar rats with a previously developed nutritional model of obesity. Results: We found a statistically significant reduction of BMI and waist circumference in the group treated with alginates. Further, total cholesterol, LDL-cholesterol and triglycerides were also statistically significantly reduced in the treated group. Our experimental study showed that rats treated with alginic acid demonstrated a significant reduction of body weight. Moreover, there were changes in the blood levels of ghrelin between those treated with and without alginates. Conclusions: Our study suggests that alginates could be useful for the reduction of body weight and can reduce lipid cardiovascular risk factors in both humans and animals.

I. INTRODUCTION

Obesity is a widespread disease, not only in countries with a high standard of living but also in countries with a lower standard. According to the Bulgarian Association for the Study of Obesity and Related Diseases (BASORD), more than 56% of adult Bulgarians are overweight (BMI > 25 kg/m²) and 17.7% are obese (BMI > 30 kg/m²). The treatment of obesity includes three major approaches: diet, increased physical activity and drug therapy. Drugs for the treatment of obesity have to be effective in weight reduction, well tolerated and without side effects. Obesity continues to grow steadily despite the numerous efforts to prevent and fight it. Food supplements containing alginic acid are an approved pharmacological treatment of obesity. However, the effect of such supplements on appetite regulation and metabolism is still unknown. The new Bulgarian food supplement "Algigracil" consists of alginic acid salts and an antioxidant complex. One dose (one sachet) contains highly purified alginates (compounds of alginic acid), Vitamin E 30 mg, Vitamin C 60 mg and beta-carotene 3 mg. Dissolved in water and taken within 20-30 sec, the alginates turn into a gel in the stomach, which remains there for 2-3 hours and brings forth a sense of repletion. Aims: A) To examine the effect of alginates on weight loss, BMI and lipid parameters in patients with obesity; B) To investigate a possible effect of alginates on the orexigenic hormone ghrelin in obese male rats.

Clinical Study

108 patients (68 females and 40 males) with obesity were examined (mean age 37.8 years, mean BMI 33.7 kg/m²). Alginates were applied in a dose of 2 sachets t.i.d. in parallel with a diet with a 500 kcal daily deficit for 6 weeks. Anthropometric indexes were measured as follows: body weight (kg) and BMI (kg/m²). The waist to hip ratio was also calculated. The percentage of fat mass was determined by a bioimpedance apparatus (Tanita 420). Total cholesterol, HDL-cholesterol, LDL-cholesterol and triglycerides were measured in human plasma.
All tests were made at the beginning and at the end of the study. Data were statistically analyzed with SPSS, v.16. Statistical significance was set at p < 0.05.

Experimental Study

A total of 48 male Wistar rats (200-220 g) were used. Rats were randomized into 4 groups. In the first group, rats were fed a standard chow diet (controls); in the second group, rats were fed standard chow plus alginates; in the third group, rats were fed a high-fat diet containing a mixture of various nuts plus standard chow food (experimental obesity); the fourth group was fed a high-fat diet, chow food and alginates. All the groups were fed for a 3-week period. Alginates were given once a day (0.5 ml/100 g). Food intake was determined daily; body weight was measured at the end of every 3 days. At the end of the study, the rats were anaesthetized and blood was collected. Epididymal fat tissue and various organs were removed and weighed. Blood specimens were taken for biochemical analyses. Plasma ghrelin concentrations were examined by ELISA methodology.

Clinical Study

After 6 weeks, a reduction of body weight (kg), BMI, waist to hip ratio, fat mass (%) and fat mass (kg) was observed (Table 1). The food supplement with alginates was very well tolerated by our subjects. No side effects were reported.

Experimental Study

Alginates affected the development of obesity and the parameters of carbohydrate and fat metabolism in rats. A decrease of plasma lipids and glucose was determined in the groups treated with alginates. Those results correlated with a significant decrease of pancreas weight in rats. The group treated with alginic acid showed a significant reduction of body weight and BMI. Alginates reduced the weight gain (by 27.5%) in the fourth group (experimental obesity plus alginates) compared with the third group (experimental obesity). Moreover, there were changes in the blood levels of ghrelin between those treated with and without alginates.

IV. DISCUSSION

The effect of the food supplement is based on the property of the compounds of alginic acid (an organic acid originating from kelp) to produce a gel in an acidic medium (the gastric juice), which is insoluble and hard to assimilate into the organism. Thus, taking this kind of food supplement would reduce further intake of high-calorie foods, and would possibly promote weight loss and good patient compliance towards weight maintenance. The vitamins contained in this food supplement are properly selected and contribute to preventing degenerative vascular changes. These vitamins of high antioxidant capacity reduce the level of oxidative stress and preserve physiological cell functions. The high-fat diet led to the development of obesity in male Wistar rats. The group treated with alginic acid showed a significant reduction of body weight and adiposity. Moreover, there were changes in the blood levels of ghrelin between those treated with and without alginates. Thus, the beneficial effect of alginates on ghrelin could possibly be explained by their mechanical effect on the stomach mucosa, and thus on ghrelin secretion.

V. CONCLUSIONS

After six weeks of treatment with alginates, a significant decrease of body weight and BMI (p < 0.01) was established in comparison to the control group. Alginates assured mechanical satiety. The food supplement showed a lack of side effects.  Our results demonstrated that alginates inhibit the development of obesity in male Wistar rats.
In addition, the tested Bulgarian supplement had the capacity to reduce the blood glucose and lipid levels. The supplement assured mechanical satiety. Moreover, the product did not present side effects in rats.  The beneficial effect of alginates on ghrelin could possibly be explained by their mechanical effect on the stomach mucosa, and thus on ghrelin secretion.
Clustering and Fuzzy Logic-Based Demand-Side Management for Solar Microgrid Operation: Case Study of Ngurudoto Microgrid, Arusha, Tanzania

Department of Materials and Energy Science and Engineering, Nelson Mandela African Institution of Science and Technology, P.O. Box 447, Arusha, Tanzania
Water Infrastructure and Sustainable Energy Futures, Nelson Mandela African Institution of Science and Technology, Nelson Mandela Rd, P.O. Box 9124, Arusha, Tanzania
Electrical Engineering Department, Saint Augustine University of Tanzania, P.O. Box 307, Mwanza, Tanzania

Introduction

Electrical energy has been one of the most unsubstitutable and flexible sources of energy [1]. It is a vital resource in boosting the economy of any country. Therefore, electricity consumption is ever increasing, leading to unreliable operation and low grid efficiency [2]. Meeting this ever-increasing demand means overexploiting the limited resources by establishing new power plants [3]. Towards meeting sustainable energy goals, microgrids have gained popularity as a worldwide trend of exploring green energy technologies [4][5][6]. Microgrids' challenges regarding reliability and stability have prompted much research on how to match demand with the available supply [7][8][9]. Also, finding appropriate ways of saving energy through customer-utility involvement has led to the concept of demand-side management (DSM) [10]. Through DSM, moderating power consumption and decreasing peak demand have become the most significant concerns of both customers and utility companies [11,12]. Therefore, in achieving sustainability, reliability, and stability of any electricity grid, DSM is indispensable and unavoidable [11].

DSM refers to programs seeking to modify consumers' energy consumption through efficiency improvements or shifting loads on the customer side of the electric grid [13]. They encourage customers to consume less during peak hours and more in off-peak hours through financial incentives and behavioural change [10,14]. DSM can be categorized as energy efficiency and demand response (DR), as illustrated in Figure 1. Energy efficiency programs encourage users to consume less energy while enjoying the same service level through energy-efficient devices [15]. DR programs target the demand profile, whereby customers may either reduce their consumption during peak periods without altering the off-peak periods [16][17][18] or shift some loads from peak hours to off-peak hours due to the high prices imposed during peak periods [13,16]. DR consists of intended electricity consumption pattern adjustments by end-users aiming at altering the total energy consumption or the timing and level of a sudden rise in demand [16,19]. It has three main categories: time of use (ToU), direct load control, and load shifting. In the ToU method there are no fixed electricity prices; instead, different prices are set for different usage times, and high fees are imposed during peak loads. The electricity price is determined by how much and when electricity is used [20]. In the direct load control method, utilities have access to end-user customers and limit customers' electricity usage by switching off some of the appliances during peak demand. Most of the appliances suitable for direct load control are heavy loads such as refrigerators, air conditioners, and cooling devices [21]. Load shifting moves loads from peak to off-peak times to allow an even distribution of the load.
This method does not change the overall energy consumed; instead, it keeps demand and supply at the required level to increase system efficiency. Apart from the mentioned programs, DSM makes use of various methods such as electricity tariffs [22], incentives [23], penalties [24], power-saving technologies, and government policies [25][26][27]. Incentive mechanisms are programs wherein a customer is paid a monetary amount for his or her ability to reduce load during peak hours. This amount is separate from the standard electricity rates applied, and it can be delivered as bill rebates or cash compensation [28,29]. The pricing mechanism uses different prices, apart from ToU, to reveal the cost and value of electricity within a given hour. It is divided into two: real-time pricing (RTP) and critical peak pricing (CPP). RTP applies a varying price throughout the day by dividing the day into slots, while CPP is when the electricity market charges differently depending on the supply cost of electricity [30]. DSM and DR can be used interchangeably. Traditionally, utilities have been managing demand from their side by reducing transmission losses and increasing generation capacities, without success [31]. Through DR programs, successful energy management is possible; however, when usage patterns and customers' behaviours are taken into account, more realistic results can be attained, since these are crucial for successful load management and decision making [32].

Related Works and Contribution

Referring to the available literature, different researchers have implemented energy management in the electricity market using fuzzy logic. Yuan et al. (2018) proposed an energy management strategy based on fuzzy logic to adequately distribute the generated energy depending on the load demanded. The method presented an improvement in the overall performance of the system [9]. On the other hand, Nehrir et al. (1999) implemented a customer-interactive DSM strategy using fuzzy logic to shift the water heater's usage pattern [33]. The results offered the possibility of flattening the load profile through fuzzy rules. Similarly, through a fuzzy controller, three parameters were taken into account: cost, demand, and comfort. The benefit of renewables was analyzed to show how they impact cost and energy saving. Results showed an improvement of the load profile through the shutting down of loads during peak hours [34]. Also, the application of fuzzy logic has been realized in hybrid microgrids for energy management and supervisory control [35][36][37][38]. Multiagent energy management based on fuzzy logic was introduced, aiming at managing energy in a stand-alone renewable energy system [39]. Ravibabu et al. (2009) developed a fuzzy-based DSM controller to reduce peaks by prioritizing vital loads during peak hours [1]. It did not take care of the customers' comfort, and the research was limited to a single house. Another implementation of a fuzzy-based DSM was done for the residential customers of a grid-connected microgrid by assuming that both the loads and the generation are non-controllable. The simulation results of 25 rules presented an improvement in the profile [40]. Furthermore, a DSM for residential customers was employed using a multiscale and multilayer method. Each household was equipped with its own control system to limit loads during peak periods [41]. A multiagent-based DR was proposed based on a multistep hierarchical algorithm for a multi-microgrid system to ensure its reliability and cost reduction.
The method proved to be more cost-effective compared to conventional energy management systems [42]. A dynamic pricing DR was illustrated using reinforcement learning, which took care of the service provider's benefit and the customers' cost reduction. Simulation results promised a balanced and reliable power system with reduced energy and operation cost [43]. To improve grid resilience and provide a cost-optimal solution for system operators, both mixed-integer linear and nonlinear approximations were used. The MINLP problem is too large for nonlinear and nonconvex networks to provide efficient solutions [44]. On the other hand, providing an optimal solution using mixed-integer linear programming is time-consuming and slow, especially for larger datasets [45]. Most of the energy management systems described above rely on traditional methods such as abstract and deterministic rules, which mainly suffer from disadvantages like their inability to guarantee optimal results, especially when there are fluctuations in the variables. Besides, abstract works usually approximate reality and rely on the developer's experience, hence they are sometimes unrealistic [43]. The significant advantage of fuzzy logic over other methods is that no mathematical modelling is needed for the controller design. The inputs and outputs of a fuzzy controller are mapped with membership functions, and final rules are set to obtain the desired outcome [46]. Also, upon an unexpected change in the system parameters, no modifications are required in the controller, and since the outputs depend on the effects of the inputs, the same rule base can still be used [47]. Generally speaking, many DSM methods based on fuzzy logic have been implemented; however, most of these studies did not take care of the customers' usage patterns and their load types. Also, demand response programs in electricity markets have predominantly been conducted in developed countries compared to developing ones [10,48,49]. The obtained results may not be applicable to most developing countries' load profiles due to less flexible loads and low per capita energy consumption [2]. Moreover, the behavioural point of view in developing countries with many isolated microgrids is different from that in well-established countries with reliable power. Therefore, the present study aims at bridging this gap by developing a DSM based on fuzzy logic to control different clusters of customers independently, to enhance customer participation, since energy management follows how much and when they consume electricity. In this study, clustering and load limiting reduce the burden on low energy users, as they do not influence peaks in the system. The study utilized data collected from the Ngurudoto solar microgrid, Arusha, Tanzania. The proposed method demonstrates the potential of balancing energy in the microgrid based on load limiting.
The new contributions of this work can be listed as follows:
(i) A new fuzzy logic-based DSM approach has been proposed to match the real situation of microgrid customers in the developing world, where there is a vast difference in consumption.
(ii) A clustering approach has been used to facilitate customer prioritization; hence, low energy users do not suffer the energy management burden.
(iii) The adopted method can enhance or trigger the adoption of DSM, since the benefit is distributed depending on the potential of energy usage.
(iv) The customer usage pattern has been well established, which helps the power provider and customers properly plan for energy, comfort, and cost saving.

Data Collection. Consumption data were collected from a solar battery microgrid called Ngurudoto, based in Arusha, Tanzania. It has a 7.5 kW capacity with wireless and communication modules for data communication, as shown in Figure 2. Electricity consumption was recorded using smart meters installed in each house. For each smart meter, real-time data were recorded using a combination of remote sensing devices, a data logger, and a remote PC. Collected demand data were analyzed using Microsoft Excel, Python, and Matlab. Fuzzy logic was implemented using MATLAB R2018a software.

K-Means Clustering. K-means clustering is an unsupervised machine learning algorithm in which grouping is done based on similarities and differences. It requires the data points and the number of clusters K as inputs. The methodology involved in K-means clustering is summarized in Figure 3 and explained as follows:
(i) The first step involves randomly initializing the value of K, which represents the number of clusters to be formed. The optimal value of K can be calculated using the elbow method.
(ii) The distance between the data points and the initialized cluster centres is calculated using the sum of squared distances.
(iii) Depending on the minimum distance obtained, data points are assigned to the nearest cluster centres.
(iv) A new centroid is calculated from the group of data points assigned to the same cluster, i.e., μ = (sum of the data points in a certain cluster) / (total number of data points in that cluster).
(v) The mean is now repositioned as the new centroid.
(vi) This process is repeated for all the clusters until there is no change in the cluster centroids, which then represent the clusters formed.

Case Study. The main objective of this study was to design a DSM model for load management during peak hours. The microgrid located in Arusha, Tanzania, was considered as a case study, whereby the connected houses were grouped according to their usage patterns through clustering. The loads were also grouped into two groups, vital and nonvital, as illustrated in Table 1, to reflect their importance. By applying DSM based on load priority and direct load control techniques, consumption was limited by cutting off nonvital loads during peak hours in order to supply power to the vital loads. Figure 4 represents the proposed model for controlling the loads according to their priority during peak hours. F1, F2, F3, and F4 are the fuses connected to the houses' nonvital loads in the respective clusters. The clusters suitable for energy management were selected according to the energy usage resulting from the load profile classification. This selection helped prevent low energy users from bearing the burden of the high cost of energy management. The fuses receive a control signal from the fuzzy controller and trip off during peak hours when the load exceeds the set limit (a simplified sketch of this decision logic is shown below).
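The following is a crisp, simplified rendering of the decision that the controller sends to the fuses: trip the nonvital loads of a managed cluster only when it is a peak period and the measured current exceeds the set limit. The peak-hour windows are assumptions for illustration; the 7.5 A limit used in the example is the cluster 3 fuse rating reported later in the results.

```python
# Crisp simplification (an assumption, not the authors' exact fuzzy rules) of
# the control signal sent to the cluster fuses.
PEAK_HOURS = set(range(7, 10)) | set(range(18, 22))   # hypothetical peak windows

def fuse_signal(hour, measured_current_a, current_limit_a):
    """Return 'all' to keep all loads connected, or 'vital' to trip nonvital loads."""
    is_peak = hour in PEAK_HOURS
    if is_peak and measured_current_a > current_limit_a:
        return "vital"          # trip nonvital loads via the cluster fuse
    return "all"                # keep both vital and nonvital loads connected

print(fuse_signal(hour=19, measured_current_a=7.9, current_limit_a=7.5))  # -> 'vital'
print(fuse_signal(hour=11, measured_current_a=7.9, current_limit_a=7.5))  # -> 'all'
```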
This mechanism allows shutting down only the nonvital loads, leaving the vital loads connected. The selection of loads is based on customer priority. The fuzzy logic controller, being the central part of the circuit, has two inputs: time and a comparator. The comparator compares the reference (set maximum) current with the current flowing, while the timer on the time input checks whether it is a peak or off-peak period. The outputs of the two are then fed to the fuzzy controller. The controller's output sends the desired control signal to the connected houses to limit the power consumption.

Fuzzy Logic Controller. The fuzzy controller is an artificial intelligence system that can be used to reduce, control, or modify power consumption. It works so that only consumers' priority loads are connected during peak hours, while within the off-peak hours loads can be increased without exceeding the limit. The general working mechanism of a fuzzy controller contains the following components:
(i) Input
(ii) Fuzzification
(iii) Inference engine (rules)
(iv) Defuzzification
(v) Output

Fuzzy Membership Functions. For all the input and output variables, fuzzy membership functions are essential for defining the linguistic rules that govern their relationships. In this case, the trapezoidal membership function is used for the input time, as shown in Figure 5. Figure 6 shows zmf and smf, the most appropriate membership functions for the feedback or comparator. For the output signals, constant and linear membership functions were used, as shown in Figure 7. The most crucial task in the design of the fuzzy controller is the development of the fuzzy rules. The number of rules always depends on the number of membership functions considered for both the input and output blocks. In this study, the fuzzy controller is designed to limit power usage during peak hours by switching off some of the loads depending on the priority set. Several rules are used, a few of them being mentioned below:
(i) If time is off-peak (am) and feedback is < set current limit (A), then cluster 3 is all, cluster 4 is all, and cluster 8 is all
(ii) If time is peak (am) and feedback is < set current limit (A), then cluster 3 is all, cluster 4 is all, and cluster 8 is all
(iii) If time is off-peak (pm) and feedback is < set current limit (A), then cluster 3 is all, cluster 4 is all, and cluster 8 is all
(iv) If time is low and feedback is medium, then cluster 3 is no change, cluster 4 is no change, and cluster 8 is no change
(v) If time is peak (am) and feedback is > set current limit (A), then cluster 3 is vital, cluster 4 is vital, and cluster 8 is vital
(vi) If time is off-peak (am) and feedback is none, then cluster 3 is all, cluster 4 is all, and cluster 8 is all
(vii) If time is peak (pm) and feedback is > set current limit (A), then cluster 3 is vital, cluster 4 is vital, and cluster 8 is vital
(viii) If time is off-peak (pm) and feedback is none, then cluster 3 is all, cluster 4 is all, and cluster 8 is all

Case Study. This study proposed fuzzy logic-based load management for an isolated solar microgrid. Twenty-two residential houses were considered, and the K-means clustering algorithm was performed on the collected real-time electricity consumption data using the Scikit-Learn Python package [50]. Clustering was done to identify customers' typical usage patterns, grouping them into respective clusters and hence controlling their demand according to how they consume power.
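A minimal sketch of the clustering step follows, using the scikit-learn package named in the text. The simulated profiles, their layout (one row per house, one column per hour) and the range of K values tried are assumptions; the study's elbow method arrived at K = 10.

```python
# Minimal sketch of K-means clustering with the elbow method (assumed data).
import numpy as np
from sklearn.cluster import KMeans

# Simulated stand-in for the smart-meter data: 22 houses, 24 hourly values each
rng = np.random.default_rng(0)
daily_profiles = rng.gamma(shape=2.0, scale=50.0, size=(22, 24))

# Elbow method: inertia (within-cluster sum of squared distances) for each K
inertias = []
for k in range(1, 15):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(daily_profiles)
    inertias.append(km.inertia_)
# The elbow (the first point where the decrease flattens) suggests K.

labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(daily_profiles)
for cluster in np.unique(labels):
    houses = np.flatnonzero(labels == cluster) + 1
    print(f"cluster {cluster + 1}: houses {houses.tolist()}")
```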
Load limiting and load priority techniques were used for the individual clusters to achieve a flatter load profile. Clustering with K-means gave ten clusters, as per Figure 8, which was obtained using the elbow method that determines the number of clusters from the sample data provided. The result is represented by the elbow point, the first turning point of the graph where only a small decrease in the error sum is observed. The appropriate K value from the data used is ten, as indicated in Figure 8. Therefore, the number of clusters found is ten. From the cluster results in Figure 9, the houses have been grouped based on similar usage patterns. All the clusters demonstrated variations in demand, which indicated that there is a need for DSM. Apart from the usage patterns, clusters 3, 4, and 10 represented households falling in the highest-use cluster. In a solar-powered microgrid, these customers may be regarded as the potential users who mostly decide or influence the system's overall peak demand. On the other hand, clusters 6 and 9 consist of the lowest electricity users; one may assume that their primary usage is for lighting and phone charging. In the context of DSM, these customers should not be burdened with either high prices or the switching off of some loads, since they do not influence peaks in the system. The remaining clusters, 1, 2, 5, 7, and 8, represent medium-use customers. This study's clustering results have shown great potential for a successful DSM program due to the proper load balancing technique. Clusters help identify the customers responsible for energy management by assessing their degree of peak generation. The importance of the clustering results can be supported by Hannes et al. (2019), who explained that customers will strongly participate in any program only if they generate profit from it [51]. Figure 10 shows the composition of houses in each cluster. Cluster 2 represents the group with the most customers, while clusters 4, 6, 8, 9, and 10 each have only one customer, indicating that they have unique usage patterns. From the results obtained, one individual house for each cluster of interest was chosen as a case study for DSM. The choice was made based on the highest consumer in each cluster to cater for the worst-case scenario. The selection is shown in Table 2. The grouped customers were then characterized by both the pattern and the magnitude of consumption. By using the collected demographic data, the different types of customers falling into each consumption category and their load profiles were characterized. Figure 11 shows the relationship between different demographic attributes and power usage for all the houses. Results show that the highest energy consumer is house 14, falling under cluster 3. High-consumption customers also tend to have indoor businesses, as exemplified by house number 14, which runs a bar business. These results are further supported by the authors in ref. [52], who analyzed customers' consumption patterns in Tanzanian rural microgrids. From the demographic results, it is clear that the number of household members and their indoor businesses influence electricity usage. Medium-use customers, represented by house number 10 falling under cluster 10, show considerable electricity usage. According to the survey data, one may argue that the electricity consumption is due to the many students who claimed to be using electricity for studying purposes. This argument is in line with the study by Elizabeth et al.
(2020), who suggested that the presence of medium-energy households was due to the presence of school children [53]. The same applies to house number 8, which seemed to have a significant consumption since the household owned a small shop. House numbers 1, 9, 19, 20, and 21 characterized the small users, since they contain either a single family member or a larger number of older adults who do not consume a considerable amount of energy due to their lifestyles. The present study tries to accommodate the low energy users by proposing a DSM mechanism which does not burden them, since their electricity usage is not significant in the system peak generation. The surface viewer in Figure 12 shows the three-dimensional plot of three parameters: time, power, and the cluster power output after fuzzy logic DSM. The graph shows the managed clusters 3, 4, and 8. In cluster 3, the off-peak dates, represented by the days before day 14, were uncontrolled since the set power limit was not reached. This situation allows both vital and nonvital loads to be switched on, similar to how Nehrir et al. (1999) explained the water heater operation scheduled during off-peak hours [33]. Peak days in the range of 18-22 marked the concentration area, whereby the fuzzy logic controller helped limit the power usage to 1300 W using a 7.5 A fuse. The yellow colour shows the shaved or clipped area after DSM, which is the operator's target. For cluster 4, the allowable load was set at 800 W during peak hours. Load limiting was done via a 4 A fuse, which shut down the excess loads observed on the 21st day. Likewise, in cluster 8, all the loads were limited to 600 W during peak hours. Excessive loads found in the range of days 24-26 were shaved to 600 W, as depicted in the diagram. Figures 13 and 14 show the rule viewer of the outputs for clusters 3 and 4, respectively. The rule viewer is used to show the effectiveness of the output. From these diagrams, two points were taken into consideration: off-peak and peak periods. For cluster 3, the sixth day, shown in Figure 13, represents an off-peak point. A feedback current of 2.99 A, corresponding to a power of 657.8 W, gave an output power of 657 W after fuzzification. The results show almost the same energy before and after fuzzification, indicating that the loads were not managed or shifted. In the same figure, a peak point is represented by the 17th day. The power observed was 1738 W single-phase, which gave a feedback current of 7.9 A to the controller; hence, the power was limited to 1000 W by switching off nonvital loads. Similarly, cluster 4, represented in Figure 14, shows the results during off-peak and peak periods. Load curtailment is observed during peak hours, while nothing is done during off-peak hours. The fuzzy controller achieved this through the feedback current received. Figures 15-18 represent the monthly load profiles before and after implementing DSM for clusters 3, 4, 8, and 10, respectively. The results show that, by implementing the fuzzy logic-based energy management scheme, the generated power can be fully utilized except during peak power periods, as the author depicted in ref. [9]. Before optimization, the load demand is higher than the set limits. After applying the fuzzy logic controller, a load profile closer to flat is seen, which helps avoid system collapse and blackouts, especially during peak hours. The profiles clearly show that each cluster's peaks have been restricted to match the corresponding limits set.
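A small post-processing sketch (not from the paper) shows how the before/after profiles discussed here can be summarized per cluster: peak reduction, remaining limit violations, and the total shaved power. The profile samples are hypothetical; the 1300 W limit and the 1738 W → 1000 W example come from the cluster 3 figures described above.

```python
# Illustrative summary of a cluster's load profile before and after DSM.
import numpy as np

def dsm_summary(before_w, after_w, limit_w):
    """before_w, after_w: power demand (W) per time step; limit_w: peak-hour limit."""
    before_w, after_w = np.asarray(before_w, float), np.asarray(after_w, float)
    return {
        "peak_before_W": float(before_w.max()),
        "peak_after_W": float(after_w.max()),
        "violations_after": int((after_w > limit_w).sum()),
        "shaved_W": float((before_w - after_w).clip(min=0).sum()),
    }

profile_before = [650, 900, 1738, 1500, 700]      # hypothetical samples
profile_after  = [650, 900, 1000, 1000, 700]
print(dsm_summary(profile_before, profile_after, limit_w=1300))
```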
Also, the overall load profile before and after implementing fuzzy logic DSM is shown in Figure 19. There is a significant reduction in demand during peak periods due to the clusters that have taken part in DSM. These results show more improvement in the design and implementation than that reported by Ravibabu et al. [1], since many houses have been included through the clustering approach. Also, with this approach, small electricity consumers are not taken into consideration until they qualify for DSM. Figure 20 shows the energy savings for different days in a month and the overall energy saving per month. The graph shows a power saving of almost 3700 W per month. The amount of saving achieved is very significant in the DSM context, and it assures system stability in case of peak loads. Different energy savings are also observed for different days due to the variable nature of demand [54]. Moreover, on the 3rd, 5th, 7th, 11th, 13th, 14th, 16th, 19th, 22nd, and 24th days, the savings are approximately close to zero, and one may argue that no load was either shifted or switched off. In this scenario, the argument can be that most of the houses were in the off-peak range, hence retaining the same energy before and after DSM. Likewise, the highest energy saving was achieved on the 27th day, and one may suppose that several nonvital loads were shut down to avoid overloading the system. It is clear from the obtained results that, with the fuzzy logic-based energy management strategy, demand profiles can be levelled and the available generation can be optimized appropriately to meet consumers' needs. Since the proposed method tried to operate the loads partly, the system's efficiency can be improved. The above observation is in line with ref. [55] and is further supported by ref. [33]. On the other hand, since both the customers' load demand and the power generated from solar microgrids are uncertain and stochastic, the fuzzy logic controller has proven to be a more efficient and effective method compared to the mathematical models discussed by the authors in ref. [9].

Conclusion

The gap between demand and supply, due to the increase in electrical appliances leading to a shortage of electricity during peak usage, is pronounced. DSM programs offer an enhanced chance to exploit consumption diversity and accordingly reduce the peak demand for electricity. Shifting loads from peak to off-peak times or switching off some loads during peak-hour periods can be a challenge to specific customers; hence, exploiting their usage patterns through clustering leads to more flexible DSM techniques. In this work, fuzzy logic-based DSM programs are used as a load limiter to control loads during peak hours. This control mechanism helps in the proper utilization of the available power at a given time by giving priority to vital loads depending on the preference set by the customers. Moreover, the proposed program favours all kinds of customers, hence encouraging their participation in DSM. Therefore, this study is essential to guarantee energy saving and customer satisfaction. It can be further extended to account for cases where consumers are given incentives for their willingness to participate in the DSM program.

Data Availability

The data used to support the findings of this study will be available upon request.
Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported.

Authors' Contributions

This paper was a collaborative effort between the authors. The authors contributed collectively from the idea development to the manuscript preparation.
Nutritional status of adolescent girls in a rural area of Bangladesh: A cross sectional study

The improvement of adolescent nutritional status may help address the reduction of all forms of malnutrition in Bangladesh. This is because, at this stage, adolescents experience a growth spurt, thus increasing the need for most nutrients required for growth and reproductive health. The objective of this research was to assess the nutritional status of adolescent girls in rural areas of Bangladesh and to find out the associated factors that affect nutritional status. A cross sectional study was carried out among 106 adolescent girls of Nobabpur village in Comilla district. A questionnaire was developed to obtain demographic information and food intake patterns, and anthropometric measures such as weight and height were taken with measuring instruments. About 80% were found normal according to BMI, while about 13% of the adolescent girls were malnourished, below the cut-off value of 18.5. Place of residence, education of the adolescent girls, their family expenditure on food and improper knowledge of food and nutrition were identified as underlying causes. The nutritional profile of adolescent girls can be improved by implementing effective nutrition education programs, providing supplementary food, facilitating primary health care programs and creating awareness of nutritional knowledge. Severely malnourished adolescent girls in the selected area should be identified as early as possible and brought under a supplementary feeding program.

Introduction

Nutritional status is defined as the condition of the body in those respects influenced by the diet; the levels of nutrients in the human body and the ability of those levels to maintain normal metabolic integrity (Saxena and Saxena, 2009). Essential nutrients must be provided to the body by the diet; otherwise, their inadequacy causes health problems such as malnutrition. According to WHO (2006), adolescence is the period in human growth and development that occurs after childhood and before adulthood, from ages 10 to 19. Biological processes drive many aspects of this growth and development, with the onset of puberty marking the passage from childhood to adolescence (Mulugeta et al., 2009). Growth during adolescence is faster than at any other time in an individual's life except the first year. Good nutrition during adolescence is critical to cover the deficits suffered during childhood and should include nutrients required to meet the demands of physical and cognitive growth and development, provide adequate stores of energy for illnesses and pregnancy, and prevent adult onset of nutrition-related diseases (WHO, 2006).

Like other South-Asian countries, Bangladesh has shown deficiencies in the intake of all nutrients, particularly iron, calcium, vitamin A and vitamin C. The main reasons are the low educational level of parents and low family income. Dietary intake with respect to adequate availability of food in terms of quantity and quality (particularly, the mean caloric intake), the ability to digest, absorb and utilize food, and the social discrimination against girls can greatly affect the adequate nutrition of adolescents. Many boys and girls enter adolescence undernourished, making them more vulnerable to disease and early death. Conversely, overweight and obesity, another form of malnutrition with serious health consequences, are increasing among young people in both low- and high-income countries (Cole et al., 2007).
Adequate nutrition and healthy eating and physical exercise habits at this age are foundations for good health in adulthood. Adolescents are the best human resources. But for many years, their health has been neglected because they were considered to be less vulnerable to disease than young children or the very old. Their health attracted global attention only in the last decade. As adolescents have a low prevalence of infection compared to under-five children, and of chronic disease compared to ageing people, they have generally been given little health and nutrition attention, except for reproductive health concerns (Kalhan et al., 2010). Malnourished adolescent girls who have babies at a young age are likely to experience, and will be less able to withstand, complications because the body has not yet reached maturity. Maternal mortality is higher in anemic women. Even when they survive, poorly nourished adolescent mothers are more likely to give birth to low birth-weight babies, perpetuating a cycle of health problems which pass from one generation to the next (Kumar, 2012). Hence, it is essential to assess the nutritional status of adolescent girls, especially in developing countries like Bangladesh. The objective of the present work is to assess the nutritional status of adolescent girls in rural areas of Bangladesh and to find out the associated factors that affect nutritional status.

Subjects and study area

To assess, analyze and evaluate the lifestyle, health and nutritional profile of adolescent girls, anthropometric, socio-economic, food intake pattern and nutrition knowledge data were collected from adolescent girls of the target population. For this purpose, 106 adolescent girls were selected randomly for this study. The study was carried out at Nobabpur of Comilla, to find out the lifestyle, health and nutritional status of adolescent girls of that area.

Study design

The study was cross sectional in nature. The data were collected at one point of time from samples selected to describe the situation of the nutritional status of adolescent girls.

Collection of demographic data

A questionnaire was developed to obtain relevant information on age, sex, weight and socio-economic history through interviews of the adolescent girls. A structured interview was used to determine the food intake pattern among the adolescent girls.

Anthropometric data

To assess the nutritional status, anthropometric measures such as weight, height and BMI were taken. A lever balance (Detecto-Medic, Detecto scales, USA) was used to record body weight (Anand et al., 1999). Body weight was recorded to the nearest 0.5 kg, barefoot and with minimum clothing. Height of the subjects was measured with a standard scale to the nearest 1 cm while standing upright without assistance, with bare heels close together, legs straight, arms at the sides, shoulders relaxed and looking straight ahead. During the measurement of height, the person was asked to take a deep breath, and the height at maximum inspiration was recorded.

Calculation of Body mass index (BMI)

Body Mass Index (BMI) is an anthropometric index of weight and height that is defined as body weight in kilograms divided by height in meters squared. The BMI of the adolescent girls was calculated using the following equation:

Body mass index (BMI) = weight in kilograms (kg) / height in meters squared (m²)

BMI is the commonly accepted index for classifying adiposity in adults, and it is recommended for use with children and adolescents (Ulijaszek and Kerr, 1999).
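As a small illustration of the formula above (not part of the original study), the snippet below computes BMI and applies the cut-offs referred to in this paper: below 18.5 for malnourished and 25-29.99 for risk of overweight; the threshold of 30 for obesity is the standard adult cut-off and is an added assumption.

```python
# Illustrative BMI helper (hypothetical example values).
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 18.5:
        return "malnourished / underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "at risk for overweight"
    return "obese"          # standard adult cut-off, assumed here

example = bmi(45.0, 1.52)
print(round(example, 1), bmi_category(example))
```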
Statistical analysis

The obtained data were stored in Microsoft Excel 2007 and then exported into SPSS Version 17.0 software (SPSS Inc., USA) for statistical analysis. The anthropometric and food habit data were analyzed following the above statistical procedure to assess the nutritional status of the girls.

Results and discussion

A cross sectional study was carried out among households with adolescent girls, selected in a village named Nobabpur. The educational status of the adolescent girls is presented in Table-I. Figure-1 shows the percentage expenditure pattern of the studied households. It reflects that only 2.76% of the cost is for housing, 14.95% of the expense is made for education, medicine and clothing cost almost equal shares, but food is the largest sector of expenditure (53.73%). Table-II shows the adolescent girls' nutrition knowledge, such as the sufficiency of the vegetables taken, the habit of taking vitamin C rich foods, the concept of iron rich foods, eating iron rich foods, knowledge about iron deficiency, knowledge about the occurrence of symptoms due to iron deficiency, sufferings experienced from the occurrence of vitamin A deficiency diseases, having the symptoms of vitamin C deficiency disease, knowledge about anemia and experienced sufferings from anemia.

Table-III shows that 66.9% of households have no idea about a balanced diet. An imbalanced intake of food would lead to a deficit or excess of nutrient intake for the 66.9% of households who have no idea about a balanced diet. Although 56.6% of households were not familiar with iodine rich foods, 86.8% of families used iodized salt. With greater emphasis on the health of women in general and the girl child in particular, the picture of nutritional status seen in the rural girls of Bangladesh is alarming, though not surprising (Vasanthi et al., 1994). The poor nutritional status of adolescents, especially girls, has important implications in terms of physical work capacity and adverse reproductive outcomes (Haboubi and Shaikh, 2009). Adolescents (aged 10 to 19 years) have specific health and development needs, and many face challenges that hinder their wellbeing. It has been reflected in various studies and surveys done over 1991 to 2000 by different national and international bodies that the nutrition and health situation, particularly of women, children and adolescent girls, is grave in this country (WHO, 2006).

The main cause of poor nutritional status is the lack of knowledge of nutrition among the girls. Although a significant rate of illiteracy does not prevail, 33.9% of the girls had dropped out of primary school, whereas 47.2% had entered higher study.

Table-III. Perception of adolescent girls about balanced diet and different common diseases related to nutrition

Attaining a degree from a university or college was very rare among the adolescent girls (only 12.3%). Level of education has a significant independent effect on the nutritional status of adolescent girls. The present study showed a tendency towards an increase in the nutritional status of adolescent girls with an increase in their level of education. This may be due to a relatively better understanding of public health knowledge to improve the nutritional status of adolescent girls (Mulugeta et al., 2009).
Another cause is the expenditure pattern of the studied households where these adolescent girls belong. The household cost for food was the largest share among the basic needs, amounting to 53.73% of total expenditure. This is one of the factors which affect the nutritional status of adolescent girls. The households also bought clothing, costing 9% of their total expenditure. Treatment and curing of disease accounted for 10% of expenditure. The present study observed a tendency towards an increase in the nutritional status of adolescent girls with an increase in the family expenditure on food.

The adolescent girls' nutrition knowledge and experienced sufferings can be explained as follows: 58.5% of the adolescent girls took sufficient vegetables and almost 78.3% took citrus foods regularly. Vegetables are a good source of iron according to 37.7% of the adolescent girls. A significant proportion of the adolescent girls was not concerned about iron deficiency anaemia (IDA), vitamin A deficiency disorders, iodine deficiency disorders, and vitamin C deficiency disorders. 61.3% of the adolescent girls did not know about anemia and were unable to confirm their position in the at-risk group for IDA, and among them 51.8% had experienced an anemic condition with various symptoms. Adolescent girls who have a relatively better understanding of public health and nutritional knowledge show an improved pattern of nutritional status.

Most of the adolescent girls, about 80%, are in the normal range, whereas only 13% were malnourished. Research has shown that better-nourished girls have higher pre-menarche growth velocities and reach menarche earlier than undernourished girls, who grow more slowly but for a longer period, as menarche is delayed (WHO, 2006). Because underweight girls grow for a longer duration, they may not finish growing before their first pregnancy. Adolescents with a BMI in the range of 25-29.99 are at risk for overweight. Weight gain is the result of a positive energy balance (consuming more energy than is expended). Energy expenditure, as assessed through levels of physical activity, declines in children as they reach adolescence, particularly in adolescent girls (Naidu and Rao, 1994). There is evidence that children and adolescents of rural families are more overweight than in the past, possibly because of decreased physical activity, a sedentary lifestyle, altered eating patterns and the increased fat content of the diet. The increase in sedentary activities, such as television viewing and computer games, is suspected to be responsible for the decline in physical activity levels (WHO, 2006).

That adolescent girls have no access to resources and power, improper personal hygiene practices, poor knowledge about food and health, lower family income, a high prevalence of stunting, wasting, underweight and thinness, and vulnerability to domestic abuse and violence is not unknown in a third world country like Bangladesh. Emphasis should be placed on improving their situation without any subterfuge, through preserving their rights, proper health and care, the necessary preventives and curatives, an enriched balanced diet and ensuring sustainable environmental health, which demands a paramount allocation of resources towards adolescent development to secure a glorious future for all.
Conclusion

The present study reveals that the nutritional status of adolescent girls in a rural area of Bangladesh is at a satisfactory level, with few malnourished adolescent girls. Nutritional deprivation affects almost all growth parameters and final adult body size, resulting in thinness and stunting. However, the nutritional status of both boys and girls improved with age, showing that the effect of malnutrition is more pronounced at the time of peak growth. The nutritional profile of adolescent girls can be improved by implementing effective nutrition education programs, providing supplementary food, facilitating primary health care programs and creating awareness of nutritional knowledge. The small number of severely malnourished adolescent girls in the selected area should be identified as early as possible and brought under a supplementary feeding program to improve their health status.

Figure-2 shows BMI, formerly called the Quetelet index, a measure for indicating nutritional status in adolescent girls. Most of the adolescent girls, about 80%, were in the normal range and 13% were malnourished in total.

Fig. 2. Nutritional status of adolescent girls based on BMI (body mass index)
2018-12-06T21:47:19.310Z
2017-10-03T00:00:00.000
{ "year": 2017, "sha1": "0d42fe81c6e21c3d57d81304625dfae422f3e325", "oa_license": "CCBYNC", "oa_url": "https://www.banglajol.info/index.php/BJSIR/article/download/34158/23028", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0d42fe81c6e21c3d57d81304625dfae422f3e325", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10729365
pes2o/s2orc
v3-fos-license
Awareness of Stroke Risk after TIA in Swiss General Practitioners and Hospital Physicians Background Transient ischemic attacks (TIA) are stroke warning signs and emergency situations, and, if immediately investigated, doctors can intervene to prevent strokes. Nevertheless, many patients delay going to the doctor, and doctors might delay urgently needed investigations and preventative treatments. We set out to determine how much general practitioners (GPs) and hospital physicians (HPs) knew about stroke risk after TIA, and to measure their referral rates. Methods We used a structured questionnaire to ask GPs and HPs in the catchment area of the University Hospital of Bern to estimate a patient’s risk of stroke after TIA. We also assessed their referral behavior. We then statistically analysed their reasons for deciding not to immediately refer patients. Results Of the 1545 physicians, 40% (614) returned the survey. Of these, 75% (457) overestimated stroke risk within 24 hours, and 40% (245) overestimated risk within 3 months after TIA. Only 9% (53) underestimated stroke risk within 24 hours and 26% (158) underestimated risk within 3 months; 78% (473) of physicians overestimated the amount that carotid endarterectomy reduces stroke risk; 93% (543) would rigorously investigate the cause of a TIA, but only 38% (229) would refer TIA patients for urgent investigations “very often”. Physicians most commonly gave these reasons for not making emergency referrals: patient’s advanced age; patient’s preference; patient was multimorbid; and, patient needed long-term care. Conclusions Although physicians overestimate stroke risk after TIA, their rate of emergency referral is modest, mainly because they tend not to refer multimorbid and elderly patients at the appropriate rate. Since old and frail patients benefit from urgent investigations and treatment after TIA as much as younger patients, future educational campaigns should focus on the importance of emergency evaluations for all TIA patients. Introduction Transient ischemic attacks (TIA) are warning signs of stroke and require emergency treatment [1], which can prevent subsequent strokes in patients of all ages, even if they are comorbid [2]. European and American guidelines (ESO, ASA/AHA) both recommend TIA be immediately investigated, but many patients delay going to the doctor or ER, and physicians may not realize patients must be immediately referred, tested, or treated [3,4]. Those with TIA often make first contact with family members and general practitioners (GPs), rather than with stroke specialists in emergency rooms (ER) or TIA clinics [5]. Earlier surveys showed that primary care physicians found it hard to diagnose and manage TIAs, and that they treated TIA patients less urgently than stroke victims [6][7][8][9][10]; almost all of these studies were based on small samples. Recently, GPs and neurologists were the target of several articles in Swiss medical journals that described how to manage patients with TIAs [11] [12] [13] and the problem of estimating risk after TIA was commonly discussed at local, national and international conferences. However, these measures may not have effectively alerted GPs to stroke risk after TIA, and their willingness to refer TIA patients for emergency evaluation has not yet been assessed. We hypothesized that GPs would underestimate stroke risk after TIA, and that underestimates would largely account for failure to schedule tests or refer patients for emergency evaluation. 
Our goal was to determine how much Swiss GPs and hospital physicians (HPs) knew about stroke risk after TIA. We used a structured questionnaire to find out if GPs refer patients with suspected TIA for emergency evaluation, and statistically analysed physicians' reasons for not immediately referring patients. Study area and population We invited the participation of all GPs and HPs (specialists in general internal medicine) in the populations of Bern, Lucerne, Solothurn, Obwalden and Nidwalden cantons, and the Germanspeaking areas of Fribourg and Wallis. In 2012, there were about 1.8 million people in our catchment area. Since there is no national registry of GPs, to identify GPs, we searched the registries of the national association of GPs (Schweizerische Gesellschaft für Allgemeinmedizin, SGAM) and the occupational union (Hausärzte Schweiz, MFE). SGAM and MFE gave us access to their database so we could contact GPs. We searched the institutional websites of all hospitals within the catchment area to identify HPs. Physicians provided their consent to participant implicit by replying the survey. The study was performed according to the ethical guidelines of the canton of Bern. An approval by an ethic committee was not required since data were non-medical and collected anonymously. The local ethic committee of Bern issued a waiver. Processes and Outcomes In 2013, we sent an email invitation to all GPs and HPs (n = 1545) in the catchment area to participate in an online survey. The survey was hosted on the SurveyMonkey website (www. surveymonkey.com, Palo Alto, CA, USA), which uses IP-addresses to prevent duplicate replies. We sent non-responders two email reminders. If they still did not answer, we sent them a printed copy of the survey by postal mail. The survey (S1 Appendix) included three clinical vignettes with typical clinical pictures of patients with TIA. Vignette 1 described a typical clinical case of TIA and asked physicians to estimate, on a Likert scale, a patient's risk of a subsequent stroke within 24 hours, and within 3 months. To make it easier for physicians to estimate risk, we listed the 1-year stroke risk of an 85-year-old man. We also asked participants how confident they were of their risk assessment. In Vignette 2, physicians were asked to select their next step after a TIA (multiple choice question). Physicians were also asked to select the aetiology of the TIA with the highest rate of recurrence (multiple choice question). In Vignette 3, participants were asked to estimate the amount by which carotid endarterectomy would reduce the risk of stroke in patients who had symptomatic carotid artery stenosis. We again asked how confident physicians were of their risk assessment. In addition to the vignettes, we also asked physicians how often they treated patients with TIA, if they investigated TIAs rigorously, if they immediately refer patients to an emergency room, and their reasons for or against immediate referral. Statistical analysis We compared baseline characteristics (age, sex) using Chi-square or Fisher's exact test for categorical data, and differences in estimation of stroke risk using t-test for continuous data. We analysed GP data separately, and stratified by confidence in risk estimates, experience with TIA, and age of the physician. We dichotomized co-variables and omitted underestimates of risk so we could understand the possible confounding mechanism in overestimated risk assessments. 
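The comparisons described above (Chi-square or Fisher's exact test for categorical variables, t-tests for continuous risk estimates) were run in STATA 13.1; an equivalent sketch in Python with SciPy is shown below. The 2 × 2 counts and the two small samples of risk estimates are invented placeholders, not the survey data.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table of baseline characteristics (e.g. GP vs HP by sex);
# counts are placeholders only.
table = np.array([[300, 180],   # GPs: male, female
                  [ 80,  54]])  # HPs: male, female

chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)  # exact test for sparse 2x2 tables

# Hypothetical continuous stroke-risk estimates (%) from two physician groups.
gp_estimates = np.array([20, 35, 10, 25, 40, 15])
hp_estimates = np.array([30, 45, 25, 35, 50, 20])
t_stat, p_t = stats.ttest_ind(gp_estimates, hp_estimates)

print(f"chi2 p={p_chi2:.3f}, Fisher p={p_fisher:.3f}, t-test p={p_t:.3f}")
```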
To compare the third risk estimate (carotid endarterectomy) with the other two, we collapsed both overestimated risk reductions (40% and 50%) into one category. We calculated pvalue using Chi-square test and Spearman's rho for risk estimation as the independent variable. Each of the other co-variables (experience, age, self-confidence) was used as the dependent variable to detect correlation. Finally, we performed a logistic regression analysis to assess predictors of immediate emergency referral by GPs in cases of suspected TIA. We considered a pvalue of 0.05 to be statistically significant. All analysis was done with STATA release 13.1 (Stata Corp, College Station, TX, USA). Baseline characteristics We contacted 1545 physicians (1259 GPs and 286 HPs); 40% (614) responded, 79% (486) to the online questionnaire, and 21% (129) to the postal questionnaire. Fig 1 is a flowchart of response rate. HPs were more likely to respond to the online questionnaire than GPs (87% vs. 76%). Table 1 shows baseline characteristics of GPs and HPs. Table 2 details the stroke risk estimates made by GPs and HPs. A large majority of physicians (75%, 457) overestimated risk. They were much less likely to underestimate risk of stroke in the 24 hours subsequent to TIA (9%, 53). GPs were more likely to underestimate and HPs more likely to overestimate the risk (p = 0.01). In answer to the question about stroke risk within the subsequent three months, 40% (245) overestimated, and 26% (158) underestimated; there was no difference between GPs and HPs (p = 0.6). Most physicians (78%) overestimated reduction in stroke risk in the five years after carotid endarterectomy; only 6% underestimated risk reduction; there was no difference between GPs and HPs (p = 0.14). When asked for the aetiology with the highest recurrence rate, over half of physicians (61%) incorrectly labelled cardioembolic TIAs as the most dangerous cause of stroke; 25% correctly answered a large vessel stenosis, and 11% though it was small vessel diseases (11%). GPs and HPs estimated the risk of underlying aetiology differently (p = 0.002); HPs were more likely to correctly label a large vessel stenosis to be the major cause of stroke (36% vs 22% in GPs) Stratifying risk estimates In Table 3, we stratified risk estimates by physician experience in treating patients with TIA, how confident they were of their risk estimates, and the age of the physician (cut-off was 55 years). Those who were more confident of the accuracy of their risk estimate were more likely to overestimate stroke risk, and to overestimate the amount that endarterectomy reduced risk. Doctors who had more experience treating patients with TIA were not likely to be better at estimating risk; age also had no influence on risk estimates. Table 4 summarizes the next diagnostic steps doctors would take if they suspected patients had TIA. Almost all physicians (93%; 543; no difference between GPs and HPs) said they would rigorously investigate the cause of a TIA. Over half (55%; 330) would immediately refer patients to the ER. GPs and HPs chose different diagnostic procedures (p = 0.017). Though 38% of physicians (229) would "very often" immediately refer patients suspected of TIA to an ER, HPs were more likely to refer than GPs (p<0.001). 
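The logistic regression mentioned above models emergency referral as a binary outcome, and the reported odds ratio with its 95% CI is obtained by exponentiating the fitted coefficient. The sketch below illustrates that calculation on simulated data; the variable names and values are assumptions for illustration, not the study dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 600
# Hypothetical predictor: 1 if the physician would rigorously investigate TIA aetiology.
rigorous = rng.integers(0, 2, n)
# Hypothetical outcome generated so that rigorous investigators refer more often.
p_refer = 1 / (1 + np.exp(-(-0.5 + 0.7 * rigorous)))
refer = rng.binomial(1, p_refer)

X = sm.add_constant(rigorous.astype(float))
fit = sm.Logit(refer, X).fit(disp=0)

or_est = np.exp(fit.params[1])               # odds ratio for "rigorous"
ci_low, ci_high = np.exp(fit.conf_int()[1])  # 95% CI on the odds-ratio scale
print(f"OR={or_est:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```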
Physicians gave different reasons for not referring patients immediately to an ER: 144 (13%) mentioned the advanced age of a patient, 138 (13%) a patient's wish to avoid further tests, and 111 (10%) the need for long-term care and multimorbidity (see S1 Table for all reasons against referral). Physicians would immediately refer a patient with suspected TIA to an ER if they had cardiovascular risk factors (10%) or were younger (9%). Our logistic regression analysis showed a positive association between a physician's belief that the aetiology of a TIA should be rigorously investigated and the likelihood that they would immediately refer patients with suspected TIA to an ER (OR 2.0, 95% CI 1.5-2.7, p<0.001).

Discussion

We found that physicians overestimate the risk of stroke within 24 hours and within three months of TIA, and they overestimate how much carotid endarterectomy lowers the risk of stroke. More than 90% of physicians say they would rigorously investigate the cause of a TIA, but the results of the vignettes and general questions indicated that only half of them would immediately refer patients to an ER for further work-up. The main reasons physicians did not immediately refer patients included a patient's need for long-term care, multimorbidity, patient desire to avoid further tests, and that the patient was very old. Earlier studies found that a third of patients diagnosed with TIA in primary care clinics were not hospitalized and did not receive further tests or treatment [8]. Studies from Japan, France, Poland, Australia and the United States affirmed that most physicians are undereducated about the risk of stroke after TIA; many found it difficult to manage these patients [6,7,9,10,14]. It was not only primary care physicians who lacked knowledge; neurologists had the same problem [9]. In contrast to these studies, and counter to our hypothesis, we found that Swiss physicians tended to overestimate the risk of stroke after TIA, perhaps because there has been a lot of effort, in Switzerland, to raise awareness of that risk. This may be why less than 10% of physicians underestimated stroke risk within 24 hours after TIA, and less than 30% underestimated risk within 3 months. Most physicians (78%) also overestimated the benefits of carotid endarterectomy in reducing stroke. Physicians commonly overrate the benefits of surgical procedures [15]. Contrary to European and American guidelines (ESO, ASA/AHA) that recommend investigating TIA within the first 24 hours, and despite their overestimates of stroke risk and the benefits of therapy after TIA, only 55% of physicians would immediately refer their patients to an ER. Even fewer (45%) would schedule tests within two days of the event for patients suspected of TIA. The reasons Swiss physicians usually gave for not making an immediate referral to an ER were generally inadequate. If a patient is already in palliative care, there may be a good reason not to investigate further; if they have severe dementia, the question might be debatable. But the benefit of emergency treatment for TIA is clear, even for older and multimorbid patients: over a third of the study population of two large studies (Early Use of Existing PREventive Strategies for Stroke [EXPRESS] and the population-based Oxford Vascular Study [OXVASC]) was over 80 years old and had multiple comorbidities. In the EXPRESS study, risk was reduced independent of age, so advanced age and comorbidities are not good reasons for failing to schedule TIA patients for emergency evaluation.
Our survey is limited by the relatively low response rate; only 40% (614) of 1545 physicians answered our questionnaire. We did try to increase the response rate by having medical authorities send a letter of recommendation that asked physicians to participate in the survey. We also sent reminders by email, and a final reminder by postal mail. We were also limited by our use of open-ended questions, since, for example, we could not determine why physicians considered multimorbidity a reason not to refer. On the other hand, our study was strengthened by its population-based nature, its large sample size, and its inclusion of both general and hospital physicians. Swiss physicians overestimate stroke risk after TIA, but refer elderly and comorbid patients to the ER at a much lower rate than guidelines recommend. Educational campaigns might be more effective if they emphasize the importance of emergency management of all TIA patients, regardless of age or comorbidity. Supporting Information S1 Appendix. Questionnaire. (PDF) S1 Table. All reasons for physicians against immediate referral of patients suspected for TIA to an emergency ward. (DOCX)
2016-05-12T22:15:10.714Z
2015-08-18T00:00:00.000
{ "year": 2015, "sha1": "eba99cc15c2f80f238e1c55f038824ea0c27772a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0135885&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eba99cc15c2f80f238e1c55f038824ea0c27772a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
227247478
pes2o/s2orc
v3-fos-license
Probing thermal magnon current mediated by coherent magnon via nitrogen-vacancy centers in diamond Currently, thermally excited magnons are being intensively investigated owing to their potential in computing devices and thermoelectric conversion technologies. We report the detection of thermal magnon current propagating in a magnetic insulator yttrium iron garnet under a temperature gradient using a quantum sensor: electron spins associated with nitrogen-vacancy (NV) centers in diamond. Thermal magnon current was observed as modified Rabi oscillation frequencies of NV spins hosted in a beam-shaped bulk diamond that resonantly coupled with coherent magnon propagating over a long distance. Additionally, using a nanodiamond, alteration in NV spin relaxation rates depending on the applied temperature gradient were observed under a non-resonant NV excitation condition. The demonstration of probing thermal magnon current mediated by coherent magnon via NV spin states serves as a basis for creating a device platform hybridizing spin caloritronics and spin qubits. I. INTRODUCTION The utilization of magnons, i.e., the quanta of collective spin excitation, in magnetic media for transmitting and processing information has flourished in the recent decade and is known as magnon spintronics [1][2][3][4]. Moreover, the emerging field of spin caloritronics [5], which utilizes the interplay between spin and heat currents, resulted in an alternative strategy in creating more efficient computing devices [6,7] and versatile thermoelectric conversion technologies [8]. The progress in magnon spintronics and spin caloritronics field is benefited from the ubiquitous use of spin transport measurement based on the inverse spin Hall effect (ISHE) [9], in which a paramagnetic heavy metal is patterned on a ferromagnetic medium [2,6,8]. Quantum sensors based on the electron spins in diamond with nitrogen-vacancy (NV) centers have been regarded as eminent sensors for various condensed matter phenomena [10][11][12], including spin waves, as it offers high spatial resolution at nanoscale enabling to probe fluctuating magnetic fields with broad frequency band from static to GHz, and non-perturbative operation [10,13]. NV centers are well coupled to coherent magnetostatic spin waves (MSWs) owing to their energy matching [14][15][16][17][18][19][20]. Recently, magnon population has been measured and controlled via pumping the NV center by spin waves with a single NV spin sensitivity [21][22][23]. Furthermore, the same effect was observed nonlocally using the ISHE [24]. Additionally, NV spin excitations and modulations via the spin-transfer-torque oscillation of spin waves by electrical methods through the spin Hall effect have been demonstrated recently [25][26][27]. In contrast, thermally excited magnons with significantly higher energy [28] (defined by ℏ = ! ) than NV spins cannot resonantly excite NV spins, whereas the high-energy magnons can affect the NV relaxation rate in a non-resonant way [29,30]. These high-energy magnon current is known to interact with lower-energy MSWs through the thermal magnon spin-transfer torque [31][32][33][34][35][36]. Thus, probing thermal magnon current via NV spin can be realized using MSWs as a mediator. Herein, we report the detection of thermally excited magnon current mediated by MSW by exploiting the thermal magnon spin-transfer torque (Fig. 1), bridging the energy gap between the thermally excited magnons and NV spin. 
Using an ensemble of NV spins in a bulk diamond, we observed the modification of the magnetostatic surface spin waves' (MSSWs') magnetization dynamics under resonant NV spins excitations influenced by the thermal magnon current in a magnetic insulator yttrium iron garnet (YIG). Besides, under a non-resonant NV spins excitation condition in a nanodiamond, we also observed NV relaxation-rate changes related to the thermal magnon current. II. METHODS We used a liquid-phase-epitaxy grown YIG sample in the form of a trilayer of single-crystalline YIG/gadolinium gallium garnet (GGG)/YIG of thicknesses 100, 550, and 100 µm, respectively, measuring 6 mm × 3 mm (Fig. 2(a)). To improve the lattice matching between YIG and GGG, a small amount of yttrium in the YIG was substituted with bismuth. Throughout the experiment, external magnetic fields ± "#$ were applied along the y-axis with a tilted angle to the surface plane of the YIG/GGG/YIG ( Fig. 2(a)). Two gold-wire antennas A and B (50 µm in diameter) were overlaid on the surface near both edges of the upper YIG, separated approximately 2 mm away to excite MSSWs by electrical microwave field, and the MSSWs propagates along the ∥ × . direction (. is a vector normal to the YIG's surface) [37]. In this setup, the MSSWs are predominantly excited on the upper YIG layer surface by one of the antennas and propagate to the other end of the sample depending on the polarity of the applied external magnetic field, where + "#$ (− "#$ ) is along the + (− ) axis A temperature gradient ∇ was created along the YIG's longitudinal direction by increasing or lowering the temperature at either site A ( ( ) or site B ( ) ). Such temperature control keeps the temperature at the middle of the YIG's longitudinal dimension constant, as well as the diamond beam's temperature, under the application of temperature differences Δ up to 10 K ( Fig. 2(a)). This was confirmed using the temperature sensing capability of the NV spins [40][41][42] and infrared thermography (see Supplemental Note 4 and 5 [38]). Δ is defined as the difference between ( and ) (Δ = ( − ) ). A. Spin wave and NV spin resonance mapping The MSSWs were excited from antenna A with microwave (MW) power +, = 1 mW in an increasing + "#$ , and the YIG's global coherent spin-waves resonance spectra were mapped out by performing microwave absorption ( ---parameter) measurement using a vector network analyzer (Rohde & Schwartz ZVB8) at Δ = 0. Figure 2(c) shows a map of the spin-wave spectra, exhibiting lines of resonance of the MSSWs spanning to the higher frequencies from the uniform Kittel mode (ferromagnetic resonance (FMR)). The solid red and yellow lines indicate the NV spins' upper ( * = 0 ↔ +1) and lower ( * = 0 ↔ −1) bound resonance transitions defined by the Zeeman energy, respectively [18]. When an energy matching condition between the MSSW and NV spins is fulfilled ( +., = /0 ), the NV spins can be coherently excited by the MSSW [17,18]. From the result in Fig. 2(c), we can expect excitations of the NV spins by the MSSWs within the red and yellow lines. Next, we mapped out the MSSWs-driven NV spins resonance frequencies by performing ODMR spectroscopy with an increasing + "#$ at Δ = 0 using the diamond beam. Figure 2(d) shows a color map of the MSSWs-driven ODMR in the diamond beam. As expected, only the NV spins' resonance transitions that matched with the MSSW's resonance frequencies underwent a PL intensity quenching as a consequence of the transition from * = 0 to * = ±1 [17,18]. In Fig. 
2(d), only the * = 0 ↔ −1 transitions that overlapped with the MSSW's resonance frequencies appeared. Furthermore, by zooming in around the NV3 spectral line in [19,43]. The spectra at a matching condition with + "#$ = 19 mT, +, = 2.58 GHz between a MSSW with a specific wavenumber and the NV spins are shown in Figs. 2(g) and (h). B. Detection of thermal magnon current via coherent driving of NV spins In a magnet under a temperature gradient, thermal magnon current is generated [4,8] and exerts a thermal spin-transfer torque $1 to a precessing magnetization of coherently excited MSSWs (Figs. 1 and 2(b)). The phenomenon has been well known to be detected through microwave response (Yu et al. [36]) and the ISHE [33][34][35]. Here an ensemble of NV spins in a diamond beam is utilized to detect the thermal magnon current mediated by MSSWs (Figs. 2(a) and (b)). Note that the applied magnetic field is perpendicular to the MSSWs propagation and the temperature gradient direction (Fig. 2(a) Fig. 3(b). This indicates a change in the amplitude of microwave AC field from the MSSWs [44], as thermal magnon current was generated under the application of temperature gradient in the upper layer of the YIG. Next, we drove the NV spins into the Rabi oscillations between the * = 0 and * = −1 via the MSSW-driven pulse sequence shown in Fig. 3(c) with the same matching condition of 2.58 GHz between the qubit states of * = 0 and * = −1 (Fig. 3(d)). The frequency of the Rabi frequency was enhanced for Δ = 0 to −10 K and was suppressed for Δ = 0 to +10 K (Figs. 3(d) and (e)). This is explained by the change of polarity of the thermal magnon spin-transfer torque [36] (Fig. 1). The amplitude of the Rabi field 5 6 , defined as an effective oscillating electromagnetic field acting at the NV position above the YIG surface ( Fig. 2(b)), can be estimated from the Rabi frequency through the relation 5 6 = Ω 5 6 / 7 [14,45,46], with 7 = 2 ⋅ 28 GHz/T being the gyromagnetic ratio of electrons. The Rabi field amplitude 5 6 evolved from 19 ± 0.5 µT at Δ = 10 K to 26 ± 0.4 µT at Δ = −10 K, based on its plot as a function of Δ (Fig. 3(f)), indicating a change of approximately 18 ± 1 % from 22± 0.6 µT at Δ = 0. The unidirectional propagation of the MSSWs is inverted according to ∥ "#$ × . [47,48] by applying different polarity of "#$ at the upper YIG surface and in this condition the thermal spin-transfer torque is applied with different polarity [36]. Hence, we can expect to observe the same but inverted sign effect when we switch the external magnetic field to the − axis (assigned as − "#$ ) and launch the MSSWs from the antenna B [36]. We tuned the NV resonance frequency to a matching condition of 2.60 GHz (− "#$ = 19 mT, +, = 1 mW). The observed effect can be interpreted as a thermal magnon spin-transfer torque 89 via the thermal magnon current generated by a temperature gradient [4,49,50], which interacts with the MSSW and relaxes by transferring its spin angular momentum (Fig. 1). The transfer of spin angular momentum contributes to the development of the thermal magnon torque $1 , which alters the MSSW's magnetization dynamics [31,32,36] and perceived by the NV spins as an altering Rabi field amplitude ( Fig. 
2(b) and see Supplemental Note 9 [38]): where , * , +, , F , G , and are respectively the proportionality constant, saturation magnetization of the YIG, microwave field driving the MSSWs, intrinsic damping parameter of the YIG, resonance frequency of the MSSW, and the distance separating the NV spin and the magnetization precession. The contribution from the thermal magnons can be quantified by the thermal magnon damping parameter, which is proportional to the temperature gradient $1 = ∇ (see Supplemental Note 8 [38]). Using a constant in Eq. (1) as a fitting parameter, $1 was estimated to be (10 ± 0.9) × 10 6H for + "#$ and (4.3 ± 1) × 10 6H for − "#$ using an effective temperature difference of Δ "II = 6.6 K over 2 mm distance at the YIG's top surface under an applied Δ =10 K. The thermal magnon damping parameter values agree well with those reported previously [33][34][35][36], confirming the existence and contribution of thermal magnon current in the evolution of MSSW magnetization dynamics [26,31,32,36]. Furthermore, we confirmed our observation of the thermally excited magnon current electrically by analyzing the spin-wave resonance linewidth from the absorption microwave signal ( --) (see Supplemental Note 7 for the experimental details and data [38]). C. Local detection and non-resonant NV spin excitation We extended the capability to detect the thermally excited magnon current locally and non-resonantly to NV spin transition frequency via a small number of NV spins in a nanodiamond ( Fig. 4(a)). The nanodiamonds with 40 nm of averaged diameter were transferred to the middle of the YIG's longitudinal direction by dropping a small amount of nanodiamond solution with a micropipette. With the same setup and technique as in the experiment using the diamond beam, we mapped out the ODMR spectra of the NV spins in a nanodiamond to obtain information regarding the coupling between the long-distance propagating magnons and the NV spins. Figure 4(b) shows the magnon-driven ODMR spectral map exhibiting PL quenching at the resonance transition ( * = 0 ↔ −1) of the NV spins together with PL image of the nanodiamond used in the measurement (inset) ( +, = 1 W). Additionally, a strong non-resonant PL quenching was observed away from the NV spin transitions [15,39] at frequencies ranging from 2.5 to 2.7 GHz at the + "#$ between 11.5 and 13.5 mT (Fig. 4(b)), where the MSSWs with higher k wavenumbers are within a range as observed in Fig. 2(e). Next, we performed longitudinal spin relaxation measurements, in which the NV spins were polarized to * = 0 by the first laser pulse, followed by a dark time before another laser pulse was applied to read the remaining population (Fig. 4(c)). By varying , the time-trace relaxation of the * = 0 state to its equilibrium state was observed. Under the application of ∇ to the YIG, MW pulse with the frequency of 2.66 GHz and + "#$ and − "#$ = 13 mT (marked by dashed-black circle in Fig. 4(b)) was applied with +, = 1 W during time . increased for Δ = 0 to −10 K and decreased for Δ = 0 to +10 K ( Fig. 4(d)). Opposite polarity of slope-change of was observed when the polarity of "#$ is inverted (Fig. 4(e)), reasonable with the MSSW's unidirectional propagation character. Here, we assume that the observed effect is originated from the modulation of magnon density at NV-resonant frequency via the scattering between the non-resonant MW-excited magnons and the thermal magnons [21,29,31,32]. 
In this case, is related to the oscillating AC magnetic field amplitude generated by the NV-resonant magnons, as described by ~: ! J | K | J , with | K | J is the AC magnetic field component perpendicular to the NV's quantum axis [14]. By assuming that the AC magnetic field from the NV-resonant magnons evolved proportionally with the increase or decrease of magnetization precession of the MW-excited magnons [21] and based on the fact that the magnetization precession evolved under a variation of ∇ (Equation (1) and Figs. 3(f) and (i)), we can approximate an equation relating the longitudinal relaxation rate and temperature gradient as [14,19,20,26] Γ ∝ The data in Figs. 4(d) and (e) were fitted with equation (2), and 89 was estimated as (4.3 ± 1) × 10 6H for + "#$ and (2.5 ± 0.9) × 10 6H for − 7E8 , that show a good agreement with those estimated from the Rabi oscillation experiments. We note that the temperature measurements at the middle of YIG using bulk diamond beam and infrared thermography (see Supplemental Note 4 and Supplemental Note 5 [38]) confirmed a base temperature change of less than 1.5 K, which will give 0.7 % of the change in [21]. This change of is small compared with the observed change of about 37.5 % (for + "#$ ) and 23 % (for − "#$ ) under the applied Δ from +10 K to −10 K to the YIG, showing that the observed effect is not due to the base temperature change in the nanodiamond. IV. DISCUSSION We demonstrated the detection of thermally excited magnon currents mediated by MSSWs via NV spins, where the thermal magnon spin-transfer torque emanated from the thermal magnon current altered the MSSWs magnetization precession when the YIG sample was subjected to a temperature gradient. The modulation of the magnetization dynamics of the MSSWs was perceived by the NV spins as the alteration of the Rabi oscillation frequency with the resonant NV spin excitation using a diamond beam. Besides, the longitudinal spin-relaxation rate change was observed with a non-resonant NV spins excitation using a nanodiamond. The possible explanation for the observed effect at non-resonant excitation may come from the four-magnon scattering process, where a magnon at the microwave frequency scatters with a thermal magnon resulting in two additional magnons, one of which possesses a frequency resonates to the NV frequency [29,30]. The increase or decrease in the relaxation rate as a function of a temperature gradient indicates the modulation in the population of the thermal magnon [Figs. 4(d) and (e)]. However, to nail down a definite mechanism, it will require further experiments through changing excitation parameters, and also using a nanodiamond or diamond nanobeam with a well-defined NV axis [21,39]. This study provides a detection tool for thermal magnon currents via NV centers, which can be located locally and in a broad range of distances to spin waves. This feature cannot be obtained if only conventional methods, such as ISHE, are used to investigate magnon dynamics, as the conventional method requires a relatively large electrode and specific configurations with proximal distance to the spin waves. Owing to the NV spin's single spin detection sensitivity enabled by its atomic-scale size [51], nanoscale probing and imaging of thermal magnon dynamics can be realized in the future. For example, a scanning probe-based NV magnetometry [13] will be useful for studying the nonuniformity of the thermal magnon current throughout the material at the nanoscale. 
Such a measurement will be impractical through patterning a large area of a paramagnetic metal for ISHE measurements. A study of the thermal magnon dynamics with high spatial resolution can provide insights into practical applications in spin caloritronics and magnon spintronics [14,19,25].
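As a rough consistency check of the field amplitudes quoted in the Rabi experiments above, the relation used in the text between the Rabi (angular) frequency and the effective AC field amplitude, Ω_Rabi = γ_e B_Rabi with γ_e = 2π × 28 GHz/T, can be evaluated directly; the reported 19-26 µT range corresponds to Rabi frequencies of roughly 0.5-0.7 MHz. The short script below only restates that conversion; it is not part of the authors' analysis code, and any polarization- or geometry-dependent prefactors are neglected.

```python
# Convert an effective AC field amplitude at the NV spin into a Rabi frequency,
# using gamma_e / (2*pi) = 28 GHz/T as quoted in the text.
GAMMA_E_OVER_2PI = 28e9  # Hz per tesla

def rabi_freq_hz(b_field_tesla: float) -> float:
    """Rabi frequency Omega/(2*pi), in Hz, for a resonant drive field of given amplitude."""
    return GAMMA_E_OVER_2PI * b_field_tesla

for b_ut in (19, 22, 26):  # field amplitudes reported under different temperature gradients
    f = rabi_freq_hz(b_ut * 1e-6)
    print(f"{b_ut} uT -> {f / 1e6:.2f} MHz")
```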
2020-07-28T01:00:59.972Z
2020-07-27T00:00:00.000
{ "year": 2020, "sha1": "92f00611b814d377638b6b9c331708f3b30ea51a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "92f00611b814d377638b6b9c331708f3b30ea51a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258732451
pes2o/s2orc
v3-fos-license
Temperature-Dependent Effects of Eicosapentaenoic Acid (EPA) on Browning of Subcutaneous Adipose Tissue in UCP1 Knockout Male Mice Uncoupling protein 1 (UCP1) plays a central role in thermogenic tissues by uncoupling cellular respiration to dissipate energy. Beige adipocytes, an inducible form of thermogenic cells in subcutaneous adipose tissue (SAT), have become a major focus in obesity research. We have previously shown that eicosapentaenoic acid (EPA) ameliorated high-fat diet (HFD)-induced obesity by activating brown fat in C57BL/6J (B6) mice at thermoneutrality (30 °C), independently of UCP1. Here, we investigated whether ambient temperature (22 °C) impacts EPA effects on SAT browning in wild-type (WT) and UCP1 knockout (KO) male mice and dissected underlying mechanisms using a cell model. We observed resistance to diet-induced obesity in UCP1 KO mice fed HFD at ambient temperature, with significantly higher expression of UCP1-independent thermogenic markers, compared to WT mice. These markers included the fibroblast growth factor 21 (FGF21) and sarco/endoplasmic reticulum Ca2+-ATPase 2b (SERCA2b), suggesting the indispensable role of temperature in beige fat reprogramming. Surprisingly, although EPA induced thermogenic effects in SAT-derived adipocytes harvested from both KO and WT mice, EPA only increased thermogenic gene and protein expression in the SAT of UCP1 KO mice housed at ambient temperature. Collectively, our findings indicate that the thermogenic effects of EPA, which are independent of UCP1, occur in a temperature-dependent manner. Introduction Obesity is a chronic complex disease, which occurs when energy consumption exceeds expenditure chronically and which is associated with several comorbidities like type II diabetes (T2D) [1], cardiovascular diseases [2], and certain cancers [3]. White adipose tissue (WAT) stores excess energy in the form of triglycerides and secretes several hormones but upon the onset of obesity develops local chronic inflammation. In contrast, brown adipose tissue (BAT) drives adaptive non-shivering thermogenesis in response to cold temperatures and impacts body weight [4]. In rodent models, activation of BAT thermogenesis is involved in reducing diet-induced weight gain [5], and the ablation of BAT is associated with the development of obesity [6]. However, although the presence of BAT in human adults is wellaccepted [7], the contribution of BAT to the regulation of body weight and metabolic health is still a matter of debate among researchers [8]. The classical non-shivering thermogenesis occurs not only in BAT but also in beige adipose tissue [9]. Under certain pharmacological and dietary conditions or external stimuli, subcutaneous adipose tissue (SAT) can undergo browning to develop thermogenic brown-like properties, like uncoupling protein 1 (UCP1) expression [10]. Intriguingly, genetic deficiency of BAT in mice increases sympathetic activity to SAT to promote the compensatory recruitment of beige adipocytes [11]. When properly activated, UCP1 catalyzes the leak of protons generated by the electron transport chain to heat production [12]. Independently of UCP1, other thermogenic mechanisms such as the creatine-driven substrate futile cycling [13], the ATP-dependent calcium cycling [14], and endogenous uncoupler N-acyl amino acids [15] have been shown to occur in brown and beige adipocytes. 
In our current study, to investigate the physiological role of UCP1 and the therapeutic potential of SAT browning in protecting against weight gain, UCP1-knockout (KO) B6 male mice were exposed to either thermoneutral (28-30 • C) or ambient (22 • C) environments. Accumulating evidence has shown that housing UCP1 KO mice at thermoneutrality markedly reduces BAT and SAT thermogenesis and predisposes them to diet-induced obesity (DIO) [16]. Paradoxically, ambient temperature serves as a cold challenge for UCP1 KO mice resulting in increased energy expenditure and resistance to DIO when compared to wild-type (WT) mice [17]. Moreover, the UCP1-independent thermogenesis may have stronger effects in preventing DIO than the classical UCP1-predominant thermogenesis when housing mice at ambient temperature. However, neither the molecular mechanisms nor the regulators have been clearly characterized yet, and identifying them could potentially reveal novel therapeutic targets in treating individuals with obesity. Dietary long-chain omega-3 polyunsaturated fatty acids (PUFA) such as eicosapentaenoic acid (EPA; 20:5; n-3), the main component in fish oil, is an anti-inflammatory bioactive compound with potential to induce white fat cell browning [18]. We have previously reported that EPA significantly upregulated the mRNA expression levels of key markers of thermogenesis, such as peroxisome proliferator-activated receptor gamma coactivator-1 alpha (PGC1α) and PR domain containing 16 (PRDM16) in HIB 1B clonal brown adipocytes and in the BAT of mice housed at ambient temperature [5]. In addition, utilizing DIO UCP1 KO male mice housed at thermoneutrality, we found that EPA reduced body weight and adiposity and increased BAT PGC1α protein and gene expression, independently of UCP1 [19,20]. Based on the above insights, in the current study, we hypothesized that supplementation with EPA promotes beige adipocyte formation in SAT independently of UCP1 at both ambient and thermoneutral conditions. To gain mechanistic insights of the EPA effects on SAT thermogenesis and how UCP1 controls obesity resistance temperature-dependently, in the current study, we used WT and UCP1 KO male mice housed at either ambient or thermoneutral environments and supplemented with high-fat diets (HFD) with or without EPA-enriched fish oil. To dissect the role of EPA in promoting beige adipocytes thermogenesis, we further harvested primary SAT adipocytes from WT and UCP1 KO mice treated with or without EPA during differentiation. We observed the paradoxical obesity resistance of HFD-fed UCP1 KO mice at ambient temperature. Although no significant impact of EPA on body weight and adiposity was observed, insulin resistance and inflammation were attenuated by EPA in both WT and UCP1 KO mice at both temperatures. Importantly, the expression level of genes involved in thermogenesis and lipid metabolism was significantly upregulated at ambient temperature, independently of UCP1. We also found that although EPA enhanced thermogenic gene expression and the respiration capacity of differentiated SAT-derived adipocytes, EPA only upregulated gene and protein expression levels of thermogenic genes at ambient temperature, UCP1 independently. Based on the outcomes of the current study, we demonstrated evidence for the potential use of EPA in combating obesity and improving overall metabolic health via alternative UCP1-independent thermogenesis. 
UCP1 KO Mice Are Protected from DIO at Ambient Temperature

As the most sensitive hallmark of energy balance changes, the body weight of mice housed at ambient temperature was lower than that of mice at thermoneutrality. Further, the resistance to DIO in UCP1 KO mice was a reproducible phenomenon [21] at ambient temperature. Conversely, after 14 weeks housed at thermoneutrality, UCP1 KO mice had significantly higher weight gain than WT mice (p < 0.05) (Figure 1A), suggesting the accumulation of excess energy in fat. Regarding dietary intervention, limited benefits of EPA in reducing obesity in both WT and UCP1 KO mice occurred at thermoneutrality [19]. Similarly, in response to EPA, both WT and UCP1 KO mice gained 4-10% less body weight (Figure 1A) and 6-20% less body fat (Figure 1B) compared to HFD-fed mice at both temperatures, but no significant difference was found. Although food intake did not reveal any differences, mice housed at ambient conditions consumed about 25% more food than those at thermoneutrality (p < 0.05), suggesting compensation for heat loss (Figure 1C). Finally, a significant interaction was observed between temperature and genotype for body weight gain, body fat percentage, and food intake (p < 0.01, Table 1).

Figure 1. Body weight and fat percentage, and food intake in WT and UCP1 KO mice fed an HFD or EPA-supplemented diet at ambient and thermoneutral (Thermo) conditions. (A) Percentage of body weight gain, (B) percentage of body fat and (C) food intake. Data are expressed as mean ± standard error of the mean (SEM); groups represented with different letters indicate a significant difference reported by three-way analysis of variance (ANOVA), p < 0.05, n = 10-12.

Effects of EPA and UCP1 Deficiency on Insulin Sensitivity

We performed glucose and insulin tolerance tests (GTT and ITT, respectively), measured fasting serum insulin levels, and calculated the homeostatic model assessment for insulin resistance (HOMA-IR) in WT and KO mice housed at both temperatures to examine the effects of UCP1 deficiency and EPA on glucose metabolism.
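HOMA-IR, mentioned above, is conventionally computed from fasting glucose and fasting insulin. The paper does not restate its exact formula, so the sketch below uses the common fasting glucose (mg/dL) × fasting insulin (µU/mL) / 405 form as an assumption, with placeholder input values rather than the reported measurements.

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostatic model assessment of insulin resistance (conventional formula, assumed here)."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# Illustrative values only (not measurements from this study).
print(round(homa_ir(180.0, 4.5), 2))  # e.g. a more insulin-resistant animal
print(round(homa_ir(150.0, 2.8), 2))  # e.g. a more insulin-sensitive animal
```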
To compare groups that differ in fat mass, GTT and ITT were normalized to the lean mass of the mice. Glucose intolerance was not different between genotypes at ambient temperature; however, in association with the increased body weight, it was higher in the UCP1 KO mice compared to the WT mice at thermoneutrality. Importantly, EPA-fed mice exhibited improved glucose clearance compared to the HFD-fed mice in both genotypes and temperatures (Figure 2A). Additionally, there were no differences in ITT between the genotypes at both temperatures (Figure 2B), but EPA significantly increased the insulin sensitivity of mice in both genotypes and temperatures, as indicated by lower basal serum insulin levels and HOMA-IR compared to the HFD-fed mice (Figure 2C). Multifactorial ANOVA revealed a significant main effect for diet (p < 0.05) on glucose tolerance and insulin sensitivity, confirming the beneficial effect of EPA on glucose homeostasis (Table 1).

Effects of EPA and UCP1 Deficiency on Metabolic Hormones

To understand the role of EPA supplementation in adipokine production, serum levels of cytokines involved in inflammation, including adiponectin, leptin, and resistin, were measured. In the HFD groups, there were no differences in serum adiponectin levels between WT and UCP1 KO mice at ambient temperature, whereas, at thermoneutrality, KO mice had a two-fold higher adiponectin level than WT mice (p < 0.001). Compared to HFD, adiponectin levels were increased in response to EPA in WT and UCP1 KO mice by 34% and 30% at ambient temperature and by 68% (p = 0.0049) and 18% at thermoneutrality (Figure 3A). HFD-fed WT mice had significantly higher serum leptin levels compared to all other groups. Corresponding with body fat, the absence of UCP1 caused a significant decrease in leptin levels at ambient temperature but not at thermoneutrality. Additionally, compared to HFD, EPA reduced serum leptin levels in WT and KO mice by 71% (p < 0.0001) and 24% at ambient temperature and by 64% and 31% at thermoneutrality, respectively (Figure 3B). Finally, serum resistin levels were decreased in mice housed at thermoneutrality (p < 0.001). Additionally, EPA decreased the level of resistin in WT and KO mice by 28% and 18% at ambient temperature and by 73% and 50% at thermoneutrality (Figure 3C). A significant effect of temperature (p < 0.0001) and genotype (p < 0.05) and the interaction between temperature and genotype (p < 0.05) were observed for serum levels of adiponectin, leptin, and resistin (Table 1).
Effects of EPA and UCP1 Deficiency on SAT Browning Temperature-Dependently

We previously demonstrated that EPA increased UCP1 and thermogenic genes in the brown fat of mice maintained at ambient [5] or thermoneutral environments [19]. In the current study, we evaluated the effects of EPA supplementation and UCP1 deficiency on SAT browning, thermogenesis, and lipid metabolism. As expected, compared to an HFD, EPA significantly enhanced Ucp1 mRNA expression in WT mice by 21.1-fold at ambient temperature (p < 0.001) and 7.9-fold at thermoneutrality (p < 0.001). Ucp1 mRNA expression levels were undetectable in the UCP1 KO mice (Figure 4A). In addition, based on the regulatory network of thermogenic transcription factors, compared to WT, SAT browning in UCP1 KO mice housed at ambient temperature appeared to be predominantly regulated by Pgc1α and cell death-inducing DFFA like effector A (Cidea), as reflected by approximately 15-fold (p < 0.01) and 77-fold (p < 0.001) increases in gene expression. No similar effects were found at thermoneutrality. We also observed higher expression of Pgc1α and Cidea in the SAT of EPA-fed WT and KO mice at both temperatures, but no significant differences were observed (Figure 4B,C), which explains the slight beneficial effect of EPA on beige adipocyte programming.

To investigate the factors other than UCP1 that may be associated with temperature-dependent DIO resistance, we measured the gene expression of commonly known batokines, like fibroblast growth factor 21 (Fgf21) and bone morphogenetic protein 8b (Bmp8b), which were significantly upregulated by the absence of UCP1 by 16- and 15-fold only at ambient temperature but not at thermoneutrality (Figure 4D,E). Importantly, in UCP1 KO mice, EPA significantly increased Fgf21 gene expression (p = 0.0012) compared to the HFD at ambient temperature. A significant effect of temperature (p < 0.0001) and genotype (p < 0.005) and an interaction between temperature and genotype were observed for the gene expression levels of Ucp1, Pgc1α, Cidea, Fgf21, and Bmp8b (Table 1).
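The fold changes in mRNA expression quoted above are the kind of values typically obtained from qPCR data with the 2^-ΔΔCt method. The paper does not spell out its quantification pipeline in this excerpt, so the snippet below is only a generic illustration of that calculation, with invented Ct values.

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method (assumed quantification scheme).

    ct_target / ct_ref: Ct of the gene of interest and a housekeeping gene in the
    treated sample; *_ctrl: the same quantities in the control sample.
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Invented Ct values for illustration only; a ~4.4-cycle shift corresponds to
# roughly 21-fold up-regulation.
print(round(fold_change(22.0, 18.0, 26.4, 18.0), 1))
```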
We further evaluated protein expression levels of UCP1, CIDEA, and FGF21 in the different groups (Figure 5A). As expected, consistent with gene expression, UCP1 was undetectable in the KO groups. In WT mice, compared to the HFD group, the EPA group showed a 3-fold increase in UCP1 content at ambient temperature and comparable UCP1 content at thermoneutrality (Figure 5B). As a key adipogenic transcription factor, the protein expression level of CIDEA was remarkably enhanced by EPA, compared to the HFD, in KO mice at ambient temperature (Figure 5C). We also measured the protein content of FGF21 and found that WT mice fed with EPA had amounts of FGF21 comparable to those found in HFD-fed mice at both temperatures. In KO mice, however, EPA upregulated FGF21 content by 4.7- and 2.1-fold at ambient and thermoneutral temperatures (Figure 5D), which further supported a UCP1-independent molecular pathway of beige cell programming in SAT.

Effects of EPA and UCP1 Deficiency on SAT Browning, Lipid Metabolism, and Alternative Thermogenesis Temperature-Dependently

Next, to validate the molecular signatures of SAT browning, the gene expression of well-identified batokines and brown fat markers was quantified. At ambient temperature, the SAT of UCP1 KO mice showed significant 317-, 12-, and 40-fold increases in the key thermogenic genes iodothyronine deiodinase 2 (Dio2), pyruvate dehydrogenase kinase 4 (Pdk4), and cytochrome c oxidase 7a1 (Cox7a1) (p < 0.0001, p = 0.0002, p < 0.0001, respectively), compared to WT mice (Figure 6A).
No changes in the gene expression of the above thermogenic markers between WT and UCP1 KO mice were observed at thermoneutrality. In response to EPA, UCP1 KO mice housed at ambient temperature expressed higher levels of Dio2, Pdk4, and Cox7a1 than HFD-fed KO mice, although no significant differences were observed. Similar to browning-related genes, the SAT in KO mice maintained at ambient temperature expressed higher levels of genes sensitive to cold and genes involved in lipid metabolism ( Figure 6B). As an important regulator for early onset of lipid recruitment [22], mRNA levels of the fatty acyl chain elongase (Elovl3) were elevated 1000-fold in KO mice (p < 0.0001) in comparison to WT mice at ambient temperature. Additionally, the gene expression of glycerol 3-phosphate dehydrogenase 1 (Gpd1) and carnitine palmitoyl transferase 1b (Cpt1b), involved in lipogenesis and fatty acid oxidation, were also 7-and 63-fold higher in the SAT from ambient-exposed KO mice (p < 0.001; p < 0.0001) than in WT mice, reflecting the induction of browning in SAT in UCP1 independent manner. However, neither diet nor the absence of UCP1 altered the lipid metabolism pattern of mice at thermoneutrality. Importantly, compared to the HFD, EPA upregulated the mRNA expression levels of the above genes in both WT and KO mice only at ambient temperature but not at thermoneutrality, revealing the temperaturedependent enrichment of lipid and oxidative metabolism with EPA supplementation. Finally, UCP1 KO mice featured obesity resistance at ambient temperature, suggesting the existence of alternative pathways of thermogenesis. Therefore, genes of recently identified UCP1-independent thermogenic pathways were quantified. At ambient temperature, the mRNA level of sarco/endoplasmic reticulum Ca 2+ -ATPase 2b (Serca2b) was significantly increased by EPA (p = 0.012) independently of UCP1, indicating the noncanonical thermogenic potential of EPA via enhancing ATP-dependent Ca 2+ cycling pathway [14]. We also found that the expression level of peptidase M20 domain containing 1 (Pm20d1), which generates n-acyl amino acid as endogenous uncouplers [15], was significantly higher in the KO mice than WT mice fed both an HFD (p = 0.032) and an EPA (p = 0.009) diet only at ambient environment. The last gene involved in the alternative thermogenesis we quantified was transient receptor potential vanilloid 2 (Trpv2) [23]; however, no difference in expression was observed among all groups ( Figure 6C). Taken together, our results demonstrate that the expression levels of genes relevant to beige adipocytes programming in the SAT were dramatically enhanced in response to the absence of UCP1 only at ambient temperature. Further, EPA has potential beneficial effects on SAT browning in response to temperature and UCP1 independently. A significant effect of temperature (p < 0.0001) and genotype (p < 0.0001) and the interaction between temperature and genotype (p < 0.0001) were observed for the gene expression levels of browning markers and lipid metabolism (Table 1). At last, we performed an analysis of the fatty acid profile in the SAT from all groups (Table S1). As expected, EPA was only found in the SAT of EPA-fed mice, and no significant differences were observed between genotypes and temperatures. 
Effects of EPA and UCP1 Deficiency on Browning and Respiration Capacity in Cultured Primary Adipocytes

To further validate the role of UCP1 in thermogenesis and to determine how EPA regulates beige adipocyte programming in the absence of UCP1, we cultured differentiated SAT-derived primary adipocytes from WT and UCP1 KO male mice (Figure 7A). In response to EPA treatment, compared to the control, the mRNA level of Ucp1 was significantly increased in the WT group (p = 0.0003), and, as expected, Ucp1 levels were undetectable in the KO group, consistent with the animal study. Additionally, classic batokines, such as Fgf21, and well-established browning markers, such as Cox7a1, were upregulated by EPA in both genotypes, which suggested that EPA can induce beige cell formation during adipocyte differentiation UCP1-independently. Other genes involved in browning regulation (Pgc1α, Prdm16, Pparγ, and Bmp8b) and lipid metabolism (Elovl3 and Cpt1b) were also increased by EPA in both genotypes, but no significant differences were observed (Figure S1, Table S2). Then, we performed mitochondrial function analysis by measuring the oxygen consumption rate (OCR) to investigate whether the above EPA-induced enhancement of thermogenic markers is responsible for a change in the mitochondrial oxidative phosphorylation rate in differentiated adipocytes (Figure 7B).
As expected, the absence of UCP1 decreased mitochondrial respiration, highlighting the important role of UCP1 in mitochondrial function. Basal respiration in WT and KO adipocytes was not elevated by EPA, indicating a minor effect of EPA on ATP-linked respiration. However, the maximal respiration OCRs in WT and KO adipocytes increased by 48.6% and 66.7% with EPA, resulting in 138% (WT, p = 0.0034) and 41.6% (KO) increases in spare respiratory capacity. Additionally, the two-way ANOVA confirmed a significant main effect for treatment (p < 0.0001) and for the interaction of treatment and genotype (p < 0.01) on maximal and spare respiration (Table 2). Taken together, EPA exerts appreciable effects on several parameters of mitochondrial function in primary adipocytes UCP1-independently, which may contribute to the acquisition of a browning phenotype.

Discussion

Therapeutic activation and recruitment of thermogenic fats to increase energy expenditure and combat obesity have not been fruitful, due to an incomplete understanding of how physiological factors are integrated during changes in the environment, such as temperature and nutrients. Although UCP1 has been identified as a key thermogenic regulator, thermogenic mechanisms beyond UCP1 have recently been uncovered in both brown and beige adipocytes [24]. Mice deficient in UCP1 are a well-suited animal model to investigate UCP1-independent mechanisms and to study human obesity, since adults with obesity express only minor amounts of UCP1. In this study, as humans spend the vast majority of their lives at thermoneutrality, we studied both WT and UCP1 KO mice under this condition. Comparing them with cold-adapted mice housed at ambient temperature, we show the indispensable role of temperature in mediating the phenotype of DIO mice and the thermogenic profile in SAT independently of UCP1. The current study opens a new window for adults: obesity induced by inactivation and depletion of UCP1-dependent thermogenic fat may be treated by hypothermal exposure.

Much evidence emphasizes the importance of omega-3 PUFAs and their metabolites in activating thermogenic fats and UCP1 expression. It has been noted that omega-3 PUFAs induce brown and beige adipocyte differentiation via the activation of G protein-coupled receptor 120 [25,26]. A more recent study reported that cold and β3-adrenergic stimulation promote the release of 12-hydroxyeicosapentaenoic acid (12-HEPE), an omega-3 PUFA metabolite, in mouse BAT to regulate cold adaptation and glucose metabolism [27]. However, the metabolic and thermogenic outcomes of omega-3 PUFAs such as EPA on brown or beige fat function, independently of UCP1, are still unknown. In the current study, we focused on investigating the role of temperature in regulating UCP1-independent molecular networks of thermogenesis and the function of EPA in mediating beige adipocyte development in SAT. We demonstrate a genotypic difference in response to EPA in the DIO and SAT browning of male mice kept at different temperatures.

It should be noted that environmental temperature leads to a drastic alteration in the importance of UCP1 for metabolic outcomes in animal studies, such as body weight, adiposity, energy expenditure, and others. To "humanize" the thermal physiology of the mouse and mimic the thermoneutral conditions that humans live in, thermoneutral temperatures have been applied in mouse experiments [28].
The absence of UCP1 in B6 mice kept at thermoneutrality makes them prone to DIO, due to the lowest levels of heat generation needed to maintain homeothermy [6,29]. At sub-thermoneutral temperatures, UCP1-deficient mice are resistant to DIO due to the activation of thermogenic mechanisms alternative to UCP1, which seem to be less efficient energy-wise, meaning that more energy is expended to produce the same amount of heat that UCP1 would produce [17,21,30]. Our study reproduced this robust, paradoxical phenomenon in HFD-fed UCP1 KO mice housed at ambient and thermoneutral temperatures. Additionally, the UCP1 KO mice housed at ambient temperature, compared to WT mice, consumed more food and displayed a significant decrease in body weight in both diet groups. Consistent with our findings, a recent study characterized the impact of housing temperature on energy homeostasis and food intake. The authors observed that the energy expenditure of DIO mice decreases by 30% from 22 °C to 30 °C without a change in food intake, leading to higher body weight and fat at 30 °C [31].

On the other hand, although our data reveal no significant impact of EPA on food consumption, body weight, or adiposity, glucose clearance was enhanced in both WT and UCP1 KO mice at both temperatures on the EPA diet. In addition, although insulin tolerance was not different between the diets, EPA-fed mice were more insulin-sensitive, as indicated by the reduced HOMA-IR. In response to the temperatures, beige adipocyte development and the activity of beige adipocytes were associated with systemic glucose homeostasis and insulin sensitivity [32,33]. Studies in animals and humans have reported that the dysfunction of thermogenic fat negatively impacts insulin resistance and T2D [34,35]. In line with the above studies, we observed a reduction in glucose tolerance and fasting insulin level in HFD-fed UCP1 KO mice at thermoneutrality. However, at ambient temperature, HFD-fed UCP1 KO mice have a rate of glucose clearance comparable to that of WT mice, along with similar levels of basal blood glucose and insulin. Given the browning of SAT in UCP1 KO mice at ambient temperature, the above beneficial effects may be mediated by the newly discovered glycolytic beige adipocytes in the SAT [35].

The effect of EPA on reducing HFD-induced insulin resistance is in part associated with the anti-inflammatory effect of EPA and the levels of cytokines in serum. In agreement with our previous study [36], mice fed with EPA displayed higher plasma levels of adiponectin and decreased levels of leptin and resistin. Adiponectin is an insulin-sensitizing and anti-inflammatory protein secreted by white fat [37], and several studies support an association between circulating adiponectin and the risk of developing T2D [38,39]. It was surprising that KO mice at thermoneutrality expressed higher adiponectin than WT mice despite their increased body weight, indicating that adiponectin expression in KO mice improved insulin tolerance independently of weight change. Additionally, it has been proposed that the anti-inflammatory properties of EPA improve leptin sensitivity and reduce resistin levels [40]. Although leptin regulates feeding behavior and leptin-deficient mice show hyperphagia [41], circulating leptin positively correlates with fat mass, and leptin resistance occurs [42]. Additionally, resistin has been shown to induce insulin resistance in mice [43] and to directly counter the anti-inflammatory effects of adiponectin [44].
We report remarkable decreases in leptin and resistin levels in mice fed with EPA at both temperatures, despite limited body weight reduction, which confirms that the anti-inflammatory effects of EPA are independent of UCP1, adiposity, and environmental temperature. On the other hand, compared to WT, KO mice have lower leptin and resistin levels only at ambient temperature, possibly due to lower fat mass, but not at thermoneutrality. Collectively, these data delineate that EPA supplementation mediates insulin tolerance and obesity-induced inflammation entirely in a UCP1- and temperature-independent manner.

Our study pinpoints temperature as a crucial mediator of DIO resistance via SAT browning independently of UCP1, and EPA accelerates this process. On the transcriptional level, genes identified as browning signatures, such as Cidea, Pgc1α, and Dio2, were exclusively upregulated in the SAT of UCP1 KO mice housed at ambient temperature, demonstrating that beige cells can be formed without UCP1. Additionally, the upregulation of other genes involved in lipid metabolism, such as Cpt1b and Pdk4, and in the respiratory chain, such as Cox7a1 and Gpd1, elucidates temperature-induced lipid and glucose turnover [45] and possibly futile energy cycling [46]. Importantly, a recent study found that PM20D1 [15], an endogenous uncoupler, plays an important role in mediating metabolic profiles. Mice lacking PM20D1 are significantly more glucose intolerant and insulin resistant than the WT control in response to an HFD [47]. Our study, for the first time, reports that the gene expression of Pm20d1 was selectively upregulated in UCP1 KO mice in a temperature-dependent manner. However, genes of calcium cycling (Serca2b) and calcium influx (Trpv2), associated with recently proposed alternative thermogenic mechanisms [21,48], were not affected by the absence of UCP1 in the current study. In the in vitro study, cells harvested from UCP1 KO mice expressed lower levels of thermogenic genes than cells harvested from WT mice. We also found that the absence of UCP1 leads to decreased oxygen consumption in differentiated adipocytes, due to the deficiency of UCP1-derived respiration [49]. The above evidence further confirmed the indispensable role of cold stimulation in inducing SAT thermogenesis and protecting mice from DIO without UCP1.

With respect to EPA, Bargut et al. reported that in male B6 mice at 20 °C, EPA supplemented at 2% of total energy induced markers of browning and thermogenic factors in SAT [50]. Additionally, in a human SAT-derived adipocyte culture study, 20 µM EPA promoted beige adipogenesis by improving mitochondrial function and the expression of Ucp1 and Cpt1b [51]. The above results suggest that EPA improves mitochondrial activity, oxidation, and thermogenesis in SAT [52]. In line with the above evidence, our current study demonstrated that the thermogenic effects of EPA were displayed in both WT and KO mice in a temperature-dependent manner. First, EPA enhanced UCP1 expression (gene and protein) in the SAT of WT mice housed at ambient temperature. Similarly, in KO mice housed under the same condition, we observed a pronounced induction of beige fat markers (CIDEA and FGF21) at both the gene and protein levels. Many studies have demonstrated that EPA significantly increases the CIDEA expression level in differentiated human primary white [53] and brown adipocytes [54] to promote thermogenesis.
Additionally, one study showed that FGF21, a key energy homeostasis regulator, can be increased in obese women following a hypocaloric diet supplemented with EPA [55]. These data point to a potential role of EPA in inducing beige fat formation in humans, especially in obese humans with a limited amount of active UCP1. Second, to estimate the effect of EPA on beige adipocyte induction and mitochondrial function, we performed gene expression and respiratory capacity analyses in differentiated SAT-derived adipocytes with or without UCP1. In line with other studies, we observed that EPA increased the Ucp1 gene level in WT adipocytes, along with the enhancement of core thermogenic transcription factors and mitochondrial biogenesis. Importantly, EPA also enhanced the gene expression level of Fgf21 in a UCP1-independent manner. Our findings on mitochondrial respiration showed that EPA has no impact on basal OCR in either WT or KO adipocytes until the uncoupler FCCP is injected to mimic energy demand. Thus, the maximal and spare respiratory capacities of mitochondria in both WT and KO adipocytes were increased after EPA treatment. Other studies showed that the mitochondrial content and function of fat were reduced in both obese humans [56] and rodents [57], and that the content of omega-3 PUFA incorporated into fat was positively correlated with mitochondrial biogenesis, adipose tissue function, and obesity.

Animal Study

Tissues used in this manuscript were from previously reported animal studies [19,20]. Briefly, inbred homozygous UCP1 KO mice and their WT littermates, which have been previously described, were used to perform the experiments. Male mice were maintained at ambient temperature before the experimental procedures. At 5-6 weeks of age, WT and UCP1 KO mice were randomly assigned to different groups (10-12 mice/group), housed in an ambient or thermoneutral environment and fed either an HFD or an HFD supplemented with 36 g/kg diet of EPA-enriched fish oil (Alaskomega, Coshocton, OH, USA; Table S3). Mice had free access to food and water and were individually housed with a 12 h light/dark cycle. Body weight and food intake were recorded weekly. Glucose/insulin tolerance tests and body composition measurements were conducted during the feeding period. After 14 weeks of intervention, mice were euthanized by CO2 inhalation following 5 h of fasting. Blood, SAT, and other tissues were harvested and stored at −80 °C for further analyses. Animal experiments were approved by the Texas Tech University Institutional Animal Care and Use Committee.

Glucose and Insulin Tolerance Tests

GTT and ITT were performed after 11 and 12 weeks of dietary intervention, respectively, following 5 h of fasting. Blood glucose levels were measured at 0, 15, 30, 60, 90, and 120 min after glucose (2 g/kg body weight) or insulin (1 U/kg body weight, Humulin, Indianapolis, IN, USA) injection. A OneTouch Ultra GlucoseMeter (AlphaTrack, North Chicago, IL, USA) was used for blood glucose measurement. The trapezoidal method was used to calculate the area under the curve.

Body Composition

Body fat mass of mice was determined using an EchoMRI™ whole body composition analyzer (EchoMRI LLC, Houston, TX, USA).

SAT Fatty Acid Composition

Direct fatty acid methyl ester synthesis and gas chromatography/mass spectrometry methods were utilized to identify SAT fatty acid concentrations and to validate EPA delivery to the SAT.
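As a minimal illustration of the trapezoidal AUC calculation mentioned above, the following Python sketch computes the area under a GTT curve from the listed time points; the glucose readings are hypothetical example values, not data from this study.

```python
# Minimal sketch of the trapezoidal AUC calculation used for GTT/ITT curves.
# Time points follow the protocol above; the glucose readings are hypothetical.

def trapezoid_auc(times_min, values):
    """Area under the curve by the trapezoidal rule."""
    auc = 0.0
    for (t0, v0), (t1, v1) in zip(zip(times_min, values), zip(times_min[1:], values[1:])):
        auc += 0.5 * (v0 + v1) * (t1 - t0)
    return auc

times = [0, 15, 30, 60, 90, 120]            # min after glucose/insulin injection
glucose = [110, 280, 240, 180, 150, 130]    # mg/dL, example values only

print(f"AUC = {trapezoid_auc(times, glucose):.1f} mg/dL * min")
```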
Mouse Stromal Vascular Fraction (SVF) Isolation, Maintenance, and Differentiation

To obtain a single-cell suspension, SAT from mice was weighed, minced, and digested with collagenase D (Sigma, St. Louis, MO, USA), and then filtered through a 70 µm nylon mesh (Spectrum, Rancho Dominquez, CA, USA). After centrifuging at 800× g for 10 min, SVF cells were washed, counted, and plated into 6-well plates with Dulbecco's modified Eagle medium (DMEM; Thermo Fisher, Pittsburg, PA, USA) containing 10% fetal bovine serum (Atlas Biologicals, Fort Collins, CO, USA) and 1× penicillin-streptomycin antibiotics (Thermo Fisher Scientific, Waltham, MA, USA). After 2 h, unattached cells were washed away and removed with 1× phosphate-buffered saline (Sigma-Aldrich, St. Louis, MO, USA). SVF cells were cultured in the above medium until they reached 80-90% confluence. Growth medium supplemented with 0.5 mM methylisobutylxanthine, 1 µM dexamethasone, and 10 µg/mL insulin (Sigma-Aldrich, St. Louis, MO, USA), followed by growth media with 10 µg/mL insulin with or without 100 µM EPA, was used to induce and differentiate the SVF cells.

Mitochondrial Respiration

Mouse primary SAT-derived SVF was isolated from WT and UCP1 KO mice and seeded into 0.2% gelatin (wt/vol) coated 24-well XF cell culture microplates (Seahorse Bioscience, Billerica, MA, USA). Differentiated SVF cells were treated with or without 100 µM EPA. Then, the culture media were changed to the XF assay media (Seahorse Bioscience, Billerica, MA, USA) containing 2 mmol/L sodium pyruvate and 25 mmol/L glucose, and the plates were placed in a 37 °C non-CO2 incubator for 1 h. To determine oxygen consumption rate (OCR) changes in the differentiated SVF cells, an XF Cell Mito Stress Test Kit (Agilent, Santa Clara, CA, USA) was used, and oligomycin A (1 mmol/L), carbonyl cyanide-4-(trifluoromethoxy) phenylhydrazone (FCCP, 0.3 mM), and antimycin/rotenone (A/R, 1 mM each) were injected according to the manufacturer's instructions. Respiration profiles were calculated by the following formulas:

Basal respiration = basal measurements − A/R measurements (2)
Maximal respiration = FCCP measurements − A/R measurements (3)
Spare respiration = maximal respiration − basal respiration (4)

RNA Isolation and Quantitative Real-Time PCR

SAT was homogenized in QIAzol® reagent with a 4 mm stainless steel bead using a TissueLyser (Qiagen, Valencia, CA, USA). Total RNA from SAT or cells was extracted using the Quick-RNA™ Miniprep Kit (Zymo Research, Irvine, CA, USA), and cDNA was prepared using the Maxima H Minus First Strand cDNA Synthesis Kit (Thermo Scientific, Grand Island, NY, USA). Gene expression was measured by real-time PCR on a QuantStudio™ system (Thermo Fisher Scientific, Waltham, MA, USA), and data were normalized against the housekeeping gene 18S. The primers were designed using the Oligoarchitech™ Online software, purchased from Sigma-Aldrich (St. Louis, MO, USA), and are listed in Table S4.

Statistical Analyses

Results are presented as means ± SEM. Data were analyzed by performing three-way ANOVA, including main effects for temperature, diet, and genotype and their interactions. If significant, Tukey post hoc pairwise comparisons were conducted, and differences were considered significant at p < 0.05. All statistical tests were performed using GraphPad Prism software.

Conclusions

In summary, for the first time, we report detailed thermogenic effects of EPA in UCP1-deficient mice, which are temperature-dependent. Our findings suggest the profound importance of temperature in inducing SAT browning.
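The respiration-profile arithmetic in formulas (2)-(4) above can be illustrated with a short Python sketch; the OCR readings used here are hypothetical placeholders rather than measured values.

```python
# Sketch of the respiration-profile arithmetic in formulas (2)-(4) above.
# The OCR readings (pmol O2/min) are hypothetical; in practice they would be
# the plate-reader measurements taken before/after each Mito Stress Test injection.

def respiration_profile(basal_ocr, fccp_ocr, ar_ocr):
    basal = basal_ocr - ar_ocr      # (2) non-mitochondrial rate subtracted
    maximal = fccp_ocr - ar_ocr     # (3) FCCP-uncoupled (maximal) rate
    spare = maximal - basal         # (4) spare respiratory capacity
    return {"basal": basal, "maximal": maximal, "spare": spare}

print(respiration_profile(basal_ocr=120.0, fccp_ocr=260.0, ar_ocr=20.0))
```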
The paradoxical resistance to DIO in UCP1 KO mice appeared at ambient temperature and highlighted SAT browning as the driving force. Our analyses revealed the molecular induction of SAT browning in mice housed at ambient temperature, suggesting compensatory thermoregulatory mechanisms independent of UCP1. Surprisingly, EPA supplementation improved insulin resistance, systemic inflammation, and adipose thermogenesis regulation in both genotypes of mice during ambient acclimation. Our study is important for expanding translational research on UCP1-alternative thermogenesis and dietary interventions to treat obesity and T2D. Further investigation of the molecular mechanisms that mediate UCP1-independent and EPA-mediated browning is warranted, including global transcriptomic and pathway analyses of the SAT of UCP1 KO mice in response to temperature and EPA.
2023-05-17T15:16:36.415Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "92201bd8e9bcbe6f05a7da7424c818db9fe39db5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/ijms24108708", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bb4c092e803a36f328852fe5a3bb469d07206988", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
210292782
pes2o/s2orc
v3-fos-license
Analysis on local force of cable tower in low tower cable-stayed bridge

Concrete under the pylon saddles of extradosed cable-stayed bridges is usually subjected to a tremendous cable force, which can easily cause cracking or crushing of the concrete and lead to potential structural safety problems. At present, finite element models for mechanical analysis of the cable saddle are not accurate enough to capture the real stress state. A stress analysis method for the concrete under the cable saddle based on an accurate finite element model is presented in this paper. In order to obtain exact stress results for the concrete under the saddle, the filament dividers in the saddle are modelled one by one, and loading surface elements are established on the surface of each filament divider layer to apply the equivalent surface force layer by layer. In addition, this paper studies the stress characteristics of the lower tower column of the pylon and puts forward structural reinforcement suggestions for it. The results of this paper can provide a reference for the design and construction control of pylons in extradosed cable-stayed bridges.

Introduction

In an extradosed cable-stayed bridge, the external prestressing force of the stay cables is transmitted continuously through the pylon by saddles installed in the pylon. Under the action of the enormous cable force, the saddle can easily induce large local compressive stress in the pylon concrete below it; consequently, local crushing of the concrete and large-scale cracks readily occur at the lower part of the saddle, which poses a hidden danger to structural safety. Domestic and foreign scholars have adopted different methods to simulate and analyse extradosed cable-stayed bridges from different perspectives. Zhang et al [1] used ANSYS to analyse the pylon stress of an extradosed cable-stayed bridge; they used contact elements to simulate the contact between the cable and the steel pipe, but the sub-steel-pipe saddle structure was simplified as a single bundle, ignoring the influence of the external steel pipe, so the interaction of the individual steel strands in the sub-steel-pipe could not be simulated and analysed. Tan et al [2] carried out a model test of the main tower saddle segment of a prestressed extradosed cable-stayed bridge using an improved sub-steel-pipe cable saddle structure, and established a spatial finite element model to analyse the stress distribution of the concrete under the saddle and the stress characteristics of the sub-steel-pipe by applying the actual cable force to the corresponding channels as a surface pressure in the form of a spatial paraboloid. In [3,4], full-scale model tests of the segment in the cable saddle area were carried out in combination with a low-tower cable-stayed bridge project: two kinds of double-sleeve cable saddle bond anchorage structures were selected, and the degree of transmission of the unbalanced cable force was tested by low-cycle repeated loading of a single-side cable. In addition, the mechanical characteristics of the concrete continuum in the cable saddle area were analysed, and the results show that both bond anchorage structures have a strong and stable anti-slip effect. Zhang [5] used the spatial finite element method to calculate the stress of the saddle.
The cable force was converted into a parabolic surface pressure and applied to the corresponding channels; the comparison between the sub-steel-pipe and the double sleeves showed that this approach was closer to the actual situation than the average method and the application method. Zhu [6] took an extradosed cable-stayed bridge as the research object and calculated the stress of the cable saddle by the spatial finite element method; the cable force was converted into a parabolic surface pressure and applied to the corresponding channels. He considered that the actual effect of the deviation of the tunnel construction on the tower in the operation stage was better captured, and closer to the actual situation, than with the previous average method and application method. Wang [7] took a twin-tower, single-cable-plane, three-span prestressed concrete extradosed cable-stayed bridge as the research background and used Midas to establish a spatial finite element model of the pylon and saddle area; under the maximum cable force, the spatial force of the whole pylon and the local stress of the concrete in the pylon saddle area were analysed. Liu [8] used a full-scale model test to study the value and distribution of the splitting stress in the concrete of the main tower; a spatial finite element solid model was established using ANSYS to analyse the stress distribution under the saddle and the stress performance of the filament divider under cable force loading.

From the above discussion, although many scholars have carried out various solid-model analyses of the stress distribution of the saddle and of the local concrete under the saddle in model tests of extradosed cable-stayed bridges, and have gained a macroscopic grasp of the stress state of the local concrete around the saddle, some problems remain: the saddle models established mostly use blocks of approximate shape, which fail to describe the concrete stress condition accurately under the concentrated action of the steel strands of multiple sub-steel-pipes, so the computed stress distribution of the concrete under the saddle is less accurate. Therefore, this paper uses the ANSYS finite element software to analyse the interaction between the saddle and the concrete of the pylon by establishing an accurate model of the cable saddle and pylon and applying the cable force to the corresponding channels by the average method, which can provide a reference for the design and construction control of pylons in extradosed cable-stayed bridges.

General situation of the engineering

As shown in figure 1, the studied extradosed cable-stayed bridge has a span arrangement of 90+165+90 m. Because the pylon concrete beneath the cable saddle carries a large force, it is easy for the concrete of the bridge tower to be crushed locally. This paper carries out a force analysis of the locally stressed saddle area of the bridge. The detailed geometric structure of the cable tower is shown in figure 2, including the elevation, side, and section views. The interaction between the steel cable saddle and the concrete of the cable tower is analysed by accurately establishing a model of the steel saddle and the concrete pylon. In addition, the stress characteristics of the lower tower column of the pylon are studied, and structural reinforcement suggestions are put forward for it.
Finite element analysis of cable saddle and tower column

The steel cable saddle is modelled with SOLID45 elements; each sub-steel-pipe is represented by solid elements with material properties defined as steel, and the holes in the sub-steel-pipe are ignored. The SOLID45 element is also adopted for the concrete of the cable tower, and node coupling is used for the contact between the steel saddle and the concrete tower. Because the sub-steel-pipes of the cable saddle are considered in detail, a very large number of elements would be required if the sub-steel-pipes were modelled accurately with solid elements, which would make it difficult to establish a model of the complete cable tower. Therefore, the analysis range of the cable tower selected in this paper covers cables C1 and C2 (the cables are numbered C1-C12 from the side of the tower towards mid-span) and the corresponding cable saddles; the steel wire inside the steel cable saddle is not simulated, and the force acting on the cable saddle is applied as a surface force. The internal forces of the pylon are simulated by the pylon equivalent weight and the vertical force component of the stay cables, and the surface forces are applied to the finite element model; the calculated values are shown in table 3. Since many elements have to be meshed, a 1/4 model is used for the calculation and symmetrical constraints are imposed on the planes of symmetry; the established finite element model is shown in figures 3 and 4. As shown in figure 5, the normal surface force is applied through surface elements generated on the sub-steel-pipes. The load on the model surface must also include the self-weight of the tower, which is 0.260 N/mm², and the vertical force of the cables, which produces a pressure load on the model surface of 4.736 N/mm². The total of the two loads is 4.996 N/mm².

(1) The calculation results at the contact position between the lower part of the cable saddle and the concrete are shown in figures 6-10. From these results it can be seen that a regular vertical (along the Y axis) compressive stress distribution appears in the lower part of the saddle under the cable saddle force, with a compressive stress of 7.31 MPa to 11.41 MPa. The inner side of the saddle arc is basically under compression in the X direction; a tensile stress in the X direction, with a maximum of 1.15 MPa, appears at the junction of the arc section and the straight section of the cable saddle.

(2) The stress calculation results at the variable section of the main tower are shown in figures 11-16. As can be seen from figures 11 and 12, considering the force acting on the whole cable tower in the X direction, there is a large area of tensile stress along the X axis at the variable cross section of the cable tower (the junction between the upper tower column and the lower tower column), as shown in figure 13. The tensile stress ranges from 0.41 MPa to 8.10 MPa (mean value 4.2 MPa); in a small area at the top of the arc section the maximum tensile stress reaches 8.10 MPa, which is due to the change of section at the junction of the upper and lower pylons and the eccentric force at the lower pylon section. Figure 15 shows the overall Y-direction stress of the pylon, and figure 16 shows the overall deformation of the pylon in the X direction.
As can be seen from figure 15, the Y-direction stress in the upper part of the lower pylon is larger on the inner side than on the outer side, while in the lower part of the lower pylon the opposite holds: the Y-direction stress on the inner side is smaller than on the outer side. From figure 16, it can be seen that the bending deformation of the lower pylon in the X direction is largest, with a value of 0.3 mm, at a distance of about 5-6 meters from the bridge deck. Prestressed tensioning or transverse reinforcement infill treatment is needed to prevent crack propagation; alternatively, a tension member can be installed at the lower tower column 5-6 meters from the bridge deck, which can also reduce the large transverse tensile stress at the variable section.

Figure 16. Integral deformation diagram of the cable tower along the X axis.

Conclusion

In this paper, a stress analysis method for the concrete under the saddle based on an accurate finite element model is presented. The feasibility of this method is verified by the analysis of an engineering case, and the following conclusions are drawn:
• Under the action of the local cable saddle force and the upper load, a regular vertical compressive stress distribution (along the Y axis) appears at the lower part of the cable saddle, with values of 7.31 MPa to 11.41 MPa. The inner side of the cable saddle arc is basically under compression in the X direction. At the junction of the arc section and the straight section of the saddle, tensile stress in the X direction appears, with a maximum of 1.15 MPa, which meets the strength design requirements of C50 concrete.
• From the overall X-direction stress of the pylon, there is a large area of tensile stress along the X axis at the variable section of the pylon (the junction of the upper and lower pylons); this is due to the change of section at the junction of the upper and lower pylons and the eccentric force at the lower pylon section. Prestressed tensioning or transverse reinforcement is needed to prevent crack propagation.
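As a quick numerical check of the surface-load superposition used for the finite element model above (tower self-weight of 0.260 N/mm² plus a cable-induced pressure of 4.736 N/mm²), the following Python sketch reproduces the total of 4.996 N/mm²; the contact area used to convert pressure to force is a hypothetical value for illustration only.

```python
# Quick check of the surface-load superposition described in the analysis above.
# The two pressures are taken from the text; the contact area is a hypothetical
# value used only to illustrate conversion from pressure (N/mm^2 = MPa) to force.

tower_self_weight_pressure = 0.260   # N/mm^2, from the text
cable_vertical_pressure = 4.736      # N/mm^2, from the text
total_pressure = tower_self_weight_pressure + cable_vertical_pressure
assert abs(total_pressure - 4.996) < 1e-9   # matches the 4.996 N/mm^2 quoted above

contact_area_mm2 = 500.0 * 400.0     # hypothetical loaded area of one channel face
total_force_kN = total_pressure * contact_area_mm2 / 1000.0
print(f"total pressure = {total_pressure:.3f} N/mm^2, "
      f"force on assumed area = {total_force_kN:.1f} kN")
```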
2019-10-31T09:15:13.678Z
2019-10-25T00:00:00.000
{ "year": 2019, "sha1": "806a18d45bab0f7b1076e9a365f9770058e36d26", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/657/1/012021", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "de3b26362c069c2ae1c3b41d4788fe9712a159c4", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Geology" ] }
17080820
pes2o/s2orc
v3-fos-license
Comments on spin-orbit interaction of anyons

The coupling of non-relativistic anyons (called exotic particles) to an electromagnetic field is considered. Anomalous coupling is introduced by adding a spin-orbit term to the Lagrangian. Alternatively, one has two Hamiltonian structures, obtained by either adding the anomalous term to the Hamiltonian, or by redefining the mass and the NC parameter. The model can also be derived from its relativistic counterpart.

Introduction

Anyons (by which we mean here a particle in the plane which carries fractional spin) [1,2] with anomalous gyromagnetic ratio have recently been considered [3,4], either in Souriau's [5] symplectic framework or in a novel "enlarged Galilean" one. Both approaches are somewhat unfamiliar to most physicists. In this Letter we continue our investigations using more conventional methods, close to the spirit of Ref. [6].

Exotic particles with minimal electromagnetic interaction

A curious fact known for thirty years but only investigated in more recent times is that the planar Galilei group admits a two-fold "exotic" central extension, labeled by m (the mass) and a second, "exotic" parameter κ [7]. Physical realizations of this symmetry have been presented independently [8,9]; both can be obtained from relativistic anyons as "Jackiw-Nair" (JN) limits [10,11]. The first of these models, referred to as the "extended exotic particle", uses an acceleration-dependent Lagrangian [8]. In terms of the (external) momenta P_i and suitably defined external and internal coordinates X_i and Q_i (i = 1, 2) [11,12], the model is conveniently described by a first-order Lagrangian in which the non-commutative parameter θ = κ/m² appears. Q² is a constant of the motion. When Q² = 0, the internal space reduces to a point, and we recover the "minimal" exotic particle of [9]. We first consider the extended case, Q_i ≠ 0. The nontrivial Poisson-bracket relations follow from this Lagrangian.

Such a particle can be coupled minimally to an electromagnetic field in various ways. (i) One possibility [11,12] is to couple to the external part only by adding the usual minimal-coupling expression, which amounts to gauging the global symmetry associated with the electric charge. This amounts to modifying the symplectic structure which determines the non-commutative geometry of the phase space, cf. (3.5) below. (ii) In another scheme [12] the Hamiltonian is modified instead, cf. (2.4), while the non-commutative geometry is unchanged. In such a way the interaction changes the Abelian gauge transformations [12]¹. The two schemes are equivalent in the absence of the exotic structure, θ = 0, but not for θ ≠ 0. Both schemes leave the internal motions uncoupled. They can also be coupled, however, by gauging the additional "internal" global SO(2) symmetry, δQ_i = ϕ ε_ij Q_j, ϕ ∈ R [11]. In scheme (i) the interaction of an "extended exotic particle" with an electromagnetic field is described by the Lagrangian (2.5). An easy calculation then shows that the Lagrangian (2.5) is quasi-invariant with respect to local internal rotations supplemented by a gauge transformation. The coefficient in the interaction term (2.5) is fixed by gauge invariance: it generates internal rotations, {Q², Q_i} = 2θ ε_ij Q_j. The Euler-Lagrange equations are (2.6)-(2.8), where E_i and B are the electric and magnetic fields, and the charge is shifted to e + Q²/2θ. m* = m(1 − eθB) is the effective mass introduced in [9]. Equation (2.8) implies at once that the [squared] length of the internal vector, Q², and hence also the shifted charge, are constants of the motion.
In the general case, the "internal" variable is parallel transported, just like for a particle with nonabelian internal structure [14]. This motion is, however, a mere gauge artifact that could be eliminated by a gauge transformation with ϕ(t) = −t/mθ, which would also remove the (mθ)⁻¹ in (2.8). The only physical quantity is Q². Being unphysical, the motion of the internal variable Q will therefore not be considered in what follows. We only consider the equations (2.6)-(2.7). When Q = 0, we recover the "minimal" exotic particle of [9], coupled to an e.m. field. In the second scheme (ii), the electromagnetic interaction including the internal motion can be obtained, as described in [12], from (2.4) by means of a noncanonical transformation of the phase space variables, supplemented with a classical Seiberg-Witten map between the corresponding gauge potentials. Therefore, in both cases, the additional coupling to the internal motion amounts to replacing the original, "bare" charge by the total charge, e → e + Q²/2θ, whose two parts can't be measured separately.

Anomalous coupling

Anomalous coupling to the electromagnetic field has been studied before [3,6,15,16,17,18]. The traditional rule of nonrelativistic physics, translated into the plane, says that magnetic moment interactions should be introduced by adding a term µB to the Hamiltonian, where µ = egs₀/2m is the magnetic moment. Here g is the gyromagnetic ratio and we denote the nonrelativistic spin by s₀. Here we propose to generalise this rule by also including an electric term, namely by adding (3.1) to (2.5). The equations of motion look rather complicated:
• for g = 0 we plainly recover the previous equations of motion (2.6)-(2.8).
• By (3.2), the velocity and the momentum, Ẋ_i and P_i respectively, are not parallel in general, except for g = 2 and for a constant magnetic field and a linear, central electric field.
• When the fields are not only weak but also constant, eqns. (3.2)-(3.3) reduce to the weak-field, non-relativistic equations (7.1) of [3], i.e., (3.4).

These equations are Hamiltonian. The commutation relations, (3.5), are those of an "ordinary" exotic particle [9], and the spin-orbit term is added to the Hamiltonian, (3.6). Unlike in [4], the "corrected" Larmor frequency only depends on the non-commutative parameter θ but is independent of the gyromagnetic ratio g. Remarkably, the same equations (3.4) can also be derived from another Hamiltonian structure, namely from (3.8)-(3.11). These are indeed the usual "exotic" relations, but with redefined NC parameter and mass, respectively. Thus, for constant external fields, the anomalous electric coupling term in (3.1) (or (3.6)) can be suppressed by redefining the parameters, yielding the same equations (2.6)-(2.7) as in the minimal model. The constant term µB can actually be dropped from both (3.6) and (3.11).

Relation to relativistic anyons

The anomalous theory of Ref. [3] was based on replacing the (relativistic) "bare" mass by a field-dependent expression, m → M = M(eF·S), where S_{αβ} is the spin tensor and F·S = −S_{αβ}F^{αβ} [15,16]². Now in the plane the usual requirement S_{αβ}P^β = 0 implies that the spin is determined by the momentum. In [3] the choice (4.2) was made. It should be stressed, however, that (4.2) is a mere Ansatz, and does not follow from any first principle. In fact, any function M = M(eF·S) would yield a consistent theory [6,15,16]. For example, (4.3) could be (and has been [17]) used. In the weak-field limit, (4.3) yields the same equations as (4.2), since the two mass functions coincide when egF·S/m²c² ≪ 1.
In what follows, we shall use the simpler expression (4.3). Then the procedure followed in [3] is readily seen to be equivalent, in the weak-field limit, to adding an anomalous spin-field term to Cartan's variational 1-form (whose integral is the classical action [5]). But we can parametrize our curves with proper time, (P_α dX^α)/Mc² = dτ [3]. The extra term has, therefore, the same effect as adding

∆H = (ges/4mM) ε^{αβγ} P_α F_{βγ}    (4.5)

to the Hamiltonian, since ∆α = −∆H dτ. In a local Lorentz frame, putting s = θm²c² + s₀ allows us to infer the extra piece added to the Lagrangian, using P₀ ≈ Mc² and m/M ≈ 1 in the NR limit. Removing the first, divergent term and dropping the last one, which goes to zero as c → ∞, we end up, in the JN limit and neglecting higher-order terms, with L_anom with Q = 0 in (3.1)³. Alternatively, the spin-orbit term H_anom in (3.6) is the JN limit of (4.5). The two possibilities, i.e., either changing the kinetic term or adding a spin-orbit piece to the Hamiltonian, are the relativistic counterparts of the two Hamiltonian structures we found in the non-relativistic context.

Semiclassical Dirac particle

Returning to the non-relativistic setting, let us illustrate our theory on a related problem. In a recent paper [20], Bérard and Mohrbach consider a 3D Dirac particle in a constant electric field and show that, semiclassically, the particle admits, to order c⁻², an anomalous velocity relation [supplemented with the Lorentz force law Ṗ_i = eE_i], where σ is the spin vector. Assuming cylindrical symmetry and spin-polarized electrons, σ_i = −sδ_i3, the JN limit s/m²c² → θ yields (5.2), which is the first equation in (3.4) with B = 0 and with anomalous gyromagnetic factor g = 1. This value has already been found before [21]. To leading order in c⁻¹, the relativistic Hamiltonian behaves as in (3.6). Note that the naive Hamilton equation, Ẋ_i = ∂H/∂P_i, would contain a factor (+1/2) instead of (−1/2) in front of the anomalous term in (5.2). The correct coefficient is recovered when the exotic part is taken into account. Either of the Hamiltonian structures indeed yields the correct equations for any value of the real parameter α: (3.5)-(3.6) corresponds to α = 0, and (3.8)-(3.11) corresponds to α = 1/2.

Further generalizations

A slightly modified model is obtained by replacing the momentum P_i in (3.1) by the velocity Ẋ_i, giving (6.1). Magnetic moment interactions of such a kind have been considered before [18]. Eqn. (6.1) is also reminiscent of the interaction of a magnetic moment with an electric charge [22]. Adding (6.1) to our Lagrangian (2.5) amounts indeed to changing the potentials in (2.6)-(2.7). Eliminating the momenta in the new equations of motion and dropping terms which contain second derivatives of the field, we obtain (6.2), with the new magnetic field B′ replacing B in the new effective mass, m → m*′ = m(1 − eθB′). For the sake of comparison, neglecting terms which are higher-order in the fields, from (3.2)-(3.3) we would instead get an expression that is readily transformed into the form (6.2). In a weak and slowly varying field, the two models only differ in the form of the effective mass. It is worth remembering that anomalous velocity relations of the type studied here have been considered in the context of the Anomalous Hall Effect [23] and in the semiclassical theory of the Bloch electron [24].
Equations (2.6)-(2.7), or their "anomalous" generalization for constant external fields, (3.4), are indeed a special case of the more general system (6.3)-(6.4), where E = E₀(P) − BM(P) is the total energy, with E₀ and M denoting the Bloch band energy and the magnetization, respectively. These equations can be derived, under quite general assumptions, by semiclassical calculations applied to the dynamics of wave packets in a two-dimensional crystal [24]. Note that the non-commutative parameter has been promoted to a function of the momentum [25]. The system (6.3)-(6.4) can actually be reduced to first-order equations for the P_i alone,

(1 − eBθ(P)) Ṗ_i = eB ε_ij ∂E/∂P_j + eE_i,    (6.5)

which can be integrated by solving with respect to P₁, say, using the conserved quantity (6.6). Thus the problem is reduced to quadratures. Note that eqn. (6.5) is actually Hamilton's equation with C as the Hamiltonian and the Poisson bracket (3.5c) in P-space alone. In conclusion, we mention that another way of introducing anomalous coupling for constant e.m. fields has been advocated by us in [4]. There we introduced an "enlarged" planar Galilei group, which incorporates field variables besides space-time. Interestingly, the square of (6.6) is proportional to a Casimir of the enlarged symmetry algebra in [4], and anomalous coupling can then be achieved by adding this Casimir to the Hamiltonian.
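As an illustration of how the reduced momentum equation (6.5) could be integrated in practice, the following Python sketch assumes a quadratic dispersion E₀(P) = P²/2m, vanishing magnetization, a constant non-commutative parameter θ, and constant E and B fields; all parameter values are arbitrary and chosen only for demonstration, and the simple forward-Euler scheme is a numerical stand-in rather than the reduction to quadratures discussed above.

```python
# Minimal numerical sketch of the reduced momentum equation (6.5),
# (1 - e*B*theta) dP_i/dt = e*B*eps_ij dE/dP_j + e*E_i,
# assuming a quadratic dispersion E0(P) = P^2/2m, zero magnetization (so E = E0),
# a constant noncommutative parameter theta, and constant E, B fields.
# All parameter values below are arbitrary and chosen only for illustration.

import numpy as np

e, m, theta, B = 1.0, 1.0, 0.1, 0.5
E_field = np.array([0.2, 0.0])
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])    # eps_ij, planar Levi-Civita symbol

def dP_dt(P):
    dE_dP = P / m                             # gradient of E0(P) = P^2 / 2m
    return (e * B * eps @ dE_dP + e * E_field) / (1.0 - e * B * theta)

P = np.array([1.0, 0.0])
dt, steps = 1e-3, 10000
for _ in range(steps):                        # simple forward-Euler integration
    P = P + dt * dP_dt(P)

print("P after integration:", P)
```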
2014-10-01T00:00:00.000Z
2005-02-21T00:00:00.000
{ "year": 2005, "sha1": "2dc2b4413d51659e36e7fe1ef6731adf60335f57", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-th/0502181", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "30cad9c8f30c3f832059167e4b170cacc4398f62", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
257659110
pes2o/s2orc
v3-fos-license
Characterization of an Aplysia vasotocin signaling system and actions of posttranslational modifications and individual residues of the ligand on receptor activity The vasopressin/oxytocin signaling system is present in both protostomes and deuterostomes and plays various physiological roles. Although there were reports for both vasopressin-like peptides and receptors in mollusc Lymnaea and Octopus, no precursor or receptors have been described in mollusc Aplysia. Here, through bioinformatics, molecular and cellular biology, we identified both the precursor and two receptors for Aplysia vasopressin-like peptide, which we named Aplysia vasotocin (apVT). The precursor provides evidence for the exact sequence of apVT, which is identical to conopressin G from cone snail venom, and contains 9 amino acids, with two cysteines at position 1 and 6, similar to nearly all vasopressin-like peptides. Through inositol monophosphate (IP1) accumulation assay, we demonstrated that two of the three putative receptors we cloned from Aplysia cDNA are true receptors for apVT. We named the two receptors as apVTR1 and apVTR2. We then determined the roles of post-translational modifications (PTMs) of apVT, i.e., the disulfide bond between two cysteines and the C-terminal amidation on receptor activity. Both the disulfide bond and amidation were critical for the activation of the two receptors. Cross-activity with conopressin S, annetocin from an annelid, and vertebrate oxytocin showed that although all three ligands can activate both receptors, the potency of these peptides differed depending on their residue variations from apVT. We, therefore, tested the roles of each residue through alanine substitution and found that each substitution could reduce the potency of the peptide analog, and substitution of the residues within the disulfide bond tended to have a larger impact on receptor activity than the substitution of those outside the bond. Moreover, the two receptors had different sensitivities to the PTMs and single residue substitutions. Thus, we have characterized the Aplysia vasotocin signaling system and showed how the PTMs and individual residues in the ligand contributed to receptor activity. We sought to study an oxytocin/vasopressin signaling system in the gastropod mollusc Aplysia californica. Aplysia is an experimentally-advantageous system and has provided fundamental insight into the neural basis of motivated behaviors (Jing and Weiss, 2002;Jing et al., 2004;Sasaki et al., 2009;Jing et al., 2010;Sasaki et al., 2013;Zhang et al., 2020;Bedecarrats et al., 2021;Evans et al., 2021;Due et al., 2022;Wang et al., 2023), learning and memory (Sieling et al., 2014;Byrne and Hawkins, 2015;Orvis et al., 2022) and neuromodulation (Cropper et al., 2018b;Zhang et al., 2022), including neuropeptides (Livnat et al., 2016;Zhang et al., 2017;Do et al., 2018;Zhang et al., 2018;Chan-Andersen et al., 2022) and receptors (Bauknecht and Jekely, 2015;Checco et al., 2018;Guo et al., 2022;Jiang et al., 2022;Zhang et al., 2022). The first evidence for the presence of oxytocin/vasopressin-related neuropeptide in protostomes comes from an early immunohistochemical study (Remy et al., 1979), and the later identification of an arginine vasopressin-like diuretic hormone (Proux et al., 1987), both in insects. 
In gastropod molluscs, five types of vasopressin/oxytocin homologs have been identified in venoms of different species of Conus, two of which have been named as Lys-Conopressin G in Conus geographus (CFIRNCPKG-NH2) and Lys-Conopressin S in Conus striatus (CIIRNCPRG-NH2) (Cruz et al., 1987;Lewis et al., 2012;Lebbe and Tytgat, 2016), although currently there were no reports of endogenous vasopressin-like peptides in these cone snails. Early studies in Aplysia have shown that endogenous oxytocin/ vasopressin-related substances are present in this species (Moore et al., 1981), and based on mass, its sequence appears to be consistent with Lys-conopressin G of Conus (Moore et al., 1981;Thornhill et al., 1981). Immunohistochemical studies of the Aplysia central nervous system suggested that VP-like immunoreactivity is restricted to a single neuron in the abdominal ganglion and two small neurons located bilaterally in each pedal ganglion (Martinez-Padron et al., 1992). VP/OT-type neuropeptides decrease the spiking frequency of the gill motor neuron L7 in the abdominal ganglion and accordingly inhibit the gill-withdrawal reflex . It is also reported that VP/OT-type neuropeptides increase the spiking frequency of the abdominal R15 neuron (Lukowiak et al., 1980). Despite the progress described above, the exact sequence of Aplysia conopressin remains to be determined, and neither the precursor nor the receptors have been described in Aplysia. Previous work has identified the precursor (Van Kesteren et al., 1995a) and one vasopressin-like receptor in gastropod mollusc Lymnaea (van Kesteren et al., 1995b). Later work identified one additional receptor in Lymnaea (van Kesteren et al., 1996). In cephalopod mollusc, octopus, it has been shown to have two members of vasopressin/oxytocin peptides derived from two different precursors, and three corresponding receptors Takuwa-Kuroda et al., 2003). Previous work (Tessmar-Raible et al., 2007;Bauknecht and Jekely, 2015;Williams et al., 2017) has also shown that there are two receptors for a vasopressin-like peptide in annelid Platynereis dumerilii, which together with molluscs, belong to superphylum: lophotrochozoa. Thus, there may be two or more receptors in Aplysia. Here, we first cloned the precursor for Aplysia vasopressin, which provided direct evidence for its exact sequence. Although the sequence is identical to conopressin G, we chose to name the peptide Aplysia vasotocin (apVT) instead of conopressin G because conopressin G is only present in the venom of cone snails. This naming convention has been adopted previously in P. dumerilii (Bauknecht and Jekely, 2015). We then identified two receptors for apVT, i.e., apVTR1 and apVTR2. We also explored the roles of each residue in apVT by single residue alanine substitution, as well as post-translational modifications (PTMs), i.e., the disulfide bond and C-terminal amidation, to the activation of the two receptors. Our results indicate that the disulfide bond, C-terminal amidation, and most residues are important for the activation of the receptors. Moreover, the two receptors might have different sensitivities to the PTMs and single residue substitution. Thus, we have characterized the Aplysia vasotocin signaling system and provided an important basis for the study of its physiological roles. Subjects and reagents Experiments were performed on A. californica (100-350 g) obtained from Marinus, California, United States. 
Aplysia are hermaphroditic (i.e., each animal has reproductive organs normally associated with both male and female sexes). Animals were maintained in circulating artificial seawater at 14°C-16°C and the animal room was equipped with a 24 h light cycle with the light period from 6:00 a.m. to 6:00 p.m. All chemicals were purchased from Sigma-Aldrich unless otherwise stated. Bioinformatic analysis of peptide precursors and receptors We first used NCBI to find specific sequences of interests. In addition, we also searched AplysiaTools databases (Dr. Thomas Abrams, University of Maryland, United States) to obtain additional sequences for comparison. These latter databases (http://aplysiatools.org) include databases for the Aplysia transcriptome and Aplysia genome. The open reading frames (ORFs) from the full-length cDNA sequences of the apVT precursor and putative receptors were obtained using ORF Finder (https://www.ncbi.nlm.nih.gov/ orffinder/). For the apVT precursor, the putative signal peptide was predicted using SignalP-5.0 (http://www.cbs.dtu.dk/services/ SignalP/) and the putative peptides encoded by the apVT precursor were predicted using NeuroPred (http://stagbeetle. animal.uiuc.edu/cgi-bin/neuropred.py). We also compared the apVT with those of other species using BioEdit software and generated a frequency plot of each amino acid (aligned from the c-terminus) using Weblogo software (http://weblogo.berkeley.edu/ logo.cgi). For the putative apVT receptors, transmembrane domains were predicted using TMHMM Server v. 2.0 (http://www.cbs.dtu. dk/services/TMHMM/). For proteins that were difficult to annotate using blast, we also used the Pfam database (http://pfam.xfam.org/ search#tabview=tab1) to determine what type of protein it is. The phylogenetic trees of sequences from different species were constructed by MEGA X software (https://www.megasoftware. net/) using the maximum likelihood method with 1,000 replicates. For Figure 4B, we used the "Parathyroid hormone peptide receptor_C.gigas" as an out-group, and LG + G + F model to generate our final tree; for Figure 6, the "RYamide Receptor Drosophila melanogaster" was used as an out-group, and LG + F + G + I model was performed which was different from Figure 4B. The selection of the models was based on the results of an initial MEGA analysis. RNA extraction After anesthesia with 30%-50% of the body weight with 333 mM MgCl 2 , Aplysia cerebral, pleural-pedal, buccal, and abdominal ganglia were dissected out and maintained in artificial seawater containing the following (in mM): 460 NaCl, 10 KCl, 55 MgCl 2 , 11 CaCl 2 , and 10 HEPES buffer, pH 7.6, in a dish lined with Sylgard (Dow Corning). RNA was prepared from the Aplysia ganglia using the TRIzol reagent method. Specifically, the dissected ganglia were placed into 200 μL TRIzol (Sigma, T9424) and stored at −80°C until use. The frozen ganglia in TRIzol were thawed and homogenized with a plastic pestle, then TRIzol was added to a total volume of 1 mL, which were incubated at room temperature for 10 min. Then, 200 μL chloroform was added, and the solution was mixed thoroughly by a shaker, and let stand on ice for 15 min. The solution was centrifuged (12,000 × g, 4°C, 15 min), and the supernatant was added to an equal volume of isopropanol. The tube was shaken gently by hand and let stand at −20°C for 2 h. 
After 2 h, it was centrifuged (12,000 × g, 4°C, 15 min) again, the supernatant was discarded, 1 mL of 75% ethanol/water was added, and the centrifuge tube was shaken gently by hand to suspend the pellet. It was centrifuged (12,000 × g, 4°C, 10 min), the supernatant was discarded, and the precipitate was dried at room temperature for 5-10 min. Finally, 30 μL of nuclease-free water was added to dissolve the RNA pellet, and the RNA concentration was determined with a Nanodrop ND-1000 spectrophotometer (Thermo Fisher Scientific).

Reverse transcription
Using the extracted RNA as a template, cDNA was synthesized by reverse transcription using the PrimeScript RT Master Mix Kit (Takara, RR036A) according to the manufacturer's instructions and then stored at −20°C until use. The synthesized first-strand cDNA served as a template for subsequent PCR.

PCR
The synthesized cDNA above was used as a template for PCR. Each pair of specific primers was designed (Supplementary Table S1) in Primer Premier 6 and Oligo7, based on the protein-coding sequences of the apVT precursor and putative receptors. The PCR reaction was performed with 98°C/2 min pre-denaturation; then 35 cycles of 98°C/10 s denaturation, ~60°C (depending on the specific primers; see Supplementary Table S1)/15 s annealing, and 72°C/30 s extension; followed by a 72°C/5 min final extension. The PCR products were subcloned into the vector pcDNA3.1(+) and sequenced to ensure that the sequence was correct.

IP1 accumulation assay
The inositol monophosphate (IP1) accumulation assay measures the concentration of IP1, which is hydrolyzed from the second messenger inositol trisphosphate (IP3). IP3 is generated by the Gαq pathway when a G-protein coupled receptor (GPCR) expressed in CHO-K1 cells is activated by an appropriate ligand. To express the putative Aplysia receptors transiently in CHO-K1 cells, the cDNA was cloned into the mammalian expression vector pcDNA3.1(+). CHO-K1 cells (Procell, CL-0062) were cultured in F-12K medium (Gibco, 21127-022) with 10% fetal bovine serum (Genial, G11-70500) at 37°C in 5% CO2. Transfection experiments were performed when the cells had grown to 70%-90% confluence. In preliminary experiments, for each dish (60-mm diameter), 3 μg of the putative receptor plasmids [in pcDNA3.1(+)] and 3 μg of the promiscuous Gαq plasmids (also known as Gα16) (Bauknecht and Jekely, 2015; Sharma and Checco, 2021) [in pcDNA3.1(+)] were co-transfected into CHO-K1 cells, mixed with 400 μL of Opti-MEM (Gibco, 11058021), followed by the addition of 15 μL of Turbofect (Thermo Fisher Scientific, R0531). Note that the inclusion of Gα16 will ensure a response no matter what signaling pathway (endogenous or not) a putative receptor might couple to. For Class-A GPCR 3, we could not obtain an IP1 response compared with apVTR1 and apVTR2, suggesting that this protein is not a receptor for apVT. Then, for apVTR1 and apVTR2, in each dish (60-mm diameter), 4 μg of plasmid [in pcDNA3.1(+)] was transfected into CHO-K1 cells, mixed with 400 μL of Opti-MEM (Gibco, 11058021), followed by the addition of 15 μL of Turbofect (Thermo Fisher Scientific, R0531). Under this condition, we could still obtain an IP1 response, suggesting that apVTR1 and apVTR2 are receptors of apVT and can couple to the native Gαq in the CHO cells. Thus, for all subsequent IP1 accumulation assays, 4 μg of the apVTR1 or apVTR2 plasmid [in pcDNA3.1(+)] was transfected without the promiscuous Gαq plasmid. The transfection reagents described above were mixed gently and incubated at room temperature for 15 min.
The DNA/Turbofect mixture was then added dropwise to the dish, and the cells were incubated at 37°C in 5% CO2 overnight. The next day, the cells were trypsinized and reseeded in 384-well tissue culture-treated plates (Corning, 3570) at a density of 20,000 cells/well in F-12K with 10% FBS and incubated at 37°C in 5% CO2 overnight. On the third day, activation of the putative receptor was detected by monitoring IP1 accumulation using an IP1 detection kit (Cisbio, 62IPAPEB) on a Tecan Spark reader. Except for the use of 0.5x reagent, all procedures followed the IP1 detection kit manufacturer's instructions. Peptides were synthesized by Guoping Pharmaceutical (Supplementary Figure S1), aliquoted into EP tubes (50 nmol per tube), and stored at −20°C until use.

Identifying the precursor for Aplysia vasotocin and predicting peptides
To identify a putative precursor and receptors for Aplysia vasotocin (apVT), we began with a bioinformatic analysis. For the precursor, searching "Aplysia conopressin" in NCBI returned one entry: two predicted sequences (accession number XM_013084328.1, which corresponds to a genome sequence, NW_004797283.1; and accession number XM_013084330, which corresponds to the same genome sequence, NW_004797283.1) and a Lys-conopressin precursor deposited in 2008 (mRNA accession number FJ172359.1), which is likely based on an early large-scale sequencing project (Moroz et al., 2006). The CDS regions of the three sequences are similar: those of XM_013084328.1 and XM_013084330 are identical and are 15 nucleotides longer than that of FJ172359.1. Using the RNA sequence from NCBI (XM_013084328.1), we also found an mRNA sequence (TRINITY_DN1494_c1_g1_i2) in AplysiaTools (see Materials and Methods) (Figure 2B) with the same CDS region as XM_013084328.1 and XM_013084330. Based on this, we plotted gene expression with the sequence XM_013084328.1 (Figure 2A). Note that the mRNA sequence (XM_013084328.1) produces a protein identical to that of TRINITY_DN1494_c1_g1_i2, but its noncoding regions are somewhat different from the AplysiaTools sequence. After using bioinformatics to find the potential apVT gene in Aplysia, it was important to identify the peptides that are generated by the precursor gene and then find receptors that might be responsive to the peptides. Here, we first designed primers (Supplementary Table S1) using the precursor sequence we found, performed PCR on cDNA of the Aplysia CNS, and obtained an mRNA of 504 bp in length (Figure 3A; see Supplementary Figure S2A for the complete gels), which is identical to the CDS of XM_013084328.1. The sequence we cloned is shown in Supplementary Figure S3. We aligned the precursor of apVT with the homologous precursors in several other species. The result is shown in Supplementary Figure S4 (the similarity of each sequence to apVT is provided in the figure legend). The data indicate that the precursor of apVT has high similarity with the homologous sequences in other species. Next, we used NeuroPred (Southey et al., 2006) to predict possible peptides that might be generated from the apVT precursor (Figure 3B). The sequence of apVT is CFIRNCPKG-amide, identical to conopressin G. Similar to vasopressin/oxytocin in other species, apVT is made up of nine amino acids, the first and sixth of which are cysteines, which form a disulfide bond; the C terminus of apVT is amidated.
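To make the peptide-prediction step concrete, the short Python sketch below illustrates, in schematic form, the kind of processing rules a tool such as NeuroPred models: cleavage after basic convertase sites, followed by conversion of a C-terminal glycine into an amide. This is not the authors' pipeline, and the toy proprotein is invented except for the CFIRNCPKG + Gly + Lys-Arg motif, which follows the canonical vasopressin/oxytocin precursor layout (peptide, amide-donor Gly, dibasic cleavage site).

import re

# Schematic illustration only (not the authors' pipeline). The toy proprotein
# (signal peptide already removed) is hypothetical apart from the
# CFIRNCPKG-Gly-Lys-Arg motif at its start.
TOY_PROPROTEIN = "CFIRNCPKGGKRSAVDPELRQALNSMDTRECISCGPRNQ"

def cleave_at_basic_sites(proprotein):
    """Split after dibasic convertase sites (KR, RR, KK, RK), then trim the basic
    residues, mimicking prohormone convertase plus carboxypeptidase activity.
    (Splitting on zero-width lookbehinds requires Python 3.7+.)"""
    fragments = re.split(r"(?<=KR)|(?<=RR)|(?<=KK)|(?<=RK)", proprotein)
    return [frag.rstrip("KR") for frag in fragments if frag.rstrip("KR")]

def finish_c_terminus(fragment):
    """A trailing Gly acts as the amide donor (PAM reaction), leaving the preceding
    residue amidated; otherwise the peptide keeps a free-acid C terminus."""
    return fragment[:-1] + "-NH2" if fragment.endswith("G") else fragment + "-OH"

if __name__ == "__main__":
    for peptide in cleave_at_basic_sites(TOY_PROPROTEIN):
        print(finish_c_terminus(peptide))  # first fragment prints CFIRNCPKG-NH2

Real tools such as NeuroPred use trained cleavage-site models rather than fixed dibasic rules, so the sketch above should be read only as an outline of the underlying logic.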
We also compared apVT with vasopressin/oxytocin in other species (Figure 1A; see Supplementary Table S2 for information on these sequences) and made a frequency plot (Figure 1B) with Weblogo. Given that vasopressin/oxytocin has a consistent number of amino acids and posttranslational modifications in different species, we hypothesized that the amino acid sequence and posttranslational modifications of apVT might have some importance, e.g., in receptor activation (see Figures 7-10).

Identifying putative receptors for Aplysia vasotocin
To identify putative receptors, we searched "Aplysia conopressin receptor" or "Aplysia vasotocin receptor" in NCBI, but this search did not return any sequences. Because of the varied nomenclature for vasopressin/oxytocin-like peptides in different species, we then tried to search "Aplysia vasopressin receptor" in NCBI, which did return one sequence (XM_005111551). Then, we searched "Aplysia isotocin receptor" in NCBI, which also returned one sequence (XM_013088972.2). In addition, we used the Lymnaea stagnalis conopressin receptor (LSU27464) (van Kesteren et al., 1995b) to blast in NCBI, which returned yet another possible sequence (XM_005096258). In total, we obtained three sequences. We used these three sequences to blast in AplysiaTools and found that the third sequence, XM_005096258, appeared to be incomplete compared to a similar sequence (TRINITY_DN90163_c0_g1_i3) in AplysiaTools. Next, we used NCBI Conserved Domain Search and the TMHMM server 2.0 to predict whether these three sequences (XM_005111551, XM_013088972.2, TRINITY_DN90163_c0_g1_i3) are GPCRs. All three sequences are predicted to have 7 transmembrane domains (Figure 4A) and are therefore presumably complete GPCR sequences. In addition to the third sequence, the other two sequences are also present in the AplysiaTools databases and have identical CDS regions. To determine whether the three putative GPCRs might be related to apVT receptors, we blasted each sequence in NCBI against four species in which more protein sequences have been studied, i.e., Caenorhabditis elegans, D. melanogaster, Danio rerio, and Mus musculus (Supplementary Table S3). For the protein with accession number XP_012944426.1 (mRNA: XM_013088972.2), named isotocin receptor in NCBI, a number of sequences named vasopressin receptors or oxytocin receptors with low E-values (<2E-27) came up in these searches in several invertebrate and vertebrate species, suggesting that this protein might be related to apVT receptors. We therefore tentatively named it apVT receptor 1 (apVTR1). For the sequence (TRINITY_DN90163_c0_g1_i3) blasted from AplysiaTools, these searches also returned useful known proteins with low E-values (<7E-13) but low query coverage (<31%); this low query coverage may be due to the long third intracellular loop (ICL3) in the Aplysia sequence (Supplementary Table S3). Therefore, we named it apVT receptor 2 (apVTR2). For the protein XP_005111608.1 (mRNA: XM_005111551.3), the searches returned useful known proteins (such as the vasopressin V1a receptor [M. musculus]) with low E-values (<5E-13) and high query coverage (>65%). However, through later experiments (see Figure 5A), we determined that this sequence is not a receptor of apVT. Therefore, we used Pfam (http://pfam.xfam.org/search#tabview=tab1) to blast the protein and found that it is classified as a Class A GPCR (rhodopsin family).
Thus, we tentatively named this protein (XP_005111608.1) Class-A GPCR 3 (Supplementary Table S3). To provide a better view of the results from Supplementary Table S3, we have included a simplified table as Supplementary Table S4. This table shows that apVTR1 and apVTR2 are more similar to the vasopressin V1b receptor, whereas Class-A GPCR3 is more similar to the vasopressin V1a receptor. To obtain a phylogenetic relationship between the three proteins, we decided to construct a phylogenetic tree with a number of Class A GPCRs from the molluscs Lottia gigantea and Crassostrea gigas, taken from the Supplementary Table S5 of a previous study. Then, we added the three Aplysia sequences (Supplementary Table S3) and re-ran the phylogenetic tree (Figure 4B). The tree showed that apVTR1 and apVTR2 cluster together with the C. gigas vasopressin receptor and the L. gigantea vasopressin receptor, supporting the hypothesis that apVTR1 and apVTR2 might be apVT receptors. For comparison, although Class-A GPCR 3 is close to these sequences, it is not in the same cluster. To determine if these sequences are true receptors for apVT, we chose to pursue the study by cloning the two putative apVT receptors and Class-A GPCR 3. We designed primers (Supplementary Table S1) using the three GPCR sequences and successfully cloned mRNAs for apVTR1 (GenBank accession number OQ586100), apVTR2 (GenBank accession number OQ586101), and Class-A_GPCR3 (GenBank accession number OQ586102) (Figure 4C; see Supplementary Figures S2B-D for the complete gels). The three sequences are shown in Supplementary Figure S5. apVTR1 and apVTR2 have conserved motifs in TM3 (DRY) and TM7 (NPXXY), whereas these motifs in Class-A GPCR 3 are less conserved (TM3: DRH; TM7: NPYIF). We then compared the three putative receptors using BioEdit (Supplementary Figure S6). The similarity between Class-A GPCR 3 and apVTR1 is 28.8%, the similarity between Class-A GPCR 3 and apVTR2 is 23.04%, and the similarity between apVTR1 and apVTR2 is 35.06%. The data indicate that the three sequences have high similarity. To search for other sequences that might be related to the apVT receptors, we used the cloned apVTR sequences to blast both the transcriptome and the genome in the AplysiaTools databases, but we did not find any additional related sequences.

Activation of the putative receptors by apVT
To determine if these three putative GPCRs are receptors of apVT, we cloned apVTR1, apVTR2, and Class-A_GPCR3 into pcDNA3.1 plasmids and expressed them in CHO cells. We then used the IP1 accumulation assay, which detects IP1 generated in the Gαq pathway (see Methods), to determine whether the predicted Aplysia peptide could activate the receptors. In preliminary experiments, we co-transfected the receptor plasmids with a promiscuous Gαq protein (also known as Gα16) to test whether all of the GPCRs can be activated by apVT. We initially screened IP1 responses to apVT at two concentrations (10^-10 M and 10^-5 M) on the three receptors: apVTR1, apVTR2, and Class-A_GPCR3 (Figure 5A). At 10^-10 M, a peptide typically does not activate a receptor, or does so only minimally, so this concentration was used as a control. apVTR1 and apVTR2 responded to apVT (Figure 5A), whereas Class-A_GPCR3 had no response to apVT. We also tested the effects of vasopressin/oxytocin-like peptides from other species (cone snail ConS, Eisenia foetida annetocin, and M. musculus oxytocin and vasopressin) on Class-A GPCR3 (Supplementary Figure S8), and none of the four peptides had any effect on Class-A GPCR3.
Next, for apVTR1 and apVTR2, we transfected only the plasmid for a putative receptor in CHO cells, without the promiscuous Gαq protein, and obtained results (Figure 5B) similar to those obtained when co-transfecting with the Gαq protein. Thus, for the rest of the IP1 accumulation assays, we performed the experiments without co-transfection of the Gαq protein. Taken together, we conclude that apVTR1 and apVTR2 are apVT receptors, whereas Class-A_GPCR3 is not. Furthermore, for apVTR1 and apVTR2, which had a significant response to apVT in the initial screening (Figure 5B), we used multiple concentrations of apVT, ranging from 10^-12 M to 10^-4 M, to determine the dose-response curves of peptide activation on the receptors (Figure 5C). The EC50 values for the two receptors are similar: apVTR1 (EC50 = 70 nM) and apVTR2 (EC50 = 77 nM). Finally, we generated a phylogenetic tree of the two newly identified apVT receptors with vasopressin/oxytocin receptors from selected species in arthropods, molluscs, and mammals (Figure 6; see Supplementary Table S5 for information on these sequences, and Supplementary Figure S7 for multiple alignments of the two apVT receptors with vasopressin/oxytocin receptors from selected species). The tree suggested that apVTR1 and apVTR2 are closely related to conopressin receptors in molluscs, e.g., Lymnaea. Notably, the tree also suggested that the vasopressin-like peptide receptors from annelids and molluscs are more closely related to mammalian receptors than the arthropod receptors are.

The roles of post-translational modifications of apVT on receptor activity
To investigate the effects of the disulfide bond on the activity of apVT, we first synthesized the apVT analog without the disulfide bond: apVT'. However, the results showed that the effects of apVT' on apVTR1 (EC50 = 65 nM) and apVTR2 (EC50 = 70 nM) were not different from those of apVT (Figure 7), which might imply that the disulfide bond is not important for the activity of apVT. However, previous work has shown that vasopressin/oxytocin without the disulfide bond can spontaneously form a disulfide bond under physiological conditions (Roy et al., 2007). Therefore, we synthesized neuropeptide analogs in which cysteines are protected with acetamidomethyl (Acm) to prevent spontaneous formation of the disulfide bond: [Cys(Acm)1]apVT (Acm protects only the first cysteine), [Cys(Acm)6]apVT (Acm protects the second cysteine), and [Cys(Acm)1,6]apVT (Acm protects both cysteines). In addition to the protection of cysteine residues, we also used serines to substitute the cysteines [(Ser1,6)apVT] (Labarrere et al., 2003). The results showed that the effects of the apVT analogs without the disulfide bond on the two receptors were significantly reduced (Figure 7), indicating that the disulfide bond is important for the function of apVT. To investigate the effect of C-terminal amidation on the activity of apVT, neuropeptide analogs without the C-terminal amidation were synthesized: apVT-OH (C terminus without amidation, but with the disulfide bond) and apVT'-OH (C terminus un-amidated and without the disulfide bond) (Figure 8). The EC50 values of apVT-OH on apVTR1 and apVTR2 were 3,500 nM and 1,000 nM, respectively, and the EC50 values of apVT'-OH on apVTR1 and apVTR2 were 2,200 nM and 2,200 nM, respectively (Figure 8E). These results showed that apVT-OH and apVT'-OH had significantly weaker effects on apVTR1 and apVTR2 than apVT, indicating that the C-terminal amidation is important for the function of apVT.
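The EC50 values reported above come from fitting the IP1 dose-response data; the fitting routine is not specified here, so the following Python sketch shows only one standard approach, a four-parameter logistic (Hill) fit on log-transformed concentrations using SciPy. The concentrations and responses in the example are invented for illustration and are not the study's measurements.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_conc, bottom, top, log_ec50, hill):
    """Four-parameter logistic (Hill) curve on log10(concentration)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ec50 - log_conc) * hill))

# Hypothetical peptide concentrations (M) and normalized IP1 responses, for illustration only.
conc = np.array([1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])
resp = np.array([0.02, 0.03, 0.05, 0.12, 0.35, 0.62, 0.88, 0.97, 1.00])

log_conc = np.log10(conc)
p0 = [resp.min(), resp.max(), np.median(log_conc), 1.0]  # rough initial guesses
params, _ = curve_fit(four_pl, log_conc, resp, p0=p0)
bottom, top, log_ec50, hill = params
print(f"EC50 ~ {10 ** log_ec50:.1e} M, Hill slope ~ {hill:.2f}")

Running the same fit separately on each receptor's dose-response data would yield per-receptor EC50 and Hill-slope estimates of the kind compared in the text.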
The roles of single residues of apVT on receptor activity
In addition to the C-terminal amidation and the disulfide bond, vasopressin/oxytocin-like peptides have a relatively consistent number of residues in different species, with some residues completely conserved, whereas other residues are less conserved (Figure 1). To determine whether the vasopressin-like peptides of other species, which differ somewhat in sequence from apVT, have any activity on apVTR1 and apVTR2, we first synthesized vasopressin/oxytocin-like peptides from other species: ConS, annetocin, oxytocin, and vasopressin. We found that ConS, annetocin, and oxytocin have various effects on apVTR1 and apVTR2 (Figure 9), whereas vasopressin had no effect on these two receptors (Supplementary Figure S9B). These results suggest that different residues in the ligands might play a role in receptor activation. However, because of the variations in the number and position of amino acids in the above-mentioned vasopressin/oxytocin-like peptides, a conclusion about the specific effects of individual residues of apVT on receptor activity cannot be drawn. To determine the roles of specific residues of apVT in the activity on the receptors, we used alanine to replace each of the residues except for the cysteines in apVT. Seven types of analogs were synthesized. We determined the dose-response curves of peptide analog activation on the receptors (Figure 10). These results showed that changes of residues in different positions had various effects on receptor activity. Overall, the residues within the disulfide bond tended to have larger effects than residues outside the disulfide bond.

Distribution of apVT precursor and apVTR1 in Aplysia tissues
We illustrated the expression of the apVT precursor and its receptors using an NCBI database with a broad spectrum of RNA-seq data (GSE79231) obtained from adult and developmental stages (Moroz et al., 2006; Gyori et al., 2021). The apVT precursor (XM_013084328.1) is most highly expressed in the CNS. It is also expressed in the digestive organs, esophagus, and hepatopancreas (Supplementary Figure S10A). For the receptors, the database only included information for apVTR1 (XM_013088972.2). apVTR1 appears to be highly expressed in the mantle and digestive organs. It is also present in the CNS (Supplementary Figure S10B). Currently, no data are available for apVTR2 (NCBI accession number XM_005096258) in the database (GSE79231).

Discussion
Using bioinformatics, molecular biology, and a cell-based assay, we have obtained the precursor for Aplysia vasotocin from the Aplysia CNS and provided the first evidence for the presence of two receptors. We also explored the roles of individual residues and PTMs in the activation of the two receptors.

Identification of the Aplysia vasotocin signaling system
Previous work using an antibody against Lys-vasopressin and an antibody against arginine vasotocin has suggested that an Aplysia vasopressin-like peptide is present in the Aplysia CNS (Moore et al., 1981; Martinez-Padron et al., 1992). The precursor we identified provides further evidence supporting this idea. Our initial database searches returned three similar sequences (Figure 2), but the predicted CDS sequence of one of them (NCBI: FJ172359.1) is shorter than the other two (NCBI: XM_013084328.1, and AplysiaTools: TRINITY_DN1494_c1_g1_i2), partly because there are two start codons (ATG) at the N terminus.
We also used https://www.ncbi.nlm.nih.gov/orffinder/ to predict the CDS region of FJ172359.1 and found that the CDS region is actually longer and is consistent with that of XM_013084328.1. Thus, we designed primers for the longer CDS sequence and successfully cloned this precursor from Aplysia CNS tissues (mostly pedal ganglia). Taken together, our study supports that the CDS sequence is the longer version rather than the shorter one. Importantly, the peptide predicted from the precursor matched the Lys-conopressin G as predicted before (Martinez-Padron et al., 1992).

Figure 6. A phylogenetic tree of the two Aplysia vasotocin receptors with verified vasopressin/oxytocin-like peptide receptors in both protostomes and deuterostomes. The tree was generated using MEGA X with 1,000 replicates (see the bioinformatic section in Methods for more details and Supplementary Table S4 for information on the sequences). "*" indicates that the receptor has been studied/verified. "RYamide receptor_Drosophila melanogaster" was used as an outgroup. The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. Numbers at the nodes are bootstrap values as a percentage. Only bootstrap values greater than 50 are shown.

We used the A. californica RNA sequencing database (GSE79231) in NCBI to show that the precursor is present in the CNS, consistent with the previous work (Moore et al., 1981; Martinez-Padron et al., 1992). In addition, the data also showed that the precursor is most highly expressed in the CNS compared to peripheral tissues (Supplementary Figure S10A). We found that there are at least two receptors for apVT. Our initial bioinformatic analysis suggested that there might be up to three receptors, which we cloned. The IP1 accumulation assay experiments demonstrated that two of the three receptors, apVTR1 and apVTR2, are true receptors for apVT, whereas the third one (Class-A GPCR3) is not. Notably, apVTR2 is distinct in that it has a long intracellular loop (636 aa) between the fifth and sixth transmembrane domains, compared to ~70 aa for apVTR1. To our knowledge, this long intracellular loop also appears to be present in another mollusc, Theba pisana (Stewart et al., 2016), among the vasopressin/oxytocin receptors of the species we examined. Based on the phylogenetic relationship (Figure 4B), the third putative receptor, Class-A GPCR3, appears to be somewhat related to apVTR1 and apVTR2, but it is not responsive to apVT. We also tested other Aplysia peptides with a disulfide bond, e.g., urotensin II (Romanova et al., 2012) and AstC, on Class-A GPCR3, but it was not responsive to any of these either (Supplementary Figure S9A). Thus, this receptor might be sensitive to some unknown Aplysia peptides, possibly with a disulfide bond. Regardless, our data indicate that there are at least two receptors for apVT, similar to Lymnaea (Van Kesteren et al., 1995a; van Kesteren et al., 1996). Interestingly, the vasopressin-like peptide receptors from the superphylum Lophotrochozoa, which includes annelids and molluscs, are more closely related to mammalian receptors than the arthropod receptors are (Figure 6). Similar findings have been reported for other Aplysia proteins (Moroz et al., 2006; Jing et al., 2015). Notably, the A.
californica RNA sequencing database (GSE79231) in NCBI showed that apVTR1 is present in the CNS, in addition to some peripheral tissues (Supplementary Figure S10B), although data on apVTR2 were unavailable. In mammals, oxytocin (OT), a closely related neuropeptide, is structurally similar to vasopressin (AVP), with differences in only two amino acid residues, at positions 3 and 8 (Figure 1B). At position 8, oxytocin has the neutral amino acid Leu, whereas vasopressin has the basic amino acid Arg (Turner et al., 1951; Tuppy, 1953). The eighth amino acid of apVT is the basic residue Lys, which is similar in nature to the eighth amino acid of vasopressin (Arg) but differs from that of oxytocin (Leu), which is neutral, implying that apVT may be more similar to vasopressin than to oxytocin. On the other hand, previous functional studies implicated invertebrate oxytocin/vasopressin-like neuropeptides in the regulation of reproduction (Oumi et al., 1996; Koene, 2010; Garrison et al., 2012), suggesting that invertebrate oxytocin/vasopressin-like peptides might be evolutionarily more similar to vertebrate oxytocin, because oxytocin plays significant roles in reproduction (Carter et al., 2020). Moreover, our IP1 accumulation experiments showed that mammalian oxytocin had a weak activating effect on both Aplysia receptors, whereas mammalian vasopressin had no effect on the two receptors (Figure 9 and Supplementary Figure S9B). Thus, these results support that apVT is functionally more similar to mammalian oxytocin than to vasopressin, which seems to contradict the structural similarity. This could perhaps be explained in part by our alanine substitution experiments: when the eighth amino acid of apVT was replaced with the neutral alanine, the effects on receptor activity were relatively weak compared with the alanine substitution of the other residues (Figure 10). Another possible explanation could be that the third residue of apVT is identical to the one in oxytocin, and the alanine substitution of this third residue caused much larger effects.

Actions of PTMs and single residues of apVT on receptor activity
We have determined the roles of the PTMs of apVT in the activity of the receptors (Figure 7). Initially, our experiments showed that apVT without the disulfide bond actually has EC50 values on both receptors similar to those of the peptide with the disulfide bond. However, this experiment did not necessarily show that the disulfide bond is unimportant for receptor activity, because previous work has shown that oxytocin without the disulfide bond can spontaneously form a disulfide bond (Roy et al., 2007). Indeed, when we used acetamidomethyl (Acm) to protect either one or both cysteines to prevent them from forming the disulfide bond, the EC50 values became significantly higher. When the cysteines were substituted by serines, the apVT analog had no obvious effect on apVTR1 and apVTR2. Consistent with this result, previous work also showed that when the disulfide bond of oxytocin was replaced with other types of chemical rings, some of the peptide analogs could still be active on the oxytocin receptor, depending on the bond length and torsion angle (Muttenthaler et al., 2010; Adachi et al., 2017). Taken together, our data support that the disulfide bond is important for receptor activity. We also tested the role of C-terminal amidation in receptor activity by removing it.
The data showed that amidation appears to be important for the activity of both receptors (Figure 8). To our knowledge, this is the first evidence that C-terminal amidation might be important for receptor activity for vasopressin/oxytocin receptors. We expect that, if the C-terminal amidation of mammalian vasopressin or oxytocin were removed, similar effects could be observed on their receptors, although this remains to be tested formally. It is interesting to note that, despite the highly conserved sequences of vasopressin/oxytocin among different species, the activity of vasopressin/oxytocin from three different species (a mollusc, an annelid, and a mammal) varied significantly from that of apVT on both Aplysia receptors (Figure 9, Supplementary Figure 5B). Nevertheless, the data did suggest that, other than the disulfide bond and C-terminal amidation, individual residues might also play some roles in receptor activity. We formally tested the roles of individual residues in receptor activity by performing alanine substitution experiments. Many previous studies have shown that changes in evolutionarily conserved residues in peptide ligands have a significant impact on, and usually are necessary for, receptor activity. For oxytocin/vasopressin, except for the cysteine residues, the residues at positions 5, 7, and 9 are more evolutionarily conserved than the other residues. However, after replacing these three residues with alanine, the EC50 values of the analogs were not necessarily higher than those of analogs in which less conserved residues were replaced (Figure 10), which is somewhat unexpected. On the other hand, we found that changes in residues within the disulfide bond ring seem to have a greater impact on receptor activity than changes in residues outside the ring, similar to results obtained in previous work (Adachi et al., 2017; Kinoshita et al., 2021). Similar results using a bioassay were also obtained for human urotensin II, which has a disulfide bond (Labarrere et al., 2003). Overall, our findings are in general consistent with findings on vasopressin/oxytocin in vertebrates. Notably, the two Aplysia receptors appear to have different sensitivities to the alanine substitutions of apVT. Specifically, we found that the effects of the peptide analogs on the two receptors were significantly different when the second and fourth residues of apVT were substituted with alanine. Peptide analogs with the second and fourth residues substituted could not activate apVTR1 at all, but still had some effects on apVTR2, although the effects were weakened compared with apVT. It would be of interest to investigate why the two receptors have different sensitivities, perhaps by molecular modeling of the ligand and peptide analogs with the two receptors in the future. From a drug development perspective in mammals, the importance of oxytocin/vasopressin, particularly oxytocin, in their actions as drugs has been discussed previously (Carter et al., 2020). For example, oxytocin can function as a stress-coping molecule, an anti-inflammatory, and an antioxidant reagent, with protective effects, especially in the face of adversity or trauma. Oxytocin influences the autonomic nervous system and the immune system. These properties of oxytocin may help explain the benefits of positive social experiences and have drawn attention to this molecule as a possible therapeutic in a host of disorders (Carter et al., 2020).
In addition, the effects of cone snail venom peptides with sequences similar to oxytocin/vasopressin, i.e., conopressins, have been extensively studied (Lewis et al., 2012). These cone snail venom peptides are used by these animals for prey capture and/or defense. Indeed, many cone snail venoms, including conopressins, contain peptides with two or more cysteines and act on membrane proteins, e.g., voltage- or ligand-gated ion channels and GPCRs (Lewis et al., 2012; Koch et al., 2022), or act as hormones (Robinson et al., 2017; Turner et al., 2020). Thus, our present results might help provide some insights into how to design better drugs for medicine (Walter et al., 1971; Vrachnis et al., 2011; Ichinose et al., 2019). In summary, our study provides a relatively complete description of the Aplysia vasopressin signaling system by identifying the precursor and two receptors and exploring the roles of PTMs and individual residues in receptor activity. Future work could investigate how the peptide ligand might interact with the receptors and possibly explain how PTMs and individual residues might contribute to receptor activation through structural modeling. Additionally, the physiological actions of the Aplysia vasotocin signaling system need to be explored more extensively. Preliminary work has shown the mRNA distributions of apVT and apVTR1 in Aplysia tissues (Supplementary Figure S10). Previous work has also shown that neurons with vasopressin-like immunoreactivity are present in the CNS (Martinez-Padron et al., 1992). It would be interesting to determine the neuronal distributions of the two receptors, which could possibly provide clues on what kinds of behavioral networks the Aplysia vasopressin signaling system may act on. Given the diverse roles of vasopressin/oxytocin in mammalian neural functions, these studies in Aplysia in particular, and in molluscs in general, may also provide a better understanding of how the vasopressin signaling system has evolved.

Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.

Ethics statement
There are no current legal requirements for experimental studies of the mollusc Aplysia in China. In this study, by referring to the relevant regulations on the welfare and ethics of experimental animals in China, an experimental protocol that conforms to the principles of animal protection, animal welfare, and ethics was formulated. Accordingly, this study has been granted an exemption from the Animal Ethical and Welfare Committee of Nanjing University.
Neuroleadership as an Asset in Educational Settings: An Overview

Objectives: The goal of this research is to investigate the scientific basis for integrating neuroscience in general, and cognitive neuroscience in particular, into the field of educational leadership. In recent decades, the scientific community has shown great interest in integrating neuroscience into higher education and into the many levels of leadership education and decision-making that are crucial to the range of educational difficulties that educational leaders are called upon to handle. Methods/Analysis: The present effort involves a systematic review of research publications published in the preceding two decades after a keyword search of reputable international databases. This review incorporates papers from the Scopus, PubMed, Elsevier, and PsycINFO databases. The terms neuroleadership and education were used in combination with the four subfields outlined in the research: decision-making and problem-solving abilities, emotional control, cooperation and influence with others, and facilitation of change. Findings: The review's results underscore the vital relevance of neuroscience integration into educational leadership difficulties and highlight ethical concerns regarding its deployment in educational settings. Novelty/Improvement: The novelty of this work is that it conducted a review of the literature on neuroleadership using a combination of executive function parameters, more precisely cognitive flexibility, decision-making, problem-solving, emotional regulation, the mirror neuron system, and behavioral data from studies conducted in educational and administrative settings.

1-1-Integration of Neuroscience in Leadership and Education
Various researchers have been interested in the study of neuroscience and its relationship to leadership for decades as they seek a framework that underpins leaders and organizational performance. The study of neuroscience and its influence on human behavior and reaction systems compels educational leaders to delve deeper into human dynamics and their impact on defining an organization's culture and mission [1]. In addition, educational leaders may use brain research to tap into other people's abilities and grow and train their brains via effective communication. This is characterized by a high degree of social, emotional, and cultural intelligence. Numerous studies have illustrated various types of leadership and specifically referred to transformational leadership as an effective leadership style [2]. According to Bass, transformational leadership entails four critical qualities of the leader: i) charisma; ii) inspiration; iii) spiritual development; and iv) a customized strategy. Various researchers in the fields of educational research and organizational management point to certain changes in the profile and requirements of executives that must be met in order to conduct effective leadership. An effective leader must prioritize values shared by their subordinates, seek their education and growth, and promote a common vision and purpose. Furthermore, an effective educational leader possesses qualities that foster collaboration, problem-solving abilities, and an ability to adapt to change. Moreover, effective educational leaders demonstrate empathy and a genuine interest in people as humans, not simply as employees. According to Mayer et al. (2000) [3], an effective (educational) leader is capable of recognizing emotions in both themselves and others.
To properly manage, one must be able to interpret emotions. Thus, an effective educational leader should be able to acquire the confidence and loyalty of others, thereby increasing the organization's efficiency and effectiveness. Neuroscientists are reinventing leadership in the twenty-first century by giving the neural basis of leadership effectiveness new meaning in its threefold emphasis: leading oneself, leading others, and leading an organization [4,5]. According to neuroleadership practitioners, understanding the neurological underpinnings of leadership success requires grasping the neuroscience of social behavior for engagement, motivation, and peak performance. In addition, neuroleadership abilities are required of 21st-century leaders to develop connections, manage emotions, make choices, and encourage people to accomplish corporate goals in order to meet the challenges of reducing achievement gaps and adapting to changing populations [6,7]. Nonetheless, several studies contradict the claim that incorporating neuroscience into educational leadership is critical and contributes to its increased effectiveness [8,9]. The scientific community appears to be skeptical. According to Lindebaum & Raftopoulou (2017) [8], recorded knowledge of brain patterns has little practical effect on existing social behaviors. Neuroimaging research has shown comparable results, indicating that integrating neuroscience discoveries into educational leadership would not modify management paradigms [9]. On the other hand, neuroscience has posed a challenge to organizational management and given new meaning to organizational and leadership performance [10]. Neuroscientific data can be critical in defining educational leadership in terms of cognition, specifically in terms of the decision-making, problem-solving, emotional regulation, and personal qualities of academic leaders. Over the past decades, the scientific community in the neuroscience and education sectors has emphasized the biological parameters involved in leadership combined with the personality traits of the leader that determine decision-making. This scientific interest has given impetus to the development of neuroleadership studies. Neuroscience, the subject underlying neuroleadership, is a field that examines the interplay of neurons underpinning human behavior and its consequences. Neuroleadership, defined as the ability to link brain regions with leadership actions, attempts to bolster the leadership sector via the use of neuroscience research. Neuroleaders lead their organizations by developing a management plan based on brain research. Neuroscience provides significant growth opportunities for leaders and managers through the study of the biological and neurochemical processes in the brain that underlie, for instance, decision-making and emotional regulation. As a result, academics may develop better-informed theories and leadership styles by delving into the neural underpinnings of behavior. The purpose of this study is to examine the value of integrating neuroscience into education, particularly in the area of decision-making and problem-solving, by stressing executive function characteristics such as cognitive flexibility and emotional regulation, together with the mirror neuron system. These are critical components in the performance of rational educational leadership.
2-1-Neuroleadership in Education - Brain Facts and Theory Implications
The frontal cortex is the apex of a brain area hierarchy that integrates external and internal factors in order to reflect on, arrange chronologically, and execute complicated mental and behavioral responses to environmental obstacles, such as those related to leadership. The frontal cortex is connected to almost all other cortices, subcortical areas, and brain stem nuclei, allowing it to access and control a wide variety of cognitive resources. Additionally, it has been demonstrated that a substructure known as the ventromedial prefrontal cortex collaborates with limbic areas as an emotion regulator to facilitate efficient mental functioning in the pragmatics of social life, including self-regulation of agency and goal-directed activity, social self-awareness, decision-making, and moral behavior [11-13]. Additionally, frontal lobe participation has been shown in investigations of episodic memory (for example, hemispheric encoding and retrieval asymmetry) [13,14]. Semantic memory is associated with the left prefrontal cortex [15]. Thus, encoding is required for both episodic and semantic memory. While semantic memories might begin in a personal context, they can progressively shift from episodic to semantic memory as their sensitivity and identification with individual events diminish. Thus, self-awareness may grow more generic over time, enabling it to be applied to novel circumstances. As a result, it is unsurprising that both depend on frontal lobe functions. Thus, the frontal lobes support a large number of the neurological abilities necessary for leadership [13]. The term "neuroleadership" evolved out of a desire to learn more about how humans may improve overall leadership abilities and effectiveness. It arose when neuroscientists became able to observe live human brains (through functional Magnetic Resonance Imaging (fMRI) scanners, for example) and gain new insights into how the human brain works. Neuroleadership is an area of study that focuses on the neurological underpinnings of leadership and management techniques. It synthesizes results from various disciplines within neuroscience, including social cognitive and affective neuroscience, cognitive neuroscience, integrative neuroscience, neurobiology, and others. It is believed that establishing a science of leadership that incorporates the physiology of the mind and brain will make this knowledge more accessible to leaders interested in learning and improving themselves and others. Additionally, it converts the soft skills associated with professional growth into practical talents by using scientific evidence. Neuroleadership is divided into four subfields of research. These include decision-making and problem-solving, emotional control, collaboration and influence with others, and change facilitation. Each of these sub-dimensions has the potential to integrate a neuroscience perspective with established models that assist us in resolving common problems. Due to the brain's neuroplasticity, the fusion of leadership and neuroscience opens new avenues for leaders to learn how to adapt and alter their leadership methods and behaviors to become more successful practitioners [1,16]. Furthermore, neuroleadership can boost effective educational leadership because effectiveness in educational leadership, and in the field of human resource management, rests on the capacity to regulate one's emotions [17].
A leader's emotional intelligence defines their ability to influence behavior and contribute to an individual's personal growth by mobilizing, motivating, and stimulating their mental talents. An emotionally savvy leader may inspire trust and loyalty in their subordinates and push them to work harder to accomplish a mutually agreed objective. Leaders' emotional intelligence is critical to their capacity to encourage people to accomplish corporate goals. Emotional intelligence and leadership behavior, particularly in the decision-making process, are inextricably linked [18]. Starting from theoretical perspectives on transformational leadership and its link to emotional intelligence, empirical research has established a favorable correlation between these notions. A survey performed by the global Johnson & Johnson Consumer Care and Personal Care Group discovered that executives with the highest job performance have considerably greater emotional intelligence than other executives. Additionally, research evaluating the success characteristics of a broad sample of Latin American, Japanese, and German CEOs is intriguing [19]. Successful and unsuccessful managers were shown to have distinct profiles in three critical areas: seniority and advanced experience, cognitive intelligence, and emotional intelligence. Typically, unsuccessful managers have a much greater level of cognitive competence and professional experience than successful managers, but a poor level of emotional intelligence. On the contrary, effective managers have a much greater degree of emotional intelligence in addition to a sufficient level of cognitive intelligence and job experience. Several studies [17] have examined leaders' capacity to identify others' feelings, how leaders may use emotions to monitor their followers in working groups, and how leaders can utilize emotions to build leadership skills. These qualities and skills are critical in leadership processes because they influence how followers view their leaders. As a result, the relationship between transformative leaders and their followers develops into a highly emotional one. Transformational leaders demonstrate various non-verbal emotional qualities (for instance, their perspective on others and their verbal comfort) that make them fascinating and charismatic leaders. In addition, influential, transformative leaders must have the intuitive ability to empathize with people and offer counsel when necessary [20]. In the realm of education, transformational leadership is instrumental. Education is a critical social framework for the future of society. Schools in the twenty-first century must adapt successfully to a rapidly changing world. The changes occurring in the global environment make the teacher, in the position of leader, vital to any transformation's success [21-23]. Teachers, as change agents, must possess the essential skills and capacities to teach tomorrow's citizens, successfully building their intellectual, emotional, and social capital at all levels of school [24]. Emotionally intelligent teachers appear to be more successful leaders because they can recognize and regulate their own and pupils' emotions, such as anger or irritation, and adapt their conduct to varied situations. Additionally, given the evidence that increased stress and psychological exhaustion contribute to teachers' intention to resign, research into the influence of teachers' emotional intelligence on effective knowledge management appears to be particularly promising.
3-Research Design and Methods
The current study's systematic review intends to combine studies from the following scientific fields: Neuroscience, Cognition, Leadership, Decision-making, and Emotional regulation, in the field of education and the learning process. The publications included span the last decade, more precisely from 2010 to 2021. The research methodology is described in the following figure (Figure 1).

Figure 1. Research Methodology Flowchart

The methodology adopted in the current study is a systematic review. First, research articles were integrated into the study after a detailed search of the Scopus, PubMed, Elsevier, and PsycINFO databases. The keywords used were the terms neuroleadership and education in conjunction with the four subfields described in the research: decision-making and problem-solving, emotional control, collaboration and influence with others, and change facilitation. Additionally, other parameters, such as personality and cognitive flexibility, were integrated to combine neuroleadership research findings for the purpose of sculpting a neuroleader personal profile for transformational leadership in education (Figure 1). All of the studies were further divided into groups based on their experimental models, which may be qualitative or quantitative, as well as on their inclusion criteria, which included the following:
- Year of publication from 2010 to the present (N=96);
- Primary search terms referring to the theoretical or methodological approaches in the context of neuroleadership, cognitive neuroscience, and education, in conjunction with the sub-dimensions of neuroleadership (decision-making, emotional control, collaboration with others, change facilitation) and, in addition, with specific parameters of decision-making, namely cognitive flexibility (a dimension considered crucial for an educational leader) and the idiosyncrasy defined by the notion of mirror neurons, so as to describe a complete theoretical framework for neuroleadership;
- Method of study: qualitative data;
- Study population: employees in educational and management settings.
In this study, the systematic literature review method was adopted. A systematic literature review is a type of scientific investigation in which the studies conducted on a particular subject are scanned in detail and the findings are synthesized after exclusion and inclusion criteria are applied to the collection of related studies. Basic topics, research questions, and goals are the starting point for a systematic literature review. Then, the related publications are defined; the selection, evaluation, and interpretation of the studies are based on a conceptual perspective; the sources of the data used and the way they are analyzed and synthesized are clarified; and finally, the findings, limitations, and inferences are discussed. All literature reviews are carried out in specific stages and steps. After initial judgments had been made prior to this evaluation, the first phase in the data collection procedure was to determine the keywords. The following keywords were employed: Leadership, Education, Cognitive Parameters, and Neuroscience, combined with terms such as neuroleadership and neuroeducation, and also terms related to decision-making (a key parameter in leadership), such as cognitive flexibility and emotional regulation.
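As a purely schematic illustration of how such keyword combinations can be assembled into search queries, the short Python sketch below pairs the core terms with each sub-dimension. The exact query syntax used for each database is not reported here, so the Boolean format shown is an assumption made for the example.

from itertools import product

core_terms = ["neuroleadership", "education"]
sub_dimensions = [
    "decision-making", "problem-solving", "emotional control",
    "collaboration and influence with others", "change facilitation",
    "cognitive flexibility", "emotional regulation",
]

# Build one Boolean query string per (core term, sub-dimension) pair.
queries = [f'"{core}" AND "{sub}"' for core, sub in product(core_terms, sub_dimensions)]

for query in queries:
    print(query)
print(f"{len(queries)} keyword combinations generated")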
Following that, studies indexed in the most prestigious and widely recognized research databases, including Scopus, PubMed, Elsevier, and PsycINFO, were searched. The search returned a total of ninety-six (N=96) records based on keyword combinations. In the second step (Step 2), the title of each article and its abstract, in relation to the predefined keywords, served as the inclusion criteria for screening the surveys. This stage yielded sixty-four (N=64) articles that met the aforementioned inclusion criteria. In step three (Step 3), articles whose full text was inaccessible or hard to find were excluded (N=19). Additionally, articles were excluded due to linguistic constraints (N=5); specifically, related works with content authored in a language other than English were excluded. This screening considered the following criteria: studies should be written in English, published between 2010 and 2020, and have practical consequences for neuroleadership research or conceptual discussions. A total of forty works matched the criteria mentioned above (N=40). Seven (N=7) research papers were excluded in step four (Step 4) due to a lack of research data. These studies made no direct connections between neuroleadership and outcomes and were therefore excluded from the research group, a step denoted in the flowchart as "Data Quality Score." The study's major themes were established on the basis of the papers evaluated. Neuroleadership implications for educational leaders (school leaders and instructors) in terms of decision-making, emotional regulation, and problem-solving were included as themes if they were also supported by other analyzed studies (Figure 2).

4-1-Neuroleadership - Decision-Making and Problem Solving
Neuroleadership is founded on two fundamental concepts: decision-making and issue resolution (problem-solving). Adaptability is a behavioral component associated with effective neuroleadership in education and in the management of other organizations. It is a cognitive dimension that encompasses innovative problem-solving in response to changing conditions and uncertain or unpredictable scenarios. Adaptation requires a high level of self-awareness and the capacity to steer critical decisions in leadership contexts and, more significantly, educational contexts. A leader should recognize changes in the work environment, interpret them to formulate goals, and forecast future occurrences that require a high degree of adaptation or change. Likewise, Paulhus & Martin (1988) [25], as cited by Hannah et al. (2013) [13], observed that a diverse repertoire of information, actions, and tactics is a feature shared by several conceptualizations of a leader's adaptability [26]. The learning that underpins such flexibility in leaders includes both task and personal development. The latter improves one's awareness of oneself, identity, capacities, and other task-related characteristics. Thus, the self acts as a link between the actions of leaders and the underlying processes that underpin adaptive performance. When educational leaders face task demands, identity structures are primed to initiate self-regulation functions at five hierarchical levels: (a) perception, (b) consciousness, (c) goal emergence, and (d) affect systems, while (e) at the top of the hierarchy, these lower structures aggregate to activate a tailored working self-concept.
Thus, leaders who have the greatest access to a breadth of relevant knowledge and skills, as well as to self-regulatory structures, in order to comprehend problems and develop tailored working self-concepts, should be better equipped to engage the self in deliberations and guide the process toward their goals and priorities. According to an emerging body of neuroscience research, the neural pattern observed at rest represents the brain's underlying functional connections and an individual's inherent and constant brain function or aptitude [27]. Indeed, Cacioppo et al. (2003) [28] argued that the brain is not inert during rest but performs various potentially important neuronal activities, including memory consolidation and learning [13]. Fox & Raichle (2007) [19] discovered that patterns of brain activity during rest correlate with patterns of task involvement. Waldman et al. (2011) [29] have suggested that the brain's resting state may indicate genuine leadership skills. At this stage, integrating neuroscience into the explanation of leadership styles can provide relevant concepts for developing a better knowledge of the brain activity of leaders, particularly educational leaders, and can therefore shed light on the fundamentals of effective leadership. The brain basis for a leader's self-complexity may also provide insight into leadership development. A growing field of social neuroscience research is shedding light on the functions and processes of many brain regions. Additionally, growth in this area may be tracked over time using EEG, fMRI, and other techniques. Butler et al. (2016) [30] found that more competent leaders had reduced alpha coherence in the prefrontal cortex, which is largely responsible for executive processes such as self-regulation [13,30]. Thus, it is possible to submit leaders to activities designed to enhance their metacognitive capacity and then evaluate changes in prefrontal brain activity over time as they advance toward a normative index. Undoubtedly, the neurocognitive revolution in educational leadership studies has shifted focus to leaders' thinking processes in order to better understand their actions and effectiveness.

4-2-Neuroleadership - Emotional Regulation
Emotional regulation is another perspective that has been studied in neuroleadership research. Neuroscientific methods can offer a further understanding of emotions and of the unconscious processes that interact with them and drive human behavior. Communication between educational leaders and colleagues, or between instructor and learner, is an essential part of the educational process and is characteristic of learning environments. Educational leadership is highly interested in expertise in communication processes, since this process exists at all levels of management, beginning with communication between leaders and employees and ending with peer communication between colleagues. Communication is the primary tool for sharing practices inside an organization, and executives have long recognized communication as critical to their success. For an extended period, academics and psychologists have asserted that a competent leader's role is to foster a specific "social climate" within the group, which impacts the members' moods and performance. Thus, a transformative leader maintains a healthy balance of production and group member happiness.
Additionally, we know that leaders with a higher level of emotional intelligence can sympathize with employees' emotions and exhibit more emotionally appropriate interactions and behaviors [17]. Empathy, or the capacity to empathize with the feelings of others, refers to both cognitive and emotional processes that enable us to represent other people's mental and affective processes cognitively and to generate an actual reaction consistent with their actions. The literature demonstrates a strong correlation between emotional empathy and the capacity to identify facial expressions; by observing people's facial expressions, we may deduce their emotional states [31]. Educational leadership aims to explain a leader's behavior structure using neuropsychology and cognitive neuroscience insights. These advancements enable the examination of the brain systems behind emotions and communication and the personal profile and behavioral style of an educational leader or instructor. May-Vollmar (2017) [32] conducted a study on emotional intelligence and school leader performance and discovered that emotional intelligence is a significant predictor of an individual's ability to execute successful leadership practices. As well as being able to control their emotions, school leaders must recognize their position in facilitating change. Leaders who demonstrate self-awareness and self-control, for example, can recognize when an encounter is causing them to feel irritated and will be able to manage their emotional response throughout the contact. For instance, a leader may have had extensive training in the art of inspiring others to share a vision. However, the leader's dissatisfaction might obstruct the leader from adequately executing the leadership practice. The leader's capacity to comprehend and recognize emotional triggers, resulting from developing self-awareness of their own emotions and those of others, enables them to boost motivation and engagement levels by strategically considering how to minimize workplace stress and the elicitation of negative emotions. Saxe (2011) [33] conducted a study on school leaders' emotional and social intelligence and discovered that effective leaders cultivate strong relationships: by empathizing with people and being just; by ensuring autonomy and predictability during the transition process; and by elevating people's standing via personalized assistance and partnership. Saxe (2011) [33] also provided the following research findings: the five-brain model indicates that the limbic system is composed of the amygdala, hippocampus, fornix, cingulate cortex, septum, mammillary bodies, and striatum. According to Rock and Cox (2013) [34], when an individual compares his or her position to that of another, the cingulate cortex (dorsal anterior cingulate cortex) is engaged, the same brain area that processes pain. Additionally, as status improves and pleasure is received during social processing, the reward brain circuitry in the striatum is engaged. As a result, Rock and Cox (2013) [34] assert that "Information validating one's status can activate the reward brain circuitry. While a person receives a social benefit, namely when believing that he or she was establishing a positive reputation with others, activity in the striatum [is stimulated]." When individuals feel important to their colleagues and school administrators, their status improves as a result of the reward brain circuitry being triggered.
By identifying changes to elevate the status of individuals who work in a school environment, influential school leaders can reduce the danger circuitry of the brain and boost the reward circuitry. In a school context, opportunities to elevate others' status include open invitations to serve on committees, encouragement to develop supervisory skills, solicitation of additional talents and experience, and customizing professional development to ensure ongoing progress. Educational administrators in the twenty-first century may wish to foster healthy, balanced school cultures. In this situation, skills in teaching and learning, strategic management, and social, emotional, and cultural awareness of both children and adults are required. To promote change and impact individual behaviors and cooperation, school leaders must understand how to nurture and grow everyone's potential, leveraging school leader intelligence to improve clarity and foster autonomy aligned with organizational success. "Leadership success is contingent upon a leader's capacity to solve a complicated social problem, such as the coordination of ideas and behaviors within social groupings".
4-3-Neuroleadership -Mirror Neurons
"Social intelligence is a kind of emotional intelligence that focuses on interpersonal relationships. Daniel Goleman's approach consists of four domains: self-awareness, emotional self-management, empathy, social awareness, and social skills or relationship management. Furthermore, the second two of those components, empathy and social ability, comprise social intelligence". The neuroleadership idea is based on the mirror neurons effect and how great leaders can connect with their followers through rapport-building in order to develop highly focused and effective teams that are just as fascinated with the goal as their leader is. Social neuroscience, a subfield of neuroleadership, argues that a leader must be an expert in his area and possess social intelligence. A true leader demonstrates empathy for his team, instilling a strong belief in the vision. Thus, the neuroleadership idea encompasses both the aspiration to succeed and the united objective. Employees search for clues from supervisors and instinctively emulate their conduct. This demonstrates the critical nature of having a positive perspective and setting an example of desired conduct for leaders. Additionally, leaders may have a better knowledge of others by first gaining a better awareness of themselves. Thus, mirror neurons offer a unique chance for humans, particularly those who exert educational leadership, to put themselves in the shoes of another and experience the world from another (physical and mental) perspective, comprehending the other's intentions, actions, and feelings. This tendency operates in conjunction with the executive parameter referred to as cognitive flexibility (which will be analyzed later). It enables individuals to learn from others and so alter their conduct as necessary (switching). Mirror neurons are the neurobiological mechanism underlying the most humanistic, if not the most compassionate, element of humanity [35]. Their role is to provide the individual with the required tools for social interaction. This supports the development of interpersonal relationships in a professional or, more precisely, educational setting. Furthermore, this is the same human behavior that robotics has been attempting to replicate for a significant period [36].
The emergence of electronic computing systems based on distributed and parallel computing was a major force of innovation in robotics and artificial intelligence. This method is partially inspired by the idea of human intelligence as the management of symbolic representations and the flow of information at hierarchical and sequential levels of processing in a convergent and divergent way [37]. Artificial neural networks, deep learning, and machine learning are all terms that refer to systems that replicate some human cognitive abilities required for leadership exercise, such as logical thought, learning, pattern identification, decision making, and problem-solving. However, the novelty of technological advancements that replicate the function of mirror neurons is that they extend their function in the realm of emotions to what is known as affective computing. All of the above converges towards establishing an innovative digital leadership system that incorporates artificial intelligence and neural networks.
4-4-Neuroleadership and Cognitive Flexibility -A Reborn Leader's Promising Field
The executive functions of the brain are beneficial cognitive parameters for an organizational leader. Cognitive flexibility is one of these cognitive parameters of executive functions. Cognitive flexibility refers to our brain's capacity to adjust our behavior and thought processes to novel, alternative, or unexpected circumstances. Cognitive flexibility is critical for learning and problem-solving in complicated situations. It enables us to choose the approach that must be followed to adapt to the various conditions we experience. By contrast, cognitive rigidity refers to the difficulty of modifying habits and ways of thinking when they are ineffective or unable to accomplish the initial objectives. Cognitive flexibility is a competitive advantage for managers and leaders seeking to maximize their potential. In psychology, the term "flexibility" refers to adapting a skill to new situations unrelated to the ones used in training. Thus, a high degree of flexibility should enable an individual to swiftly move from one processing method or style to another, maximizing the advantages and avoiding the disadvantages of each. Cognitive flexibility has been identified as a significant predictor of performance in incredibly complicated and unstructured situations. It is described broadly as "the capacity to adjust behavior to changing circumstances." For many years, neuroscientists and psychologists believed that a fixed and predominantly static brain constrained adults' capacity for change. On the other hand, it was believed that children's brains were malleable, continuously changing learning machines capable of absorbing knowledge and randomly rewiring themselves. To a large extent, it was believed that the differences between adults and children were due to how human brains developed. Around 100 billion neurons are born in the human brain. As people interact with and interpret their environment, connections between these neurons grow. By the time a child reaches the age of two, they have made around 1000 trillion connections. These connections continue to build throughout childhood, assisting the child's growth and learning. The brain undergoes a period of consolidation and considerable neuronal pruning during adolescence. Numerous neurons in the human brain that are seldom utilized die, leading to a loss of around half of all connections, or 500 trillion.
Adult brains are extraordinarily adaptable, and individuals may rewire their brains to accomplish astonishing achievements with the correct approach, patience, and effort. In 1992, for example, Dr Jeffrey Schwartz taught persons with Obsessive Compulsive Disorder (OCD) how to adjust their perceptions of and responses to their disorder's symptoms (e.g., reclassifying obsessions and compulsions as false alarms or misleading information, attributing these symptoms to hyperactivity in specific brain circuits, devaluing unwanted thoughts as unimportant or unwanted, and redirecting their attention away from their symptoms and toward a specific, desired, and constructive behavior). After ten weeks of practice and hard effort, Schwartz's patients reported significant improvement in their symptoms and a sense of control over their sickness -a remarkable reversal for individuals who had previously felt completely enslaved by their symptoms. Perhaps more astonishing, Schwartz noticed a difference in the way these people's brains physically functioned over the same ten-week period. Their perspectives of OCD and the accompanying brain processes appeared to have evolved as a result of their frequent, persistent, and purposeful application of will and attention. Similarly, Arrowsmith-Young (2012) devised a set of cognitive exercises in the 1970s to aid her in overcoming a crippling mental block. Barbara possessed a near-perfect memory but was unable to comprehend the meaning of symbols. For example, she was unable to comprehend what a clock's hands represented or how to interpret this representation in order to determine the time; she could not distinguish between 'the boy chases the dog' and 'the dog chases the boy'; she read and wrote from right to left; and she frequently swapped letters and numbers. After getting upset with her circumstances and being inspired by the work of neuroscientists Luria and Rosenzweig, she created a series of cognitive exercises (flashcards with clock faces on them). She exercised for eight hours a day, almost to exhaustion [38]. Additional neuroplastic treatments have been developed to address function loss associated with clinical conditions such as stroke, depression, addiction, and learning issues, as well as certain types of blindness and deafness. Similarly, research continues to gather evidence that personality and intellectual characteristics are malleable and may be influenced by our environment and experiences. This research demonstrates that humans are far more adaptable than previously assumed. With enough effort and practice, the behaviors of leaders, educational leaders, and staff could be altered and adapted to new circumstances. Notably, these changes do not occur automatically. Instead, they need repeated acts of will and self-discipline. This is consistent with Angela Duckworth's study, which indicates that grit -or enthusiasm and persistence -rather than talent, aptitude, or competence, is the key to success. Thus, educational leaders appear to be less restricted by capacity, aptitude, and ability. They are more restricted in their effort and attention allocation judgments. In a frenetically busy culture replete with distractions, neuroplasticity provides opportunities for development and achievement.
The findings of this study can be integrated with the notion of neuroscience in education on the one hand, and with the concept of neuroscience and its contribution to management in terms of decision making and emotion and behavior management in an educational setting on the other. Neuro-leadership is a critical factor in clarifying decision-making processes. Increasing research on the learning process in relation to the biological parameters and personality characteristics of educational leaders has resulted in data-driven solutions for educational leadership that improve its effectiveness in facilitating learning, teaching, and behavior management processes. Applying neuroscience research to education offers the potential to improve students' knowledge, interpersonal skills, motivation, and decision-making processes. Further experimental research can be employed to bolster the study's findings in this regard. New discoveries can assist schools in incorporating neuroscience-based teaching and leadership practices. School principals who are aware of these viewpoints can help improve their students' grades. As a result, authorities should place a premium on training educational leaders who grasp behavioral basics. With this information, schools may capitalize on neuroscience's opportunities, find the most successful educational processes, and comprehend biological and environmental aspects that influence the psychological condition of the populations they serve in order to optimize academic operations. Neuroscience in education should not be viewed just as a means of implementing pertinent findings in educational management. By teaching or informing pupils about the way the brain learns, they may adopt more effective learning stages. These studies demonstrate that boosting students', educators', and students' knowledge of how the brain develops and learns can greatly expedite learning or development. The acquisition of neuroscience information by teachers has an effect on their pedagogical understanding and teaching abilities [39,40]. One reason neuroscience research is critical in education is to ascertain how a human being is impacted by his or her non-social, biological aspect. Leadership behaviors are defined by the leaders' own attitudes and the environment in which they operate [41]. This demonstrates that, in terms of educational leadership, an individual may be an excellent leader with appropriate training and support from surroundings. However, the quality of a person's leadership is influenced by his genetic, hormonal, physical, and mental growth and maturity. Because leadership development is interconnected with a person's disposition, genetics, and physical development, neuroscience sheds light on this mostly opaque issue. Neuroscience-based knowledge can help advance both theory and practice of leadership [42]. However, the fact that school decision-making processes rely excessively on data and that all operations are assessed in light of data promotes instrumentation [41]. Additionally, school principals must recognize that individuals possess emotions in addition to hereditary features and that individuals exhibit inconsistency. As a result, judgments should be made using evidence-based procedures while simultaneously considering the human factor. Additionally, neuroscience research makes several recommendations for teacher and leader development programs. 
Leadership and teacher education programs should place a premium on persons who are capable of managing stressful situations, possess a biological understanding of stress, and possess knowledge of skills such as collaboration, acceptance of change, and active research [43,44]. Top-notch training methods may entail supervisors that are emotionally stable, cognizant of the factors that influence people's decision-making processes, and knowledgeable in their domains. According to Pope (2019) [5], influential school leaders foster relationships via empathy, communication, and cooperation. When leaders foster a feeling of community among teachers, they generate a higher level of trust and empathy for their colleagues. Neuroleadership emphasizes the need of understanding both the management and physical components of leadership, as well as the chemical development of the brain and the needs of the people a leader leads [44]. The study's fundamental implications for educational leaders are as follows: Given the neuroscience studies on multitasking, school leaders as neuroleaders are supposed to avoid giving numerous tasks. They efficiently control their emotions, are aware of the biological and sociocultural consequences of sleep, stress, motivation, reward, and threat, and then respond appropriately. Institutional leaders who understand the biological underpinnings of behavior may use this knowledge to transform their schools' projects. Neuroleaders may impart information about how the human brain works to their students, so increasing their awareness. Policymakers should continue to analyze these fundamental results and work to provide a diverse variety of developmental programs in educational settings that are all based on neuroscience findings. For organizational effectiveness, 21st-century leaders must comprehend the underlying architecture of the brain, human attitudes, and behavior. This study aimed to assist instructional leaders in understanding the neurological foundation for employee engagement, motivation, and productivity. Building reliable connections, synchronizing the purpose and vision into clear, concrete steps, and providing clarity and assurance for autonomy and cooperation are all skills that a skilled school leader may use to link individuals across an organization [44]. The current study's systematic review summarized the findings regarding the importance of integrating neuroscience into educational environments and, more specifically, its contribution to critical areas of decision making and emotional management through an analysis of the results of cognitive studies on staff functions and, more specifically, cognitive flexibility. However, the current study has certain drawbacks. One of the limitations of this research is that in the current review, data from neuroimaging studies that have been conducted were not utilized. The integration of neuroimaging studies elucidates the benefits of neuroleadership in an educational context in greater detail and precision. However, it is necessary to highlight some ethical concerns about the integration of neuroscience into education, specifically those surrounding the preparation of brain research projects in an educational setting, which will be discussed in greater detail in a future review paper on the ethics of neuroscience in neuroeducation. 5-Conclusions Neuroleadership is one of the most widely discussed topics in contemporary science. 
The findings of this study enable the author to offer an integrated and improved strategy for work engagement treatments based on recent neuroleadership discoveries. Recognizing the need for more study, we have highlighted other developing research issues and themes proposed by the authors of the examined publications. To sum up, all social interaction and thinking starts in the brain. According to brain experts, social pain is processed in the same region of the brain as physical pain. Thus, interpersonal attunement is a crucial trait of transformative leadership. There are four dimensions of neuroleadership that address transformative leadership abilities, including social and emotional intelligence, communication, and empathy. Teachers think a principal's social and emotional abilities impact leadership effectiveness. For example, a successful school leader may establish connections and influence people by controlling their perceptions of others by being fair and equal. Thus, school leadership development needs profound self-reflection and social awareness to manage negative sentiments in the workplace. Schools need leadership intelligence to make decisions to lead effectively in the 21 st century; educators must have social and emotional intelligence. For high levels of student performance and high-quality instructional practice, school leaders must simultaneously control emotions, cooperate with others, promote change, and encourage and engage employees to adhere to the organization's vision and goal. The brain is a social organ. Successful school leaders must set high expectations in order to reduce the threat of a reaction. To develop a reflective culture for enhancing teaching and learning, successful school leaders must elevate employees who demonstrate talent and strategic management abilities while promoting autonomy. The intelligent school leader works with others to achieve organizational goals to encourage, engage, and inspire them. Lastly, the school leader uses social and emotional intelligence to get people to work together for the success of the whole organization. 5-1-Future Research Neuroscience is one of the most widely discussed subjects in contemporary science. The findings of this study enable the author to offer a comprehensive and improved strategy for work-dedication therapies based on recent neuroscience breakthroughs. In recent years, neuroscience and leadership have formed a debate within organizations and businesses. By investigating the structure and function of the brain, neuroscientific research provides scientifically-proven knowledge that informs the execution of leadership. This partnership's current objective is to address the following questions: How should leadership be exercised to a) raise and sustain the effectiveness of executives, b) increase and maintain productivity, and c) activate and positively engage people in the pursuit of the goals of an educational institution or business? The brain is the origin of all social interaction and thought. Social pain, according to brain specialists, is processed in the same region of the brain as physical pain. Consequently, interpersonal coordination is an essential characteristic of transformational leadership. By adjusting to the changing academic landscape of digital transformation, leaders must build and improve a combination of digital and soft skills, particularly practical communication abilities, in a new context. 
Maintaining coherence between geographically dispersed nodes requires initiative and the ability to adapt to diverse and complicated challenges and functions.
6-1-Author Contributions
E.G., C.H. and H.A. contributed to the design and implementation of the research, to the analysis of the results and to the writing of the manuscript. All authors have read and agreed to the published version of the manuscript.
6-2-Data Availability Statement
The data presented in this study are available on request from the corresponding author.
6-3-Funding
The authors received no financial support for the research, authorship, and/or publication of this article.
2022-06-08T15:14:21.037Z
2022-05-31T00:00:00.000
{ "year": 2022, "sha1": "6e8556aadade07fa14e70647dc6fd5d18650a469", "oa_license": "CCBY", "oa_url": "https://www.ijournalse.org/index.php/ESJ/article/download/792/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e9cee109572b11daff6e0dd16d491700817193fe", "s2fieldsofstudy": [ "Education", "Psychology" ], "extfieldsofstudy": [] }
250580088
pes2o/s2orc
v3-fos-license
Pain reduction and adverse effects of intravenous metoclopramide for acute migraine attack: A systematic review and meta-analysis of randomized-controlled trials BACKGROUND Metoclopramide may be used to treat people suffering from acute migraine. However, no comprehensive investigation on this issue has been recorded. This review will provide more solid evidence for the use of metoclopramide in treating acute migraine. AIM To compare the efficacy of intravenous metoclopramide with other therapies in migraine attack treatment in an emergency department (ED). METHODS We included randomized controlled trials of participants older than 18 years with acute migraine headaches, which included at least one arm that received intravenous (IV) metoclopramide at the ED. A literature search of PubMed, Web of Science, Cochrane Collaboration, and Reference Citation Analysis on December 31, 2021 retrieved other drugs or placebo-controlled studies without language limitation. The risk of bias was assessed using the Cochrane risk of bias tool. The primary endpoint was pain reduction at 60 min or closest to 1 h after treatment, as measured by the pain scale. Secondary endpoints included adverse effects or reactions resulting from metoclopramide or comparisons. RESULTS Fourteen trials with a total of 1661 individuals were eligible for review. The risk of bias ranged from low to intermediate. IV metoclopramide administration was not associated with higher pain reduction at 1 h (Standard mean difference [SMD] = -0.03, 95% confidence interval [CI]: -0.33-0.28, P = 0.87). However, metoclopramide was associated with better pain reduction than placebo (SMD = 1.04, 95%CI: 0.50-1.58, P = 0.0002). In addition, side effects were not significantly different between IV metoclopramide and other drugs or placebo (odds ratio [OR] = 0.76, 95%CI: 0.48-1.19, P = 0.09 and OR = 0.92, 95%CI: 0.31-2.74, P = 0.54, respectively). CONCLUSION Metoclopramide is more effective than placebo in treating migraine in the ED. Despite the observed tendency of decreased side effects, its effectiveness compared to other regimens is poorly understood. More research on this area is needed to treat migraine in acute care settings effectively. INTRODUCTION Migraine, a chronic neurological disease, is one of the most common causes that lead patients to seek medical attention [1]. Apart from regular follow-up at the outpatient department, many patients with migraine suffer from acute migraine attacks requiring an emergency department (ED) visit. There were approximately 1.2 million annual ED visits for acute migraine headaches in the United States [2]. At the same time, persons who suffer from this illness frequently encounter several other accompanying symptoms, such as nausea, vomiting, and sensitivity to light, sound, touch, or scent [3,4]. Unfortunately, its pathogenesis remains complicated and little understood. As a result, if such a problem cannot be effectively treated, it significantly impacts the health-related quality of life of individuals suffering from acute migraine [5,6]. According to the American Headache Society recommendations, several acute migraine treatments include triptans, ergotamine, non-steroidal anti-inflammatory drugs, combination analgesic, and antiemetics [7]. Metoclopramide, an anti-emetic drug acting as a dopamine/serotonin antagonist, was initially used in migraine patients who experienced nauseating symptoms [8]. Later, it was shown to be effective in pain control of acute migraine attacks [9,10]. 
In the most recent recommendation, metoclopramide was considered a "probably effective" drug, even though several studies have shown the efficacy of metoclopramide monotherapy. Studies have found that the efficacy of metoclopramide was inferior neither to sumatriptan nor to opioids [11,12]. Moreover, apart from the efficacy aspect, metoclopramide showed superiority in other aspects, such as fewer severe adverse effects and lower addiction rates, which are considered essential issues in the ED as patients with migraine tend to revisit. It is undeniable that metoclopramide might not be the first choice for clinicians to use in acute migraine, as its efficacy might not be outstanding compared to other drugs. As mentioned previously, the severe side effects of metoclopramide, which are extrapyramidal symptoms such as tardive dyskinesia and akathisia, though rarely reported in short-term use and less worrisome than those of triptans and opioids, should also be of concern, as they might result in an irreversible and distressing experience for the patient [11]. To comprehend the big picture of using metoclopramide in acute care for migraine, this study aimed to compare metoclopramide use with other therapies in migraine attack treatment in an acute care setting. Our study hypothesized that metoclopramide monotherapy should effectively treat acute migraine attacks in an ED.
Protocol
We conducted this systematic review and meta-analysis following the Preferred Reporting Items for Systematic Reviews and Meta-analyses statement guidelines [13]. We prospectively registered our protocol with the International prospective register of systematic reviews (ID: CRD42022322609).
Search strategy and inclusion criteria
We (N.U. and W.W.) independently searched four standard databases, PubMed, Web of Science, Cochrane Collaboration, and Reference Citation Analysis, from their inception until December 31, 2021, without language restriction. The search words "metoclopramide," "Meclopran," "Plasil," "Reglan," "methoxyprocainamide," "migraine," and "headache" were the Medical Subject Headings used, in combination and with different spellings and endings. We also searched websites, organizations, relevant reviews, grey literature, and references to identify additional eligible studies. Additionally, we searched for any unpublished trials registered on the "clinicaltrials.gov" Internet site. The selection criteria were as follows: (1) Randomized controlled trials including adults more than 18 years of age with acute migraine headaches, regardless of their types (i.e., with or without aura); (2) at least one arm having received intravenous (IV) metoclopramide during the ED stay; (3) comparison with at least one agent or placebo; (4) reporting of average pain scale before the administration of each agent; and (5) reporting of at least one of the following: Pain scale at 60 or other minutes, any adverse effects, and rescue medications needed at the ED. We excluded pre-clinical studies, review articles, and studies without a control group (e.g., case reports and case series). The two authors (N.U. and W.W.) independently screened the search results to identify eligible studies. Full-text copies of the retrieved studies were obtained and independently assessed by the two authors against the pre-specified criteria (Figure 1). Any discrepancies were discussed with a third party and concluded by consensus.
Outcomes of interest
The primary endpoint was pain reduction at 60 min or closest to 1 h after treatment administration, as measured by the Visual Analog Scale (VAS) or other pain scales. Secondary endpoints included adverse effects or reactions resulting from metoclopramide or interventions. Adverse effects in this study were defined by any of the following symptoms: Upper gastrointestinal complaints (dyspepsia, heartburn, and bloating), allergic reaction, dizziness, drowsiness, nasal congestion, dry mouth, dystonic reaction, akathisia, and significant blood pressure drop.
Data extraction and assessment of risk of bias
We separately extracted the data from the included articles using a prepared data extraction form. Specifically, we extracted basic characteristics (first author, publication year, study location and setting, and number and age of participants), treatment details and interventions in the study groups, and the outcomes of interest. We sought to contact the corresponding author by email for incomplete or missing data or clarification. The two authors (N.U. and W.W.) independently assessed the risk of study bias using the latest version of the Cochrane Collaboration tool for assessing the trial risk of bias [14]. Any disagreements were handled through discussion with the assistance of a third independent expert.
Data synthesis and statistical analysis
The data were imported into pre-formatted record forms. We calculated individual and pooled estimates as standard mean differences (SMDs) for continuous endpoints, with 95% confidence intervals (CIs). We calculated individual and pooled estimates using odds ratios (ORs) with CIs for dichotomous endpoints. We estimated heterogeneity among the included studies using the I² statistic (the percentage of total variation across studies due to heterogeneity). We applied a fixed-effect model if the heterogeneity was minor (I² ≤ 50%). However, if there was evidence of strong heterogeneity (I² > 50%), a random-effect model was employed instead. Visual assessment of funnel plots and Egger's test were used to assess publication bias caused by small-study effects. For statistical analyses, we applied RevMan version 5.3 (Nordic Cochrane Center, Cochrane Collaboration, 2014, Copenhagen, Denmark) [15]. All tests were two-tailed, and P values < 0.05 were considered statistically significant. Figure 1 demonstrates how the 820 retrieved articles were screened for inclusion in the review and analysis. After excluding duplicated studies, 533 remained. Of those, 470 were excluded following title and abstract screening according to the inclusion and exclusion criteria. The remaining 63 articles were retrieved as full-text copies and reviewed before including 12 studies in the data analysis. In addition, three articles were also identified by citation searching, and two articles met the pre-specified criteria.
Characteristics of included studies
Data extraction and meta-analysis were performed on 14 papers published between 1990 and 2020. The research was carried out in the United States of America (n = 7), Turkey (n = 3), and Iran (n = 4). The mean ages were around 34-40 years. Most studies applied 10 mg of IV metoclopramide, while three administered 20 mg of metoclopramide as interventions. Five trials investigated the efficacy of IV metoclopramide against placebo. Most studies compared more than one arm. All trials reported pain intensity at 0 and other minutes after drug administration, using the VAS or other appropriate methods.
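To make the data-synthesis procedure described above more concrete, the following is a minimal illustrative sketch of inverse-variance pooling of standardised mean differences with the I² ≤ 50% rule for choosing between a fixed-effect and a random-effect (DerSimonian-Laird) model. It is not the authors' actual RevMan analysis, and the per-trial summary values below are hypothetical.

```python
import numpy as np

def smd_and_variance(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference (with Hedges' small-sample correction)
    and its approximate variance for one two-arm trial."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def pool(effects, variances):
    """Inverse-variance pooling; switch to a DerSimonian-Laird random-effect
    model when I^2 exceeds 50%, mirroring the rule described in the text."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    if i2 <= 50:                               # minor heterogeneity
        pooled, var_pooled, model = fixed, 1 / np.sum(w), "fixed"
    else:                                      # substantial heterogeneity
        tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1 / (variances + tau2)
        pooled = np.sum(w_re * effects) / np.sum(w_re)
        var_pooled, model = 1 / np.sum(w_re), "random"
    se = np.sqrt(var_pooled)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2, model

# Hypothetical per-trial summaries: mean pain reduction, SD, and n per arm.
trials = [(4.8, 2.1, 40, 4.9, 2.3, 38),
          (5.5, 2.4, 60, 5.1, 2.2, 61),
          (6.0, 2.0, 35, 5.6, 2.5, 33)]
effects, variances = zip(*(smd_and_variance(*t) for t in trials))
print(pool(effects, variances))
```

The sketch prints the pooled SMD, its 95%CI, the I² value, and which model was used; in practice such calculations are performed by dedicated meta-analysis software such as RevMan, as reported above.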
Table 1 summarizes the baseline demographics and clinical characteristics of the included studies. Deviation from the intended interventions and randomization contributed to a high proportion of concerns over risk of bias. Five out of the fourteen studies had an overall low risk of bias. The risk of bias assessment using the Cochrane risk of bias tool is illustrated in Figures 2 and 3.
Secondary outcome
Eight studies measured adverse effects across IV metoclopramide and comparisons. The pooled effect size was homogeneous both compared with other drugs (I² = 13.3%, P = 0.33; Figure 6) and with placebo (I² = 0%, P = 0.89; Figure 7).
Publication bias
There was no substantial publication bias in the funnel plot for the meta-analysis of the average pain reduction between IV metoclopramide and comparisons (Figure 8). The regression-based Egger's test was performed using a random-effect model with the restricted maximum-likelihood method, and the P value was 0.0814.
DISCUSSION
This meta-analysis investigated the clinical efficacy of IV metoclopramide for treating acute migraine attacks in the ED. This study showed that administration of IV metoclopramide was an effective treatment for migraine headache in adults, compared with placebo. However, the benefit of metoclopramide was not superior to other drugs. Our systematic review also demonstrated that IV metoclopramide tended to have fewer side effects than other interventions. The overall study risk of bias ranged from low to some concerns. Acute migraine is a common neurovascular disorder. It is described as a moderate to severe, predominantly unilateral, and recurrent headache that lasts for several hours to a few days [3,29]. Metoclopramide has been used to treat acute migraine for decades [11]. A few studies over the years have highlighted that metoclopramide has substantial therapeutic effectiveness in treating acute migraine episodes [26,30]. The reason behind the use of metoclopramide could be that it antagonizes the dopamine D2 receptor, which is proposed to be one of the pathogeneses of pain in migraine [11]. A meta-analysis of pooled data illustrated that metoclopramide significantly reduced headache pain, and those patients were less likely to require rescue medicines than the placebo groups [3]. However, the authors chose various inclusion and exclusion criteria for this study, which may contain data on non-migraine headaches, confounding any conclusions to be derived [3]. Furthermore, metoclopramide also had an anti-emetic effect that ameliorates migraine patients' symptoms [11]. Therefore, metoclopramide could be a first-line treatment for acute migraine episodes. Our findings are consistent with the prior research finding that metoclopramide was more effective than placebo in pain reduction [9]. In addition, metoclopramide had a higher benefit than some drugs in our analysis (subcutaneous sumatriptan, intravenous valproate, and oral ibuprofen). These findings fit with the pattern described previously by Colman et al [9]. However, that study selected both ED and headache clinic settings, which differed from ours. Besides, Colman and colleagues analyzed pain using complete relief of headache or significant reduction in headache pain. As a result, discrepancies were likely to occur across that definition. Our study aimed to close this gap. We compared all studies based on the pre- and post-intervention mean pain intensity in each study, which is more feasible to apply and compare.
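The funnel-plot asymmetry check reported above can also be illustrated with the classic weighted least-squares form of Egger's regression test. The sketch below is only illustrative: it differs from the restricted maximum-likelihood variant used in the article, and the per-trial effects and standard errors are hypothetical.

```python
import numpy as np
from scipy import stats

def eggers_test(effects, ses):
    """Classic Egger regression: the standardised effect (effect / SE) is
    regressed on precision (1 / SE); an intercept far from zero suggests
    funnel-plot asymmetry (possible small-study effects)."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    y = effects / ses                  # standardised effects
    x = 1 / ses                        # precision
    X = np.column_stack([np.ones_like(x), x])
    beta, _res, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, k = len(y), 2
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])   # test the intercept
    p = 2 * stats.t.sf(abs(t), df=n - k)
    return beta[0], p

# Hypothetical per-trial SMDs and standard errors.
print(eggers_test([0.2, -0.1, 0.05, 0.3, -0.02],
                  [0.15, 0.20, 0.12, 0.30, 0.18]))
```

A two-sided P value above the usual 0.05 threshold, as in the review's reported result of 0.0814, would be read as no strong statistical evidence of funnel-plot asymmetry.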
However, the side effects of metoclopramide might be serious and irreversible, for example, tardive dyskinesia. It is characterized by the uncontrollable movement of the tongue, face, and extremities. Nonetheless, our findings reveal that the adverse effects resulting from metoclopramide were not different from those of the other drugs. Results obtained by Orr and colleagues [31] are consistent with our findings. Moreover, compared to other suggested therapies, metoclopramide's adverse effect profile is less concerning than that of triptans, which are commonly utilized in ED situations [32,33].
Limitations
This review contains some limitations. First, all included studies were conducted in only three countries (Iran, the United States, and Turkey), which possibly resulted in generalizability bias. Secondly, most trials did not report exclusion criteria in sufficient detail; therefore, the definitions of migraine might vary among studies. In addition, several studies did not report the confirmation of migraine diagnosis, duration of headache, and prior therapies. As a result, we probably combined studies with varying patient characteristics, making it difficult to determine if our findings are generalizable to other contexts. Finally, this meta-analysis included studies conducted at different times (between 1990 and 2020), resulting in the observed heterogeneity.
CONCLUSION
To conclude, metoclopramide was proven to be beneficial for treating migraine in the acute care setting, such as in the ED, compared to placebo. Despite the observed trend toward fewer adverse effects, its efficacy compared to other regimens is poorly understood. More studies on this topic should be conducted to treat migraine in acute care settings more effectively.
Research background
Metoclopramide may be used to treat people suffering from acute migraine. However, no comprehensive investigation on this issue has been recorded. This review will provide more solid evidence for the use of metoclopramide in treating acute migraine.
2022-07-16T15:14:43.044Z
2022-07-20T00:00:00.000
{ "year": 2022, "sha1": "80fde127583b329bf7c453a876f3fffe7e348f29", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.5662/wjm.v12.i4.319", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0d4841d5092639a8f3ed5aa582c80620c49fc1b1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55866557
pes2o/s2orc
v3-fos-license
Extraordinary acts and ordinary pleasures: Rhetorics of inequality in young people’s talk about celebrity In this article, we start from the problem of inequality raised by the existence of a class of celebrities with high levels of wealth and status. We analyse how young people make sense of these inequalities in their talk about celebrity. Specifically, we revisit Michael Billig’s Talking of the Royal Family, and his focus on rhetorical strategies that legitimate inequalities of money and power. As he argued, in comparing their lives with those of the rich and famous, young people are making sense of the massive disparity between the two, often replacing envy or anger with pleasure in being ‘ordinary’. We extend Billig’s work by looking at a larger class of public figures than royalty, including those with a more permeable border between ‘them’ and ‘us’. In so doing, we expand his categories and attend to the relationship between the gender of celebrities and contemporary rhetorics of inequality. Introduction 'A public fascination with a family possessing incalculable wealth should itself signify an interesting academic puzzle' (Billig, 1992: 14). Michael Billig wrote this over two decades ago about the British royal family. This puzzle has particular pertinence in our current 'age of austerity' with high levels of youth unemployment, growing child poverty and cuts to welfare and social security across much of the world. In this article, we revisit Billig's puzzle in relation to a class of people who possess apparently incalculable wealth and significant power: celebrities. We do this by drawing on a large-scale qualitative study of the role of celebrity in young people's aspirations in England. Celebrity culture is often positioned in mainstream media and policy discourse as a potentially 'corrupting' influence on young people's aspirations . However, young people's voices are often absent from discussions about popular culture and aspirations. While there is much sociological research exploring youth inequalities (e.g. Archer et al., 2010;Ball, 2010), talk about poverty and inequality (Shildrick and MacDonald, 2013) and youth cultures (Nayak and Kehily, 2007), existing research has not examined the role of celebrity culture in young people's talk about inequality. Our research set out to explore how young people use celebrity in imagining their own futures. Within this study, we noticed a lack of 'radical' critique of inequality within young people's talk about celebrity, paralleling a wider 'popular acceptance of inequality' (Billig, 1992: 14). This absence sparked our curiosity and was the starting point for our article. Like Billig, we address this by looking at the rhetorical strategies young people use when talking about the wealthy and powerful, interrogating the justifications and judgements they make as they compare their own lives to those of celebrities. This article offers an extension of Billig's work in three ways. First, we attend to how talk of being extraordinary and ordinary works in relation to a broader range of mediated public figures, including people whose lifestyles and status appear more accessible than those of the royal family. Second, we extend and modify the rhetorical strategies Billig identified, looking at the place of disgust, authenticity, risk and vulnerability in the way that people speak about the pleasures of ordinary life. Finally, we explore the gendered dynamics within these strategies. 
The common-sense of inequality Discourse analysts from a range of theoretical and methodological approaches have long been concerned with the construction, legitimation and negotiation of inequality in talk (e.g. Bennett, 2013;Fairclough, 2010;Fallon, 2006;Van Djik, 1994). Such work has drawn attention to the role of structures of talk, discursive strategies and patterns in the discursive construction of different social groups in the reproduction and maintenance of inequality. Sociological and social psychological work has pointed in particular to the function that comparisons play in naturalising inequality. For example, in the context of UK welfare reform, Jensen (2014) argues that binaries such as 'strivers' (those who are seen as hard working) and 'skivers' (those who are seen as not working enough) work to position problems of poverty and economic inequality as individualised issues of 'welfare dependence' and 'irresponsibility'. Such comparisons circulate in the context of neoliberal discourses of meritocracy, which present inequality as resulting from differences in skill and work, rather than structural inequalities (Smart, 2012). Similarly, recent discursive work on contemporary discourses of unemployment in England has examined how participants draw on discourses of 'choice' in poverty and worklessness, exploring how these can be mobilised to construct class differences between the 'deserving' and 'undeserving' poor. Young people themselves are often at the centre of policy rhetoric about the relationship between choice, meritocracy and poverty. Youth are frequently presented in policy discourse as suffering from a 'poverty of aspirations' (Spohrer, 2011). In a discursive analysis of governmental speeches, papers and reports, Spohrer (2011) argues that policy discourse positions 'low aspirations' as a cause of social disadvantage, thus presenting the solution to poverty as 'raising aspirations' at the level of the individual. Young people's talk about their own social location and understanding of inequality thus needs to be understood in the context of wider discourses of poverty, meritocracy and aspiration. This article seeks to make a contribution to the analysis of the role of popular culture in young people's understanding and talk about inequality, and make a theoretical contribution to the rhetorical study of ordinary people's talk about celebrity. In the next section, we will outline Billig's (1992) analysis of talk about royalty, and outline our own contribution to this understanding in the context of the rise of celebrity culture. Rhetorical strategies, coupon-filling and the pleasures of ordinary life In his analysis of ordinary families talking about the royal family, Billig (1992) found patterns of 'common-sense' across different social groups as they evaluated the monarchy. He argued that the families he spoke with did not see the royal family as ruling by divine right; rather they were often positioned as 'down to earth', but simultaneously extraordinary in this ordinariness. Billig posited that this rhetorical commonplace confirms the position of royalty as extraordinary, and positions the speakers as both refusing their 'subservience' to a superior royalty and also reinforcing their position as ordinary/ royal subjects. Consequently, Billig argued that when people make claims about the royal family, they are not just talking about royalty, they are also talking about their own lives, and in doing so, making sense of the differences between them. 
He conceptualised this 'doubledeclaiming' as a form of 'coupon filling', in which participants are able to reconcile their everyday lives, comparing, for example, the wealth and status of royalty with their own freedom to go shopping, clean their own homes and buy fish and chips. He argued that this rhetorical strategy keeps envy at bay, as calculations based on 'common-sense' affirm that ordinary life is preferable and that the world is just: The uncalculated calculations of common-sense's double-declaiming can be used to compare 'their' misfortunes with 'our' gains … Speakers are to be heard depicting the pleasures of 'ordinary life' in general, and affirming, in a personal way, the credits of their own particular lives. (Billig, 1992: 119) Billig's study was conducted at a time when royal celebrity was on the rise, in which the everyday lives of royals were regularly reported. He was concerned with the continuing interest in royal lives and persistence of the British monarchy, arguing that apparently trivial talk about the royal family, including mockery of royal scandals, served to legitimate the continuation and wealth of the royals. While the royal family are increasingly mediated in ways similar to non-royal celebrities -for example, through regular media reporting of their private lives -royalty remains distinct from most types of celebrity because it is an institution of (almost exclusively) inherited privilege. In contrast, celebrity increasingly presents itself as an open and 'democratised' space offering forms of status and power to 'ordinary people' -particularly since the advent of Reality Television (Couldry, 2004). Celebrity and stardom have always been suffused by notions of meritocracy and individual success alongside an emphasis on the celebrity's typicality and ordinariness (Dyer, 2003). Contemporary celebrity thus generates a slightly different 'academic puzzle'. Thus we are asking -How is inequality constructed in young people's talk about celebrity? Do young people use similar rhetorical strategies to those of Billig's families in 1992? What can this tell us about young people's understanding of inequality? In doing so, we develop Billig's framework to explore the particularly affective and gendered nature of young people's celebrity talk. Method The article draws on a qualitative study of the role of celebrity in young people's classed and gendered aspirations, funded by the Economic and Social Research Council in the United Kingdom (http://www.celebyouth.org). The wider study combines group and individual interviews with 148 young people (aged 14-17) in six schools across England, with textual case studies of 12 celebrities. In this article, we use a close reading of the data from the group interviews with young people to unpick the rhetorical strategies used to make sense of inequality. The group interviews offer an ideal site for such an investigation, in which it is possible to observe people arguing and formulating thoughts in the 'cut-and-thrust of discussion' (Billig, 1992: 15). We specifically designed the group interviews with this in mind, encouraging schools to select a diversity of participants across gender, class, ethnicity and attainment, and allowing young people to lead the discussion on themes including their liked and disliked celebrities, routes through which people acquire fame, what defines a celebrity and their relationships to celebrity lifestyles. 
These group interviews lasted between 40 and 60 minutes, and were audio recorded, transcribed, and thematically coded for analysis using the computer package NVivo. In the next section, we elaborate on the process of data analysis.
Making discursive sense of celebrity
We draw on the tradition of discursive psychology, initiated in Potter and Wetherell's (1987) now-classic text. They offered an approach that challenged the notion of fixed, underlying attitudes, arguing that evaluations made in talk are context-specific. They contended that examining the detail and organisation of talk enables researchers to see the functions that particular evaluations perform. We also use developments of this work in relation to discursive-affective practices (Wetherell, 2012) and rhetorical strategies (Billig, 1996; Potter, 1996). These share an attention to both the fine-grain and patterning of talk, and its relationship to wider social relations of power and inequality. This enables a focus on both the detail of participants' utterances and broad 'forms of intelligibility' (Wetherell, 1998: 388; see also Gill, 2009) across our data. Following this, we analyse young people's talk about celebrity culture not as transparent reflections of attitudes or emotions, but rather as part of contextual and collective meaning-making practices. We are interested in the patterns and tensions in celebrity talk that produce particular categories of social subjects and mark out what is seen as 'legitimate' in social life (Hall, 2002). Thus, we examine the interactional, argumentative nature of talk about celebrities, asking 'Why this utterance here?' (Wetherell, 1998: 388). When young people talk about celebrities' lives, they negotiate the 'common-sense' of social life, arguing, making judgements, taking up and justifying positions on particular issues (Billig, 1996). By taking a detailed look at what participants say and do not say, we can see what is remarkable or ordinary, what is taken for granted, what needs further explanation or justification and where there are sites of tension between different versions of 'common-sense'.
Analysis and discussion
The young people who took part in the study did not uncritically accept celebrity wealth and status. Talk about celebrity wealth was often accompanied by explicit or implicit criticisms and justifications, as participants grappled with the inequality between their own lives and those of celebrities. As shown in Table 1, we mapped five rhetorical strategies young people employed when making sense of the wealth and power of celebrities, finding similarities to and differences from those that appeared in the talk of Billig's (1992) ordinary families. Participants positioned some celebrities as extraordinary, with their wealth and status justified by their difference to ordinary people and extraordinary characteristics, talents or behaviours. Conversely, participants presented some celebrities as ordinary in extraordinary circumstances, with their very ordinariness a remarkable fact. Like Billig's families, the young people also foregrounded the pleasures of ordinary life. They achieved this by positioning their own lives as preferable to those of certain celebrities, and in doing so, avoiding becoming disgusting and inauthentic and avoiding risk and vulnerability. The article will explore each of these strategies in turn with the exception of 'celebrities cannot do ordinary things in ordinary ways', which replicated the patterning in Billig's data.
The other four are reshaped when they move from royalty to celebrity. Furthermore, in this move, we argue that the gender of the celebrity takes on a particular significance. While Billig (1992) did not attend to the 'targets' of each strategy because their royal status rendered gender less important, for celebrity, we suggest that it is a crucial part of the puzzle. As Billig (1992) writes of attitudes to the royal family, 'two sets of common-places -"they" should behave better than "us"/"they" are only human like "us" -pose a continual dilemma which frames each royal action and each royal personage' (p. 96). These also framed each celebrity action and each celebrity personage, though there is a sense that, while the royals are extraordinary through being, celebrities become extraordinary through doing. In this section, we focus on the ways that young people constructed celebrities as better than us. This happened through the rendering of some, predominantly male, celebrities' philanthropic acts, achievements and hard work as extraordinary. We explore this strategy by focusing on philanthropy since this was the most common domain of these extraordinary acts and was central to how participants presented celebrities as deserving of their wealth and status. We start with an analysis of talk about the businessman Bill Gates, whose extreme wealth is part of his extraordinariness as a celebrity. In three of the four group interviews in which his wealth was discussed, it was mentioned alongside talk of his philanthropy. We can see this conjunction in the extract below:
Heather: Okay well if you were going to design a perfect celebrity …
Homer: Oh what's his name, the guy who made the computers, [some laughter] Bill Gates.…
Ryan: Oh he's like a beast, he's got loads of money.
Jack: He's good as well.
Homer: He gives it away for free. (London)
Ryan's use of the word 'beast' to describe Gates distances him from 'us' by conjuring a large, powerful, wild and mythical creature. The term 'beast' also carries negative animalistic meanings, which is perhaps why Jack directly follows it up with 'he's good as well' and Homer makes a reference to him 'giv[ing] it [money] away for free'. Other interviews contain references to Gates' extraordinary philanthropy. For example, Bob talked about how 'he was like helping eradicate polio from like the world' (South West, 14-15) and Tim about how 'every year he gives like a 100 million to charity' (London, 14-15). Bob, Tim, Ryan, Homer and Jack use extreme-case formulations to strengthen their claims (Pomerantz, 1986; Potter, 1996). It is hard in this conversational moment for anyone to argue with the extraordinariness of 'eradicating polio from the world' or giving away the unimaginable amount of '100 million'. Thus, Gates appears as unlike ordinary people, and so his position as the world's richest man (Forbes, 2014) can be calculated and reconciled. Footballers' salaries were subject to particular scrutiny within the young people's talk, positioned by some as unfair. Here, extraordinary charitable giving and hard work could be used as a counter-claim. This is evident in Bruno's talk about footballer David Beckham, who like Gates, above, was identified as an 'ideal celebrity':
Heather: David Beckham, did you say he was the ideal celebrity?
Bruno: Yeah.… You can't argue with him.… like he wasn't born with talent, there are so many people that are born with something, he had to work for it, like every day, day in, day out.
And he would be playing at the time when he would get like £10, and now, £10 a week, and now people get 100 k a week. And since then, even now he's giving, three like three, three million pounds to charity for five months, and he's playing for a Paris club, but he's not taking the money, he's going to give it to charity straight away. I don't think he's a wrong person, something you can tell like, he's done nothing wrong. (London,(16)(17) Beckham is presented as having worked tirelessly 'day in, day out' in order to progress from £10 a week to £100 k a week. This emphasis on hard work was reflected in participants' evaluations of different routes into celebrity, in which particular celebrities were judged more 'hardworking' and thus deserving of wealth and fame . The use of an extraordinary 10,000-fold salary multiplier emphasises Beckham's trajectory from ordinariness to extraordinariness, to a place in which it is possible for him to hand his entire pay packet 'to charity straight away'. Beckham is implicitly contrasted to those who are 'born with talent' or money -he has earned his wealth (and skill) and given back thus legitimating his wealth. This is confirmed when Bruno sums up, 'he's done nothing wrong', staking a position in a point of contestation (the size of footballers' salaries). Such evaluations can be understood in the context of the regular reporting of celebrity philanthropy. Celebrity-fronted events such as the Live 8 and Chime for Change concerts present the solution to complex social problems through the lens of the 'extraordinary' acts of wealthy 'heroic' individuals. In the data presented, we can see how the role of inequality in the persistence of such problems disappears in a rhetorical flourish. 'On the red carpet, and she was ordering McDonald's': Celebrities are ordinary within extraordinary circumstances Following Billig's findings, a pattern across young people's talk was an emphasis on the ways in which celebrities maintain ordinariness within the extraordinary circumstances of extreme public visibility and renown. This was done through emphasising mundane behaviours or 'embarrassing' actions that indicated the celebrity's ordinariness via 'resistance' to the norms and pressures of the celebrity industry, and evidence that they had remained 'true to themselves'. As in the last section, we do not have space to explore all these instances, and instead focus on analysing a few in detail in order to unpick the patterns of sense-making that characterise this strategy. We argue that this rhetorical strategy works to legitimate inequality through the assertion of authenticity. Specifically, because celebrities can maintain their ordinariness (and be 'like us') in the extraordinary circumstances of fame, they deserve their wealth and status. Ordinariness was 'evidenced' through different acts for men and women. For women, this oriented around their bodies with several female celebrities celebrated for sustaining their 'ordinariness' by refusing to submit to media pressures placed on women in the public spotlight to look a certain way. In the extract below, Ginny discusses musicians Jessie J and Miley Cyrus shaving their heads for charity: Ginny: Cutting off their hair for charity, which I think is quite like-like personally I am obsessed with my hair … So I feel like, even though they are these big celebrities that are meant to care about their appearance they did-Well Jessie J's going to do something so like outrageous, so like kind hearted. 
(London,(14)(15) Here Ginny marks out head shaving as a sacrifice. In her account, it is the 'outrageousness' of the hair cut that marks it out as an act of generosity. Ginny produces this as 'abnormal' through a contrast (Potter, 1996) with her own 'obsession' with her hair, a perhaps more normalised position of bodily femininity. However, what renders this act significant and worthy of comment is that it is carried out by 'big celebrities' in the context of all-pervasive media scrutiny that demands that female celebrities 'care about their appearance' and vilifies those who transgress certain ideals. In such a context, positioning Jessie J as not obsessed with her hair marks her out as extraordinary. The archetypal celebrity who was remarked on for being ordinary in extraordinary circumstances was actress Jennifer Lawrence. This was manifest through talk about two key acts: her televised fall at the 2013 Oscars ceremony, and her stated 'love of food' and refusal to diet (Peterson, 2014): Strawberry: She's just normal. Like she was on the red carpet, and she was ordering McDonald's, and I thought that was cool, coz like all the rest of them are like starving themselves, and she was giving out a positive message. And then she tripped, and just laughed at herself. (London,(16)(17) The construction of Lawrence as ordinary was positioned as central to her popularity and value. Lawrence is rendered 'just normal' despite the fact that the very need to remark on her 'ordering McDonalds' (the McDonald's burger being particularly symbolic of her 'everydayness', akin to the 'fish and chips' in Billig's study), falling over and laughing at herself marks these events as very much out of the ordinary. Jennifer Lawrence's ordinariness (and thus extraordinariness in celebrity terms) is constructed through contrast with a more extreme description of 'all the rest' of the female celebrities, who are 'starving themselves'. As Billig (1992) writes of the royal family, their 'ordinariness … is a popular object of desire', and 'this desire is framed by assumptions of the extraordinariness of this ordinariness' (p. 72). While for women, bodies were a key terrain upon which ordinariness was read, for male celebrities, evaluations of 'ordinariness within extraordinary circumstances' centred around personality and actions. For example, UK royal Prince Harry was consistently positioned as ordinary through reference to both his involvement in the military and 'having a laugh'. The extract below is typical of the group interview talk about Harry, including a reference to him 'fighting for the country' and the infamous Las Vegas holiday during which he was photographed playing strip billiards with his male friends: Joe: Even if he is in the royal family, he's just a normal guy. He fights for our country. He's just trying to be a normal bloke, he wants to go out and have a good laugh. [Paris: Yeah] That's why he goes on like holiday, and he does stuff like that. He just wants to be normal. Paris: Yeah. He's young. Like, I'd say that's like what every young person would be doing. Just because you're famous, or you're a celeb, or you're part of the royal family, doesn't mean you can't have a life, like, or have to act-Joe: Even if you are a role model to millions, it shouldn't affect you having a good time, and-Paris: He's still got to have a life, can't live a life of misery Britney: Coz people, people like him for what he is, and not for what he like pretends to be. 
(South West,(16)(17) In this extract, we see how working in the military and going on a 'boys' holiday' appear as forms of 'escape' from the stifling confines (or 'misery') of royalty and the paparazzi, in which he can 'be normal' and 'have a laugh', an act long-associated with laddish masculinity (Willis, 1977). Harry's behaviour is positioned as both abnormal for royalty, yet justified through Paris' claim that it is 'what every young person would be doing'. Harry is thus able to claim an altogether more ordinary kind of extraordinariness through these highly mediated acts. Joe and Paris present Harry as just like them -ordinary young people, who just want to have a good time. With both Harry and Lawrence we find not just sympathy for them having to live up to certain images, but a celebration of their capacity to resist these pressures. In imagining them in this way, participants are reversing the conditions of the inequality and so affirming it. As Billig (1992) showed, ordinariness also featured within a third rhetorical strategy: that asserting the pleasures of ordinary life. In our final two sections, we outline two substrategies within this that significantly extend his work. We did find examples of participants mobilising this strategy in similar ways to those highlighted by Billig in relation to the royals -for example, through discussion of press intrusion and the impossibilities of doing 'ordinary' things. However, in relation to celebrity, the desirability of ordinary non-celebrity lives was manifest in two further ways: through positioning celebrity lives as either disgusting and inauthentic or as risky and vulnerable. As we will demonstrate, these have different affective registers and functions. While the first set are characterised by contempt, blame and a desire for levelling via humiliation, the second are characterised by empathy towards celebrities. Yet what they have in common is that they render celebrity lives as spaces of risk and danger to the notion of 'authentic' selfhood, thus positioning young people's own lives as preferable. 'Some of their body parts just aren't real': Celebrities are disgusting and inauthentic In the next rhetorical strategy, celebrity life is deemed undesirable because it is associated with inauthenticity and positioned as an object of disgust. Here, participants expressed contempt for celebrities who they deemed 'fake' and 'arrogant'. These were contrasted with celebrities who were seen as more 'ordinary' or 'down to earth'. Becoming a celebrity was presented therefore as carrying the risk of changing, both in terms of personality and bodily modifications. This strategy exemplifies what Billig (1992) refers to as 'the paradox of desire'. While 'we' might desire celebrity privileges, and certainly our participants did express such desires, if 'our' wish were granted 'we' would not be 'us': 'we' would be 'them'. 'We' would be privileged but risk betraying our-selves in the process. In an age when authenticity is deemed as a central marker of successful personhood, such a risk is great indeed . This strategy was highly gendered, focusing almost exclusively on those female celebrities whose fame is associated with their bodies, but also including a few 'feminised' young male celebrities such as musicians Justin Bieber and One Direction (see Harvey et al., 2013). 
This strategy was most evident in young people's talk about 'extreme' celebrity body modifications and cosmetic surgery, which universally provoked disgust and contempt, and were presented as signs of fakeness: Kim: So Katie Price? OrangeJuice: She's so fake. Kim: Why don't-Eleanor-Marie: She's too fake, and she's so up herself. Joanna: It's the plastic surgery and stuff that make me dislike her (London,(14)(15) Kim: You use the word fake. What do you mean by that? Kirsty: Not their-selves.… Literally not themselves, like some of their body parts just aren't real. [laughter] (Manchester,(14)(15) Lewis J: How are you going to know if she's a good person? She's [Nicki Minaj] hiding behind an image that makes her look like a good person, then she must be a bad person. (London,(16)(17) The extracts above highlight how 'fakeness', through plastic surgery, was evaluated as 'bad' -if a celebrity is not 'themselves' then 'they' must be 'a bad person'. Importantly, the combination of being 'fake' and 'up yourself' appeared in contrast to celebrities who were seen as 'authentic' -those who had not let fame change who they 'really' were. As in the comment above, one of the main targets of these accusations was musician and American Idol judge Nicki Minaj: Mike [female]: Another reason I hate her, one of my friends, she's like, oh my god she annoys me, but um, she's like, my best friend, she has like a poster of Nicki Minaj in her room. And she's like and I go 'Why the hell do you have that in your room, it's disgusting?' And she say like, 'Oh it's because she's beautiful'. I'm like, 'No … She just really isn't'. Teresa: She wants to look like her. That's like really bad because then, if she wants to look like her, is she going to get butt implants? … Mike: Yes, she influences like every single girl. Yes, to be beautiful you have to look like this, it's like 'No she really isn't'. Do you find her attractive male species Mike locates Minaj as powerful, 'she like influences every single girl', positioning herself as outside of this influence, and rejecting the expectation that 'to be beautiful you have to look like this'. The group collectively constructs Minaj as 'disgusting' and 'fake' through her 'rubbery, plasticky' body with its 'butt implants', repeatedly denying the possibility of her being beautiful, with Ryan allowing this only if he 'knew that she wasn't fake'. The talk is infused by violence and graphic imagery, in this extract exemplified by Mike wondering if 'you can bounce her … Roll her down a hill'. The laughter accompanying this plays an important role in the group's meaning-making, acting 'as a means of preserving everyday social order' (Billig, 2005: 235). Billig sees public mockery of members of the royal family doing embarrassing things as a way to claim the desirability of ordinary life. We suggest these instances represent something more than mockery. The undesirability of Minaj's celebrity is further claimed through taking delight in (imagining) humiliating and even destroying her. In this and other examples, it is Schadenfreude rather than mere mockery that is operating, where levelling comes through expressing contempt for celebrities' inauthenticity or arrogance and through taking cruel pleasures in their failures: Laura: So you'd rather meet someone that you disliked? Shane: Probably. Rick: Just so I can abuse them. 
Shane: Yeah … You know, like it's like when people say erm [pause] you know, it's like they've got such a big ego, you just want to take them down a couple of pegs. That's why … Just because they think so highly of themselves, and they haven't really done anything so. (London,(16)(17) Cross and Littler (2010) locate Schadenfreude as a 'trans-individual affective process of resentment' and response to the contemporary political conjecture of neoliberal capitalism, where individuals have a 'desire for equality but [are] unable to think of anything other than levelling through humiliation' (p. 397). In these extracts, certain celebrities are presented as seeing themselves as superior to everyone else. The 'abuse' of celebrities is thus justified as a way of challenging this hierarchy, which is presented as unfair as 'they haven't really done anything'. We argue that deeming celebrity lives inauthentic, unfortunate, disgusting and failed permits both resistance to the wealth and status of celebrities, and levelling which neutralises the inequalities between 'us' and 'them'. However, such 'abuse' was most often directed at working class and black and minority ethnic celebrity women, such as Katie Price and Nicki Minaj. This abuse is also found in online spaces, which enable more public, ritualised and archived expressions of this (Allen, 2013;Jane, 2014). We contend that while generated from conditions characterised by a growing disparity between the haves and the have nots, collective expression of celebrity Schadenfreude 'overwhelmingly works to express irritation at inequalities but not to change the wider rules of the current social system' (Cross and Littler, 2010: 395). 'It must be really hard': Celebrity lifestyles are risky and vulnerable In this final section, we look at how participants asserted the desirability of their ordinary, non-celebrity lives through positioning celebrity as risky and vulnerable, where fame and wealth can lead to terrible things including losing control of yourself. This strategy was manifest in discussions of celebrity addictions to drugs and alcohol, attracting excessive fans and 'haters', and giving in to 'peer pressure'. Rather than the contempt and hate that suffused the talk in the last section, this celebrity talk was characterised by empathy and sympathy. Gender and age intersected, with young female celebrities being mobilised as cautionary tales. Cautionary tales appear often in the data, with their most prominent subject being film actor and child star Lindsay Lohan, a 'train wreck child star', as exemplified in this extract from an all-female group interview: Georgia: And even when I think about, yes, I'm going to say Herbie Fully Loaded, because that's an amazing film.… When like you see her now you just think, why would you do that to yourself? … Female: They grow up like, they're already pressured from when they're kids because they're famous and then as they grow up they just give up caring any more. Daniella: Yes. I do I know like more celebrities that have had drug problems than like people that I've heard of that aren't famous. And I think like part of the fame, you will be faced with drugs, stuff like that. Female: Because it's just like a ready source, like everyone will try and give it to you. And also like I suppose like being famous you get a lot of stress, so that that is to like calm you down and stuff. 
(South West,(14)(15) These participants mobilise a voice of authority within their talk about what it is (rather than might be) like to be a child star, for example, in the following statements: 'part of the fame, you will be faced with drugs' and 'you get a lot of stress, so that is to like calm you down'. Daniella's comment about celebrities' drug problems makes a claim about the greater vulnerability of celebrity lives compared to those of ordinary people. As Projansky (2014) identifies, age and gender are crucial to the construction of what she calls 'crash-and-burn' girls, who each start as a 'can-do girl who has it all, but who -through weakness and/or the inability to live with the pressure of celebrity during the process of growing up -makes a mistake and therefore faces a spectacular descent into at risk status' (p. 4). Other 'train wreck' celebrities -all female -who generated empathy included Britney Spears, Demi Lovato, Whitney Houston and Amy Winehouse, discussed below: Naomi: One person that I like, really think is an amazing singer, is Amy Winehouse. It was really sad that she died, but I think that people think that, celebrities should not take drugs and all of this. And obviously there's one thing that you shouldn't do, you should just not get into that sort of crowd where you take drugs. But I think it must be really hard for, for people like that, because they're influenced and even though they do have fans they are still humans, they still make mistakes. Whereas people give them, they say it's worse and stuff, when really everyone, who's, anyone might go through that. (Manchester,(14)(15) Like the girls discussing Lohan, Naomi speaks with authority. While she states that she does not condone drug-taking ('there's one thing that you shouldn't do'), this does not lead to a negative judgement of Winehouse. Instead, Naomi asserts the difficulty of resisting peer pressure. In constructing Winehouse as 'still human' and asserting that 'anyone might go through that', Naomi diffuses the differences between her and Winehouse, 'us' and 'them', simultaneously constituting celebrity status as something to be avoided. To be human is positioned as a vulnerable, fallible existence. The expectation for celebrities not to make mistakes appears unreasonable -this would make them alien, inhuman. Here, as with the extracts earlier, we see how, through emphasising the negative things that celebrity brings, its desirability is challenged or even refuted: fame may bring wealth but it is also 'hard... for people'. Conclusion This article has sought to examine the rhetorical strategies used by young people as they make sense of the differences between their experiences and those of celebrities. Exploring the patterning of such talk provides a lens through which to explore some of the 'common-sense' understandings of wealth, status and inequality in contemporary young people's lives. It is unsurprising, perhaps, that in 'austerity Britain', young people do not always uncritically accept the power and affluence of celebrities. Our data show that talk about wealth and status was accompanied by justifications, with the hierarchies of 'them' and 'us' needing to be explained by celebrities' extraordinariness. What we think is particularly interesting is how such differences were often framed within neoliberal discourses of meritocracy and individual extraordinariness, for example, in the discussion of David Beckham working 'day in, day out', and Bill Gates' wealth as a businessman. 
These narratives of meritocracy were not confined to talk about celebrities, with young people's discussions of their own aspirations, imagined futures and potential barriers to achieving their dreams also infused with the language of individualism, hard work and triumph over adversity (for a more detailed examination of this, see Mendick et al., 2015). The claims, criticisms and justifications that young people make about celebrities also operate discursively as evaluations of themselves and the world in which they are growing up. As young people fill in the 'coupons' of their own lives, in comparison with those of celebrities, we contend that rhetorical strategies that emphasise and value ordinariness can offer a sense of agency in the face of social inequality. Going to McDonald's, not having to deal with the paparazzi and living an ordinary life can be positioned as choices that are made to avoid the risks, inauthenticity and vulnerabilities that the wealth and status of celebrity bring: As the columns of credits and debits are summed, so the accounts are settled to arrive at the conclusion that there is a 'just-world', at least so far as [celebrities and the rest of us] are concerned. (Billig, 1992: 124) At the same time, the anger, ridicule and violent imagery around some celebrities' wealth and status highlight the visceral way in which inequality can be discursively managed through a classed, gendered and racialised rhetoric, in which some bodies, some forms of 'success' and some careers are positioned as authentic, while others are denigrated as 'disgusting'. The role of humiliation and Schadenfreude in these moments of anger echoes the discourses of disgust and revulsion often levelled at marginalised groups (Tyler, 2008, 2013). We would argue that the disproportionately large representational space such discourses about celebrity occupy may, among other things, draw scrutiny away from other cultural power-holders - particularly the financial and political elite (Negra and Holmes, 2008). The wealthiest in the world are those who command corporations in the telecommunications, retail and energy sectors (Forbes, 2014), whose wealth does not seem to attract the same level of anger and discursive humiliation. In terms of our celebrity case studies, Bill Gates' wealth, for example, could be justified through claims about his entrepreneurship and philanthropy, while celebrities such as Nicki Minaj and Katie Price were often positioned as disgusting and inauthentic, evaluated in particular in relation to their bodies. The analysis of young people's talk about Bill Gates in particular suggests some of the ways in which young people make sense of such corporate wealth, often evaluating it against individualistic, gendered and classed notions of meritocracy, hard work and success. Our data thus echo Smart's (2012) research, in which young people drew on neoliberal interpretations of economic inequality in their understanding of wealth and poverty. Celebrity talk is an important space, therefore, in which young people make sense of their own place in an unequal society. Our analysis thus contributes to a sociological understanding of how young people talk about inequality, and the role of celebrity culture in this collective meaning-making, developing Billig's (1992) analytical framework. 
While the rhetorical strategies we have explored in this study certainly offer some agency for young people in thinking about their futures, we would argue that they also work to naturalise and justify social inequality.
Potential Mechanisms and Functions of Intermittent Neural Synchronization Neural synchronization is believed to play an important role in different brain functions. Synchrony in cortical and subcortical circuits is frequently variable in time and not perfect. Few long intervals of desynchronized dynamics may be functionally different from many short desynchronized intervals although the average synchrony may be the same. Recent analysis of imperfect synchrony in different neural systems reported one common feature: neural oscillations may go out of synchrony frequently, but primarily for a short time interval. This study explores potential mechanisms and functional advantages of this short desynchronizations dynamics using computational neuroscience techniques. We show that short desynchronizations are exhibited in coupled neurons if their delayed rectifier potassium current has relatively large values of the voltage-dependent activation time-constant. The delayed activation of potassium current is associated with generation of quickly-rising action potential. This “spikiness” is a very general property of neurons. This may explain why very different neural systems exhibit short desynchronization dynamics. We also show how the distribution of desynchronization durations may be independent of the synchronization strength. Finally, we show that short desynchronization dynamics requires weaker synaptic input to reach a pre-set synchrony level. Thus, this dynamics allows for efficient regulation of synchrony and may promote efficient formation of synchronous neural assemblies. INTRODUCTION Synchrony of neural oscillations is believed to play important role in a variety of functions of the brain (e.g., Buzsáki and Draguhn, 2004;Colgin, 2011;Fell and Axmacher, 2011;Buzsáki and Schomburg, 2015;Fries, 2015;Harris and Gordon, 2015). Improperly organized (too excessive or too weak) synchrony is associated with several neurological and neuropsychiatric dysfunctions (e.g., Schnitzler and Gross, 2005;Singer, 2006, 2010;Oswal et al., 2013;Pittman-Polletta et al., 2015;Spellman and Gordon, 2015). However, the synchrony in cortical and subcortical circuits may not necessarily stay perfect for a prolong intervals of time (if it can be perfect at all). If synchrony is induced by transient stimulus, the transient character of this synchrony would be expected. But even in the idling dynamics of neural circuits of the brain prolonged perfect synchrony is rarely (if at all) reported. This implies that for some intervals of time synchrony may be stronger, while for other intervals of time it may be weaker. The temporal patterns of synchrony may exhibit variations of synchrony strength yielding some average synchrony values. Few long intervals of desynchronized dynamics may be functionally different from many short desynchronized intervals, although the synchrony may be the same on the average. Detection and quantification of the transient, varying, intermittent synchronization have been considered in the past (e.g., Hurtado et al., 2004;Le Van Quyen and Bragin, 2007). But these attempts were limited by the need in sufficiently long time-windows to obtain statistical significance, because synchronization is not an instantaneous phenomenon (Pikovsky et al., 2001). However, recent developments in time-series analysis (Ahn et al., 2011) allowed exploring the temporal patterning of synchrony on very short time-scales. 
The analysis of this fine temporal structure of synchronization is possible, because if some synchrony level is present on the average, then one can look at each cycle of oscillations and detect whether the signals are in the synchronous state or not. These methods were used to study neural synchronization in several different systems: synchronization between single units and LFPs in the basal ganglia of Parkinsonian patients (Park et al., 2010;Ratnadurai-Giridharan et al., 2016), synchronization of EEG signals in healthy humans (Ahn and Rubchinsky, 2013), synchronization of LFPs between prefrontal and hippocampus circuits in normal rats and rats experiencing repetitive psychostimulant injections . All these studies had one common feature: neural oscillations were observed to go out of synchrony frequently, but primarily for a short time interval. The observed synchrony level was reached by potentially very frequent, but short desynchronizations. Since this short desynchronization dynamics was observed across different species, different conditions, and different signal types, it may be a universal feature of synchronized activity of neural systems. In this study we are providing a possible explanation for this apparent experimentally observed universality. We do so by looking for answers to two questions: what are the cellular or network mechanisms of this dynamics? What is its potential functional advantage? We hypothesize that if this kind of dynamics is universal, it may be grounded in some general properties of neuronal excitability. In connection with this hypothesis, it is important to recall early insightful computational study (Somers and Kopell, 1993), which suggested that membrane conductances responsible for spiking help to speed up the establishment of synchrony. Here we will explore how experimentally observed short desynchronizations dynamics is defined by the kinetics of ionic channels, responsible for the generation of spikes. We also hypothesize that short desynchronization dynamics permits creation of synchronous states with weaker inputs. This may make neural systems more adaptable as they can easily create synchronous assemblies in response to synaptic or sensory inputs. Since short desynchronization dynamics may be a generic phenomenon based on the properties of membrane channels, which are hard to alter in experiment, we use computational neuroscience techniques to study very simple conductancebased neuronal models. We alter the properties of conductances to explore their critical features for short desynchronization dynamics and investigate how coupled neurons may be efficiently entrained by external input. Models are subjected to the same kind of time-series analysis techniques as were used in earlier experimental studies. As a result, we reveal potential cellular basis of short desynchronization dynamics in the model and present its potential functional advantages. Neuronal Model We use a conductance-based modified Morris-Lecar neuronal model (Izhikevich, 2007;Ermentrout and Terman, 2010). We choose it because it is a simple (perhaps, the simplest) model that directly retains membrane conductances. Even though the original Morris-Lecar model includes calcium and potassium currents, it is equivalent to a reduced classical Hodgkin-Huxley sodium-potassium model (Izhikevich, 2007;Ermentrout and Terman, 2010). So, by studying this neural model, we model neurons with sodium-potassium spiking mechanism with fast sodium and delayed rectifier potassium currents. 
We consider a two-variable model: v is the transmembrane voltage and w is the gating variable of the potassium current. The currents I_Na, I_K, and I_L are the sodium, potassium, and leak currents; I_app is a constant parameter and I_syn is a synaptic current (see below). g_Na, g_K, g_L are the maximal conductances for the Na+, K+, and leak currents. The functions m_∞(v) and w_∞(v) are the steady-state activation functions of the gating variables of the Na+ and K+ currents, and τ(v) is the activation time-constant of the K+ current. The functions m_∞(v) and w_∞(v) have sigmoid shapes while τ(v) has a unimodal shape. The term I_syn represents the synaptic current between cells. We consider neurons connected with excitatory synapses adapted from Izhikevich (2007) and Ermentrout and Terman (2010). For a cell i, the synaptic current is I_syn,i = g_syn (v_i − v_syn) Σ_{j≠i} s_j, where the sum is over those cells that send synaptic inputs to cell i. The synaptic variable s is modeled by a first-order kinetic equation (Equation 6), in which H_∞(x) = 1/[1 + exp(−x/σ_s)] is a sigmoidal function and v is the presynaptic neuron voltage (Izhikevich, 2007; Ermentrout and Terman, 2010). The parameter values are g_Na = 1, v_Na = 1, g_K = 3.1, v_K = −0.7, g_L = 0.5, v_L = −0.4, I_app = 0.045, v_m1 = −0.01, v_m2 = 0.15, β = 0.145, v_w1 = 0.08, ε = 0.02, g_syn = 0.005, v_syn = 0.5, α_s = 2, β_s = 0.2, θ_v = 0, σ_s = 0.2. We will further vary the values of ε, β, and v_w1 as described in the Results. Synchronization Analysis Phase analysis is frequently used to analyze synchronous neural activity of both continuous (LFP, EEG) and spiking signals (see, e.g., Lachaux et al., 1999; Hurtado et al., 2004; Le Van Quyen and Bragin, 2007). This analysis was used in the experimental studies revealing the prevalence of short desynchronization dynamics (Park et al., 2010; Ahn and Rubchinsky, 2013; Ahn et al., 2014; Ratnadurai-Giridharan et al., 2016). So we assume a very similar approach here. For spiking activity, the phase of a neuron i is reconstructed as the angle of rotation of the trajectory around the rest state (v̂_i, ŵ_i), which is set as the center of rotation in the (v_i, w_i)-plane. Then we consider an average synchronization index to measure the strength of the phase locking between two signals (Pikovsky et al., 2001; Hurtado et al., 2004): γ = |(1/N) Σ_{j=1..N} exp(iΔφ(t_j))|, where Δφ(t_j) = φ_1(t_j) − φ_2(t_j) is the phase difference, the t_j are the sampling points, N is the number of data points to be considered, and |·| is the absolute value of a complex number. This phase synchronization index γ varies from 0 (lack of synchrony) to 1 (perfect synchrony). It provides an average value of phase-locking. There may be cycles of oscillations where the phase difference is close to the average value of the phase difference (phase-locked, synchronized state) and cycles where it is not close to it (desynchronized state). To study the fine temporal structure of the dynamics of synchronization we use the methods recently developed in Park et al. (2010) and Ahn et al. (2011). Whenever φ_1 crossed zero from negative to positive values, we recorded the value of φ_2, generating a set of consecutive phase values {φ_i}, i = 1, . . . , N. If the value of φ_i differs from the average value of φ_i by less than π/2 then the neurons are considered to be in a synchronized state, otherwise they are in the desynchronized state. We chose the value of the threshold to be π/2 because the experimental studies we discussed above used this value. 
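For concreteness, one way to write Equations (1-6) that is consistent with the parameter names above and with the stated properties (sigmoidal m_∞(v) and w_∞(v), unimodal τ(v) with maximal value 1/ε) is the following sketch, assuming the standard modified Morris-Lecar forms of Izhikevich (2007) and Ermentrout and Terman (2010); the exact expressions and their numbering are assumptions rather than the authors' verbatim equations:
dv/dt = I_app − g_L (v − v_L) − g_Na m_∞(v) (v − v_Na) − g_K w (v − v_K) − I_syn   (1)
dw/dt = (w_∞(v) − w) / τ(v)   (2)
m_∞(v) = 0.5 [1 + tanh((v − v_m1) / v_m2)]   (3)
w_∞(v) = 0.5 [1 + tanh((v − v_w1) / β)]   (4)
τ(v) = 1 / (ε cosh((v − v_w1) / (2β)))   (5)
ds/dt = α_s H_∞(v_pre − θ_v) (1 − s) − β_s s,   with H_∞(x) = 1 / (1 + exp(−x / σ_s))   (6)
Under this form the maximal value of τ(v) is 1/ε (attained at v = v_w1), β sets the widths of w_∞(v) and τ(v), and v_w1 sets both the half-activation voltage and the voltage at which τ(v) peaks, matching the way these parameters are discussed in the Results. The phase used in the synchronization analysis can likewise be written as φ_i(t) = atan2(w_i(t) − ŵ_i, v_i(t) − v̂_i), although this particular angle convention is also an assumption.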
The duration of desynchronization events is defined as the number of cycles of oscillations that the system spends in the desynchronized state minus one. The mode of the distribution of desynchronization durations is defined as the duration with the highest probability. We characterize the fine temporal structure of intermittent synchronization by quantifying the properties of the distribution of desynchronization durations. We compute the relative frequencies (probabilities) of the durations of desynchronization events. This is similar to how the experimental data were characterized in the studies of the temporal patterns of synchrony (Park et al., 2010; Ahn and Rubchinsky, 2013; Ahn et al., 2014; Ratnadurai-Giridharan et al., 2016). We use the mode of the distribution of desynchronization durations and the probability to observe this mode, p_mode. If the mode of the desynchronization duration is short, but other desynchronizations (especially longer ones) are almost as frequent, then the dynamics is not necessarily dominated by short desynchronizations. However, if p_mode is close to one, then all other desynchronization durations are rare. In our approach the duration of synchronization and desynchronization intervals is measured not in absolute time units, but in cycles of oscillations, as was done in the experimental studies. This makes it easier to compare synchronization patterns between rhythms of different frequency. However, as we study the differences between different desynchronization durations in the modeling, we also compare dynamics with the same frequencies of rhythms (see Results). RESULTS We will study the dynamics of coupled model neurons as we vary parameters of the potassium current. We do so by varying three different parameters: ε, β, and v_w1 (see Equations 4 and 5); they all affect the effective value of the activation time-constant τ(v) of the potassium current. Larger values of τ delay the activation of the potassium current and promote a characteristic spike shape with a very sharp rise of voltage, a faster decay of voltage, and a prolonged interval between spikes. Lowering the effective values of τ in the model (for example, by using larger values of ε) will lead a model neuron to generate a less spiky and more quasi-sinusoidal profile of activity (Figure 1). By changing the values of ε, β, and v_w1 we can study model neurons exhibiting spiking activity as in Figure 1A as well as more sinusoidal activity as in Figure 1B (which is not necessarily very realistic, but will help in understanding the mechanisms and functions of physiological activity). Kinetics of Voltage-Gated Potassium Channel and the Temporal Patterning of Synchronization We consider a minimal neuronal network that exhibits synchronized dynamics: two neurons mutually connected with excitatory synapses (Figure 2). These two neurons satisfy Equations (1-5) and the synapses are described by Equation (6). We consider two weakly coupled neurons with a small difference in firing rate (that is, the frequency of oscillations of voltage) due to slightly different ε. Note that since ε_1 ≠ ε_2 and the coupling strength g_syn = 0.005 is weak, the two cells are not fully synchronized. Thus, the synchrony is intermittent rather than perfect. The Effect of the Peak Value of the Activation Time-Constant The magnitude of the voltage-dependent activation time-constant τ(v) of the potassium current is inversely proportional to the parameter ε in such a way that the maximal value of τ(v) is 1/ε. 
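A minimal sketch of the cycle-by-cycle classification and the desynchronization-duration statistics defined above is given here; it assumes the phases are available as NumPy arrays wrapped to [-pi, pi) on a common time grid. The function name and the use of a circular mean for the "average value of φ_i" are illustrative assumptions, and the paper's "minus one" convention for durations can be applied to the returned run lengths.

```python
import numpy as np

def desynchronization_durations(phi1, phi2):
    """Cycle-by-cycle synchrony analysis (sketch).

    phi1, phi2: phases in radians, wrapped to [-pi, pi), sampled at the same
    time points.  Returns run lengths of consecutive desynchronized cycles.
    """
    # Check-points: upward zero crossings of phi1 give one value of phi2 per cycle.
    crossings = np.where((phi1[:-1] < 0) & (phi1[1:] >= 0))[0] + 1
    phi = phi2[crossings]

    # Deviation from the (circular) mean phase; |deviation| > pi/2 marks a
    # desynchronized cycle, the threshold used in the experimental studies.
    mean_angle = np.angle(np.mean(np.exp(1j * phi)))
    deviation = np.angle(np.exp(1j * (phi - mean_angle)))
    desynchronized = np.abs(deviation) > np.pi / 2

    # Lengths of consecutive runs of desynchronized cycles.
    durations, run = [], 0
    for flag in desynchronized:
        if flag:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return durations

# Mode of the distribution and the probability p_mode of observing it:
# durations = desynchronization_durations(phi1, phi2)
# values, counts = np.unique(durations, return_counts=True)
# mode, p_mode = values[np.argmax(counts)], counts.max() / counts.sum()
```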
We consider here how ε affects the durations of desynchronization events and the accompanying changes in the average synchrony level and the mean frequency of spiking. As the values of ε_1 and ε_2 increase, the fine temporal structure of synchronization changes, as evident from the changes of the mode of the distribution of desynchronization durations (Figure 3A). Smaller values of ε promote short desynchronization intervals lasting for only one cycle of oscillations. On the contrary, the increase in ε leads to the increase of the mode of the distribution of desynchronization durations. That is, as ε increases, the most frequent desynchronization intervals are getting longer. Since the activation time-constant τ(v) is inversely proportional to ε, a larger value of τ(v) (which promotes a spike-like waveform in the model) promotes short desynchronization dynamics. The synchrony strength γ experiences only very small variations (Figure 3B). This indicates that the durations of desynchronizations may be independent of the synchrony strength. The same level of synchrony may be reached with numerous short desynchronizations or few long desynchronizations. The mean frequency (firing rate) grows substantially (Figure 3C). This is expected because the growth rate of w is proportional to ε (Equation 2). As a result, while desynchronization intervals measured in cycles of oscillations are longer for larger ε, their durations in absolute time are not necessarily growing. We will address this issue below. Note that the probability of the dominant duration of desynchronization events, p_mode (thin gray line without dots in Figure 3A), is mostly close to 1 and always higher than 0.5. Thus, the desynchronization durations of the corresponding number of cycles are really dominant (because the sum of all probabilities of the durations of desynchronization events is 1). FIGURE 2 | Diagram of a minimal network of excitatory coupled neurons. We use ε_1 ≠ ε_2 (i.e., neurons have different firing rates) and the coupling strength g_syn is not very strong. The Effect of the Width of the Voltage-Dependence of the Activation Time-Constant τ(v) In the model, the parameter β is related to the width of the steady-state activation function w_∞(v) and to the range of voltages over which the activation time-constant τ(v) is substantially different from 0. The results presented below will show that it is mostly the width of the voltage-dependence of the activation time-constant τ(v) that matters for the properties of desynchronization durations. As β increases, the width of τ(v) increases. That is, the range of voltages where the activation time-constant is different from 0 is getting larger. This may effectively bring τ closer to the maximal possible value (which is 1/ε in the model) for a larger range of voltages and thus for a longer time. Thus, similar to the decrease of ε, larger β will promote more "spiky" and less quasi-sinusoidal waveforms. Figure 4 shows how the parameter β affects the synchronized dynamics of coupled neurons. Larger values of β promote shorter desynchronization episodes (Figure 4A). There is also an effect on the synchrony strength (Figure 4B) and the frequency (Figure 4C). Shorter desynchronizations correspond to a higher synchrony level. As β changes, the frequency changes. This may mitigate the short desynchronization phenomenon if desynchronization duration is measured in absolute units of time instead of cycles of oscillations. 
Nevertheless, similar to the case considered above, a change in a parameter that leads to larger values of the activation time-constant τ (an increase in β) promotes desynchronizations of shorter durations [as signified by the high value of the probability at the mode of the distribution of desynchronization durations (gray thin line in Figure 4A)]. The Effect of the Voltage of Half-Activation and Maximal Activation Time-Constant We now consider the effect of v_w1, which is the midpoint of the steady-state activation function w_∞(v), that is, the voltage at which half of the channels open. The same parameter defines the voltage at which the activation time-constant τ(v) peaks. An increase in v_w1 shifts both curves w_∞(v) and τ(v) in the direction of higher voltages. The conductance will start to increase at a higher voltage and at a later time, and will start to decrease at an earlier time. Thus, a larger value of v_w1 may be expected to have an effect analogous to a decrease in τ. The results of the numerical simulations are presented in Figure 5: smaller values of v_w1 promote shorter desynchronizations (Figure 5A). (Figure panel caption: The mean frequency of activities of both neurons; since ε_2 = 1.2ε_1, neuron 2 has a slightly higher frequency than the mean frequency while neuron 1 has a slightly lower frequency than the mean frequency.) In this case, the synchronization strength is larger for short desynchronization dynamics (Figure 5B), but the frequency is almost constant (Figure 5C). Thus, the desynchronizations are short here not only if measured in the number of cycles (spikes), but also if measured in absolute time units. Changing Desynchronization Durations Independently of Both Frequency and Synchrony Strength The changes in desynchronization durations in the numerical experiments above are accompanied by changes of either the average synchrony strength or the firing rate (or even both). Here we consider whether the desynchronization durations can vary independently of both synchrony strength and firing rate. To study this, we modify Equations (4) and (5) so that the values of the parameter β in the equations for w_∞(v) and τ(v) are not identical. This means the half-activation voltage is different from the voltage at which τ(v) has its maximal value. Smaller β_w makes the slope of the steady-state activation function w_∞(v) larger, while smaller β_τ makes the width of the time-constant function τ(v) smaller. We let β_w and β_τ change in opposite directions. As β_w decreases from 0.134, β_τ increases from 0.061 at a different rate (β_w = 0.134 − 0.001k and β_τ = 0.061 + 0.0005k, where k = 0, 1, . . . , 40). For other parameters, we use ε_2 = 1.3ε_1, ε_1 = 0.03, I_app = 0.04, g_syn = 0.005, v_w1 = 0.07. These changes of β_w and β_τ are not necessarily biologically realistic, but they allow us to explore whether the changes of desynchronization durations must covary with the changes of average synchrony or firing rate. Figure 6 shows that in this case the synchrony strength and the firing rate are almost constant while the mode of desynchronization durations changes drastically. In other words, simultaneous variations of the widths of w_∞(v) and τ(v) vary the distribution of desynchronizations independently from the synchrony strength and the firing rate. Thus, the same level of synchrony strength may be supported either with many short desynchronizations or with few long desynchronizations, regardless of whether the durations of desynchronizations are measured in cycles of oscillations or in absolute time units. 
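One plausible way to write the modified steady-state activation function and activation time-constant with separate width parameters, assuming the tanh/cosh forms sketched earlier, is:
w_∞(v) = 0.5 [1 + tanh((v − v_w1) / β_w)]
τ(v) = 1 / (ε cosh((v − v_w1) / (2 β_τ)))
In this sketch only the width parameters differ while the center voltage v_w1 is shared; it is an illustration consistent with the roles of β_w and β_τ described above, not the authors' exact modified Equations (4) and (5).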
Short Desynchronization Dynamics and Synchronization Threshold To study potential functional advantages of short desynchronization dynamics, we will consider two neurons mutually connected with excitatory synapses (as before) receiving common synaptic input from a third neuron: neuron 3 excites neurons 1 and 2 through excitatory synapses (but does not get any feedback, Figure 7). We consider two different versions of the three-neuron network in Figure 7. In the first version, the parameters are selected in such a way that when g_syn1 = 0 neurons 1 and 2 exhibit dynamics with mostly short desynchronizations. The second version exhibits partially synchronized dynamics with the most common desynchronization intervals lasting for 4 cycles of oscillations when g_syn1 = 0. In other words, we consider how two coupled neurons exhibiting either short desynchronization dynamics or longer desynchronization dynamics respond to the common synaptic input. One network has β_w = 0.094 and β_τ = 0.081; this is the left end of the horizontal axis in Figure 6A. The mode of desynchronization durations is just 1 and we will call this network the "cycle 1" network (short desynchronizations network). The other network has β_w = 0.134 and β_τ = 0.061; this is the right end of the horizontal axis in Figure 6A. The mode of desynchronization durations is 4 and we will call this network the "cycle 4" network (longer desynchronizations network). It is important to note that both networks have almost the same synchrony strength (Figure 6B) and firing rate (Figure 6C). So, except for the difference in desynchronization durations, the dynamics of the two networks are similar. That is, they have the same synchrony level and the same period of oscillations in the absence of synaptic input from neuron 3. In the numerical experiments, ε_1 = 0.03 and ε_2 = 1.3ε_1, the same values as used in Figure 6. We consider four different values of the firing rate in neuron 3: ε_3 = 0.5ε_1, (ε_1 + ε_2)/2, 1.5ε_1, 2ε_1. So, the firing rate in neuron 3 is either substantially lower than in neurons 1 and 2, equal to the average of the firing rates of neurons 1 and 2, or higher than the firing rates in neurons 1 and 2. All other parameters of neurons 1, 2, and 3 are the same and fixed as those used in Figure 6. Now let us consider these two networks as the common input to neurons 1 and 2 is getting stronger due to an increase of g_syn1 from zero (while g_syn = 0.005 is fixed, that is, the coupling between neuron 1 and neuron 2 is relatively weak). As the synaptic input from neuron 3 to neurons 1 and 2 is getting stronger, neurons 1 and 2 become more synchronous and will eventually be in full synchrony with each other due to the common synaptic input and the mutual synaptic coupling. We compute the synchrony index γ for the "cycle 1" and "cycle 4" networks (γ(1C) and γ(4C), respectively) for increasing values of g_syn1. To study how differently these networks are synchronized, we consider the absolute and relative difference of the synchronization indices γ(1C) and γ(4C) for different values of g_syn1. Figure 8 presents the averages of γ(1C) − γ(4C) (thick solid line) and of the relative difference (γ(1C) − γ(4C))/γ(4C) (inserts in the figure). Both quantities indicate how much synchronization in the "cycle 1" (short desynchronizations) network is stronger than synchronization in the "cycle 4" network when they receive the same synaptic input from neuron 3. When this input is weak (g_syn1 is small), γ(1C) and γ(4C) are close to each other. 
When g_syn1 is large, γ(1C) and γ(4C) are again close to each other because both networks are necessarily strongly synchronous due to the strong input. However, for the values of g_syn1 between zero and the synchronization threshold value, γ(1C) − γ(4C) is large and positive. So the networks exhibiting short desynchronization dynamics in the absence of input ("cycle 1" networks) reach either the same or higher synchrony levels than the long desynchronization ("cycle 4") networks for the same strength of synaptic input g_syn1. This phenomenon is observed regardless of the firing rate in the presynaptic neuron 3 (i.e., regardless of ε_3). Sometimes this difference in the synchronization strength is moderate, but sometimes it is quite substantial (see Figure 8). The synchrony index is bounded by one from above, so the magnitude of the phenomenon is more emphasized by observing the relative value of the synchronization index difference (inserts in Figure 8). We also measure the threshold value of g_syn1 for the two neurons to reach synchronized dynamics without desynchronization events (Figure 9). This does not imply complete synchrony, but implies only small deviations between the phases of the two signals, small enough to have no desynchronization events. As can be seen in Figure 9, the computed synchrony thresholds for the short desynchronization ("cycle 1") network were lower than the synchrony thresholds for the long desynchronization ("cycle 4") network for all considered firing rates (all possible ε_3). The results presented in Figures 8, 9 indicate that with the average synchrony level and the mean firing rate being equal, neural systems with short desynchronization dynamics reach higher synchrony for the same synaptic input strength and need weaker inputs to be synchronized than neural systems with long desynchronization events. (Figure 8 caption, in part: the relative difference is presented at the inserts; γ(1C) and γ(4C) represent the synchrony index γ for the "cycle 1" (short desynchronizations) and "cycle 4" (long desynchronizations) networks, respectively. Subplots (A-D) are for different values of the firing rate of the incoming signal, corresponding to ε_3 = 0.5ε_1, (ε_1 + ε_2)/2, 1.5ε_1, 2ε_1, respectively.) FIGURE 9 | Threshold value of synaptic strength g_syn1 to reach synchronized dynamics without desynchronization events for different values of ε_3. Black squares represent the critical value of g_syn1 for the short desynchronization ("cycle 1") network and the gray circles represent the critical value of g_syn1 for the long desynchronization ("cycle 4") network. Cellular Mechanisms of Short Desynchronization Dynamics Imperfect synchrony is widely observed in the activity of neural networks of the brain. New time-series analysis techniques showed that intervals of synchronous dynamics are interspersed with desynchronized episodes, and most desynchronized episodes are very short (see references in the Introduction). This stereotyped fine temporal structure of neural synchronization is not an artifact of the analysis method because other types of patterning of synchronization are possible in non-neural coupled oscillators (Ahn et al., 2011; Rubchinsky et al., 2014). The present study provides potential mechanisms for this type of temporal patterning of neural synchrony. We varied several parameters of the potassium conductance and identified conditions leading to intermittent neural synchrony with predominantly short desynchronization episodes similar to the experimental ones. 
All these conditions (large peak value of activation time-constant, large width of dependence of activation time-constant on voltage, lower values of voltage for peak activation time-constant) lead to the relatively large values of the activation time-constant τ (v) in the right range of voltages. The large value of τ (v) leads to the delay in activation of potassium current, so that a sharp spike can be generated. And, as our results show, it promotes the short desynchronizations dynamics. The results of the computational modeling also indicate that the distribution of desynchronization durations may be independent of the synchronization strength. The same synchrony strength may be achieved with desynchronizations of different durations. Moreover, our results regarding comparison of synchronization in networks exhibiting short desynchronizations and long desynchronizations are obtained for the case when not only average synchrony level is the same, but the period of oscillations (the firing rate) is the same. By appropriate adjustment of model parameters we dissociated the effects of frequency of oscillations and of average synchrony strength from the effects of fine temporal patterning of synchronized dynamics. These model-based observations fit with experimental observations of the changes in the distribution of desynchronization durations in prefrontal cortex-hippocampal synchrony in behavioral sensitizations experiments . In these experiments, the desynchronization durations were predominantly short and their distributions were altered after psychostimulant administration, while the average synchrony levels stayed the same. The "spikiness" of oscillations of transmembrane voltage is a very generic property of many neurons, which relies on the fast activation of current with high reversal potential and slow activation of current with low reversal potential. Our results show that the same conditions that promote short desynchronization dynamics promote the characteristically sharp shape of an action potential. This may explain why very different neural systems exhibit short desynchronization dynamics, as we described in Introduction. Limitations of the Modeling Approach We use a very simple model of a neuron and very simple model of a network. There are many factors, which may affect synchronous dynamics of neural activity, yet they are not represented in the model. Other important factors, which affect neural synchrony, are different membrane currents and their properties (we have a model with just two conductances and consider only several parameters of one conductance) and the size of the network (we have a very small network). Heterogeneity of the networks is also important (we have a very minimal representation of heterogeneity). Synaptic plasticity is not incorporated in our model (and is the subject of the future research). Finally, noise may affect temporal patterns of synchrony, which is not considered in this study either. However, even though these factors are not incorporated in the model (which captures only some very basic mechanisms of neural activity), the model is able to generate realistic synchrony patterns. So, the right way to interpret the modeling results is to see what these basic mechanisms are capable of. These modeling results suggest that these very basic neural mechanisms are capable of explaining the properties of experimentally observed intermittency of neural synchrony. 
As we discussed above, short desynchronization dynamics has been observed in several different neural systems. The ability of the minimal neural network considered here to describe the properties of the intermittent synchrony (which is common to all those systems) is probably an indicator that the general neural mechanisms built into the model are adequate to the considered phenomena. Inhibition plays an important role in neural synchronization, but is not considered in our model. The experimental data discussed here were collected from cortical and subcortical networks with excitatory and inhibitory synapses. It will be interesting to see how the intermittent patterns of synchrony are affected by inhibitory synapses. We would also like to note that our earlier study with a more advanced neuronal and network model (which included excitatory and inhibitory synapses) did provide a quantitatively adequate description of the short desynchronization dynamics of the beta-band oscillations in the basal ganglia in Parkinson's disease. The present modeling study is not designed to provide a quantitative description of a specific experiment, but rather it provides a qualitative description of common aspects of neural synchrony in different neural systems. Potential Functional Significance of Short Desynchronization Dynamics Our computational results suggest one way in which short desynchronization dynamics can be beneficial for neural systems. With two important properties of the dynamics (average synchrony strength and firing rate) being equal, neural systems with short desynchronizations are easier to synchronize with common synaptic input. We showed that the same strength of common synaptic input leads to a larger synchrony level in the short desynchronization system. In other words, short desynchronization dynamics allows reaching a pre-set synchrony level with weaker input. So, if strong synchrony is needed, systems with short desynchronizations will reach the pre-set synchrony strength with weaker inputs compared to systems with longer desynchronizations. Given the functional importance of synchronization in many neural systems (see references in the Introduction), short desynchronization dynamics may allow for efficient regulation of synchrony levels. While the same level of synchrony may potentially be achieved with few long desynchronization episodes as well as with many short desynchronization episodes, only short desynchronization dynamics is experimentally observed in neural synchrony in the brain. Our modeling results suggest that this short desynchronization dynamics is easier to control with synaptic input. Thus, a very basic property of the delayed rectifier potassium current (its delayed activation) is likely to promote efficient formation and break-up of synchronized assemblies. AUTHOR CONTRIBUTIONS LR conceived research; SA and LR designed research; SA performed numerical simulations; SA and LR analyzed and interpreted the results; SA and LR wrote the manuscript.
2017-06-15T18:42:44.716Z
2017-05-30T00:00:00.000
{ "year": 2017, "sha1": "a57b7e56710d06326d2696b3fdadbdd4d72727f0", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fncom.2017.00044/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a57b7e56710d06326d2696b3fdadbdd4d72727f0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
221151188
pes2o/s2orc
v3-fos-license
Fatigue Assessment using ECG and Actigraphy Sensors

Fatigue is one of the key factors in the loss of work efficiency and health-related quality of life, and most fatigue assessment methods are based on self-reporting, which may suffer from many factors such as recall bias. To address this issue, we developed an automated system using wearable sensing and machine learning techniques for objective fatigue assessment. ECG/Actigraphy data were collected from subjects in free-living environments. Preprocessing and feature engineering methods were applied before an interpretable solution and a deep learning solution were introduced. Specifically, for the interpretable solution, we proposed a feature selection approach which can select less correlated and highly informative features for better understanding of the system's decision-making process. For the deep learning solution, we used a state-of-the-art self-attention model, based on which we further proposed a consistency self-attention (CSA) mechanism for fatigue assessment. Extensive experiments were conducted, and very promising results were achieved.

INTRODUCTION

Fatigue is one of the main medical symptoms used to define weakness and ageing problems [3], and in many chronic diseases it is also a key factor in the loss of work efficiency and health-related quality of life (HRQoL), which may impose considerable health and economic burdens [9]. Reliable and sensitive fatigue assessment is a key aspect of the evaluation of the therapeutic effects of treatments.

Most fatigue assessments are based on self-reporting [14]. However, like most questionnaire-based approaches, such subjective measurement has key limitations such as recall bias [1], and is challenging to quantify in a repeatable and reproducible way [13][2]. Although its reliability can be improved via tracking over long terms or with high frequency during short periods, the costs and patient burden can increase significantly. Recently, sensing and machine learning (ML) techniques have been widely used for automated health assessment [5,7,8,12,17,20,25]. Through modelling the collected behavioural or physiological signals, health can be assessed in an objective and continuous manner, in contrast to the subjective and non-continuous self-reporting methods. For automated fatigue assessment, potential sensing modalities include accelerometer [21,34], HR [4,21], ECG [11], EEG [23], EMG [15], etc., based on which ML methods were used for modelling. Yet these approaches were tested on subjects in controlled environments. In this work, we developed models based on ECG/Actigraphy data collected from free-living environments. The participants were asked to wear an Actigraph watch and a Vital Patch for 7 days, and they were required to self-report fatigue levels 4 times a day.
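For concreteness, the association between a sensor sample and one of the four daily self-report periods can be written as a small helper; the bin boundaries follow the reporting periods detailed in the Data Collection section below, while the function name and the use of pandas are illustrative assumptions rather than the authors' implementation.

```python
import pandas as pd

def daily_segment(ts: pd.Timestamp) -> str:
    """Map a timestamp to the reporting period of the corresponding fatigue score."""
    h = ts.hour
    if 6 <= h < 12:
        return "morning"
    if 12 <= h < 18:
        return "afternoon"
    if 18 <= h < 24:
        return "evening"
    return "night"  # 12am-6am

# Example: label a stream of sample timestamps so that each sample can later be
# grouped with the self-reported score of its segment.
samples = pd.date_range("2020-01-01 05:00", periods=6, freq="4H")
print({str(t): daily_segment(t) for t in samples})
```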
After preprocessing and feature engineering, we developed regression systems to map the signals into the fatigue score for objective fatigue assessment, as shown in Fig. 1. Specifically, both an interpretable solution and a deep learning solution were proposed for modelling.

METHODOLOGY

In this paper, we aim to develop a fatigue assessment system which can automatically output fatigue scores given Actigraphy/ECG. Fig. 1 demonstrates an overview of this system, and in this section details are provided from data collection and ECG/Actigraphy data processing to the interpretable and deep learning solutions.

Data Collection

In this preliminary study, 13 participants were recruited and data were collected to cover two clinical visits. After each visit, the participants were required to wear two medical-grade devices (for 7 days in free-living environments), namely the Actigraph GT9X Link [27] and the Vital Patch [30], from which Actigraphy/ECG data can be obtained. Visual analog scale questionnaires [22] were also sent out to participants asking the question "How severe has your fatigue been now?". The fatigue score, ranging from 0 to 10, represents fatigue levels from "No fatigue" (0) to "Maximal imaginable fatigue" (10). During the 7-day data collection period, the subjects were asked to record their fatigue levels 4 times a day, i.e., morning (6am-12pm), afternoon (12pm-6pm), evening (6pm-12am), and night (12am-6am). Accordingly, we divided the daily ECG/Actigraphy signals into 4 segments, corresponding to the 4 daily recorded fatigue scores.

Preprocessing and Feature Engineering

After ECG/Actigraphy collection it is crucial to filter out the irrelevant information caused by various factors such as device artifacts, inappropriate wearing positions, non-wearing, etc. After that, modality-specific feature engineering can be performed for a compact representation.

Processing ECG. The ECG signal can be distorted by various artifacts, e.g., equipment-induced movement artifacts [16]. Heart rate variability (HRV), derived from ECG, is normally considered a reliable measure that is less sensitive to these artifacts [19]. HRV features can be extracted based on the Normal-Normal Interval (NNI), which can be obtained by R-R interval (RRI) detection followed by post-processing. In this work, we segmented the raw ECG into 5-min windows, and for each window we performed the following procedure to estimate the NNIs: (1) detecting the R-points/R-peaks (in the QRS complex); (2) computing the RRIs based on the detected R-points; (3) removing RRI outliers, i.e., the ones outside the range 300-2000 ms [24] and the ones differing by more than 20% from the previous interval; (4) linearly interpolating the removed RRIs. We achieved step (1) using the Python package NeuroKit [31] and steps (3)-(4) using the toolbox HRV-Analysis [29]. Based on the NNIs, we further assessed the data quality of each window and removed the ones with low quality. Given the NNIs of each window, based on the source code provided by [33] we extracted 30 HRV features in four domains (time, geometrical, frequency, and non-linear domains), as shown in Table 1.

Processing Actigraphy. For Actigraphy, one major quality issue is missing data caused by non-wearing. Via ActiLife [28], we calculated the Actigraphy counts every 30 seconds and detected the non-wear time (for invalid data removal).
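A minimal sketch of the per-window NNI estimation described above is given below. It assumes the neurokit2 package for R-peak detection and implements the outlier removal and interpolation directly with NumPy, so the exact calls differ from the NeuroKit/HRV-Analysis pipeline cited in the text; the 300-2000 ms range and the 20% rule come from the text, while the sampling rate and function name are illustrative.

```python
import numpy as np
import neurokit2 as nk  # the paper cites NeuroKit; neurokit2 is assumed here

def ecg_window_to_nni(ecg, fs=250):
    """Turn one 5-min ECG window into cleaned NN intervals (ms); quality checks omitted."""
    # (1) R-peak detection
    _, info = nk.ecg_peaks(ecg, sampling_rate=fs)
    r_peaks = np.asarray(info["ECG_R_Peaks"])
    # (2) R-R intervals in milliseconds
    rri = np.diff(r_peaks) / fs * 1000.0
    # (3) flag outliers: outside 300-2000 ms or >20% change vs the previous interval
    bad = (rri < 300) | (rri > 2000)
    bad[1:] |= np.abs(np.diff(rri)) > 0.2 * rri[:-1]
    # (4) linear interpolation over the flagged intervals
    idx = np.arange(len(rri))
    nni = rri.copy()
    nni[bad] = np.interp(idx[bad], idx[~bad], rri[~bad])
    return nni
```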
Within every 5-min window, based on the Actigraphy counts we further extracted 8 statistical features, i.e., mean, median, standard deviation, variance, minimum value, maximum value, skewness and kurtosis, for further processing.

Multimodal Feature Sequence Construction. Since there exists substantial missing data for both modalities (ECG/Actigraphy), we only preserved the 5-min windows in which both are valid. At the segment level (i.e., up to 6 hours), we further excluded the segments with less than 100 minutes of usable data. After preprocessing and feature engineering, the original segment can be transformed into a D-dimensional sequence X = {x_t ∈ R^D}_{t=1}^{T}, where T is the sequence length (i.e., the number of windows/epochs within a segment), and x_t refers to the features extracted from Actigraphy, ECG, or both at timestamp t (i.e., the t-th window/epoch in a segment). It is worth noting that, due to the aforementioned data cleaning operations, the sequence length T is a non-fixed number; in our dataset T has mean ± std 56.9 ± 13.6, with maximum/minimum values 72 (i.e., 6 hours) and 21 (i.e., 105 minutes), respectively.

Interpretable Solution

In this subsection, we aim to extract interpretable features from the sequence X = {x_t ∈ R^D}_{t=1}^{T} before mapping them to the corresponding fatigue score y ∈ {0, 1, ..., 10}.

High-level Feature Extraction. There are several interpretable machine learning models such as linear models, decision trees, etc., yet they cannot be directly applied to time-series data. To address this issue, we further extracted high-level features from the sequence X over the time axis. Specifically, for each dimension (out of D), we calculated 11 descriptive statistics, namely the 10th percentile, 25th percentile, 50th percentile, 75th percentile, 90th percentile, mean, minimum, maximum, standard deviation, skewness and kurtosis. The high-level feature vector x_high ∈ R^{11D} can be formed simply by vector concatenation.

Feature Selection. For x_high ∈ R^{11D}, there may exist a high level of feature redundancy. For example, in Fig. 2b we visualised the feature correlation matrix corresponding to the combined ECG/Actigraphy modality, and we can observe a high level of feature correlation (as indicated by brighter colours). Although there exist feature decorrelation and dimension reduction algorithms such as PCA, in order to preserve the interpretability of the features we proposed a Feature Selection (FS) mechanism. Given the training set {X_high, y}, where x_high ∈ X_high and y ∈ y, we performed the following procedure: (1) similar to Fig. 2b, calculating the correlations among all features, i.e., corr(X_high, X_high); (2) grouping features together with pair-wise correlation coefficients higher than a threshold (0.8 in this work); (3) calculating the correlation coefficients between each feature and the fatigue score, i.e., corr(X_high, y); (4) selecting the top feature from each group based on the coefficients corr(X_high, y); (5) performing LASSO to further select the key features. It is worth noting that, owing to the high dimensionality, before applying LASSO we performed the "coarse feature selection" mechanism via steps (2)-(4), which can select the less correlated features with the highest relevance to the fatigue score. LASSO, a data-driven feature selection approach, can then further refine the feature selection. Based on the selected features, interpretable regressors (e.g., linear models, tree-based approaches) can then be used.
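The coarse grouping plus LASSO procedure above can be sketched as follows. Only the 0.8 threshold and the five-step outline come from the text; the greedy grouping strategy, the LASSO regularisation strength, and the use of pandas/scikit-learn are assumptions.

```python
import pandas as pd
from sklearn.linear_model import Lasso

def coarse_then_lasso(X: pd.DataFrame, y: pd.Series, corr_thr=0.8, alpha=0.1):
    """Correlation-grouping feature selection followed by LASSO (features assumed standardised)."""
    feat_corr = X.corr().abs()              # step (1): feature-feature correlations
    relevance = X.corrwith(y).abs()         # step (3): feature-score correlations
    remaining, kept = list(X.columns), []
    while remaining:                        # steps (2) and (4): group, keep the top feature
        f = remaining.pop(0)
        group = [f] + [g for g in remaining if feat_corr.loc[f, g] > corr_thr]
        kept.append(max(group, key=lambda g: relevance[g]))
        remaining = [g for g in remaining if g not in group]
    lasso = Lasso(alpha=alpha).fit(X[kept], y)   # step (5): data-driven refinement
    return [f for f, w in zip(kept, lasso.coef_) if w != 0]
```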
Deep Learning Solution

In the interpretable solution, only 11 simple statistical features were used to encode the time series during high-level feature extraction, which may cause information loss. A popular way to preserve the temporal information is to use deep sequential modelling such as long short-term memory (LSTM) [10], which can encode the input sequence X = {x_t ∈ R^D}_{t=1}^{T} into hidden states H = {h_t ∈ R^{D_l}}_{t=1}^{T}, before using the last hidden state h_T for prediction. Note that D_l is the hidden state dimension. LSTM can be trained by backpropagation through time (BPTT) [10], and for regression problems the mean squared error (MSE) loss L_MSE is normally used.

LSTM with Self-Attention (LSTM-SA). Compared with LSTM, which uses the last hidden state h_T for prediction, Self-Attention (SA) [26] further employs information from the whole sequence by utilising an attention vector α ∈ R^T. In the LSTM-SA model, three additional model parameters need to be estimated, namely W_Q, W_K, W_V ∈ R^{D_a × D_l}, where D_a is the attention dimension. Given the hidden states H ∈ R^{D_l × T} (from the LSTM), three matrices named Queries Q ∈ R^{D_a × T}, Keys K ∈ R^{D_a × T}, and Values V ∈ R^{D_a × T} can be calculated via linear transformations such that K = W_K H, Q = W_Q H, and V = W_V H. Specifically, K is employed to learn the distribution of the attention vector conditioned on Q, and V is used to exploit the information representation. Given that, the attention vector can be calculated via

α = softmax(K^T q_T / sqrt(D_a)),   (1)

where q_T is the T-th column of Q. With the attention vector α, the new D_a-dimensional representation z_T = Vα can be used for prediction. For regression problems, L_MSE can be used.

LSTM with Consistency Self-Attention (LSTM-CSA). LSTM-SA is a powerful tool in many applications, but it has a non-smooth attention distribution α over the sequence. For continuous signals, temporal attention regularisation is normally used to encourage continuity [32]. Although our sequence may not be strictly continuous (due to feature engineering or missing data), there may exist a certain level of consistency between adjacent entries, and thus we used the following regularisation term:

Ω(α) = T · Σ_{t=2}^{T} (α_t − α_{t−1})^2,   (2)

where α_t ∈ α. The loss function can then be updated to L = L_MSE + λΩ(α), where λ is the regularisation parameter. It is worth noting that, with T in Eq. (2), Ω(α) tends to penalise more heavily with a larger T (i.e., with less or no missing data) to maintain global consistency (i.e., continuity).

EXPERIMENTS

The collected data suffered from quality issues for various reasons, resulting in a substantially reduced data size after preprocessing. Data used in the experiments were from 9 subjects, with 198 sequences. The demographic information of these subjects can be found in Table 4 (in the Appendix). To evaluate the prediction models, unless stated otherwise we performed 5-fold cross-validation (5-fold CV). MAE/RMSE were used as the evaluation metrics. The implementation details of our methods can be found at: https://github.com/baiyang4/Sjogrens_questionnaire

Interpretable Models. Based on the high-level features (e.g., x_high in Sec. 2.3) and our Feature Selection (FS) method, linear regression was used for prediction. In Table 2, we report results based on different settings, from which we can observe that the performance deteriorates significantly without FS (for all modalities). In terms of sensing modality, we see that Actigraphy has the worst results.
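A minimal PyTorch sketch of the LSTM-CSA attention and consistency penalty described in the previous subsection is given below. The layer sizes, the 1/sqrt(D_a) scaling, and the summation form of the penalty mirror Eqs. (1)-(2) as written above, so they should be read as assumptions rather than the authors' exact released implementation.

```python
import torch
import torch.nn as nn

class LSTMCSA(nn.Module):
    """LSTM with (consistency) self-attention for fatigue-score regression (sketch)."""
    def __init__(self, d_in, d_hid=128, d_att=128):
        super().__init__()
        self.lstm = nn.LSTM(d_in, d_hid, batch_first=True)
        self.Wq = nn.Linear(d_hid, d_att, bias=False)
        self.Wk = nn.Linear(d_hid, d_att, bias=False)
        self.Wv = nn.Linear(d_hid, d_att, bias=False)
        self.out = nn.Linear(d_att, 1)

    def forward(self, x):                        # x: (batch, T, d_in)
        h, _ = self.lstm(x)                      # hidden states: (batch, T, d_hid)
        q_T = self.Wq(h[:, -1])                  # query built from the last hidden state
        K, V = self.Wk(h), self.Wv(h)            # keys and values: (batch, T, d_att)
        scores = K.matmul(q_T.unsqueeze(-1)).squeeze(-1) / K.shape[-1] ** 0.5
        alpha = torch.softmax(scores, dim=-1)    # attention over the T windows
        z = (alpha.unsqueeze(-1) * V).sum(dim=1) # weighted sum of values, z_T = V * alpha
        return self.out(z).squeeze(-1), alpha

def csa_loss(pred, target, alpha, lam=0.1):
    """MSE plus the consistency penalty on adjacent attention weights."""
    T = alpha.shape[1]
    omega = T * ((alpha[:, 1:] - alpha[:, :-1]) ** 2).sum()
    return nn.functional.mse_loss(pred, target) + lam * omega

# Example usage on one segment of T = 60 five-minute windows with
# 38 features per window (30 HRV + 8 Actigraphy), batch size 1.
model = LSTMCSA(d_in=38)
x = torch.randn(1, 60, 38)
pred, alpha = model(x)
loss = csa_loss(pred, torch.tensor([7.0]), alpha)
```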
One possible explanation is inadequate feature engineering: only 8 simple statistical features were extracted from Actigraphy (see Sec. 2.2.2). In contrast, we extracted domain knowledge-driven features from ECG, yielding much better results. One of the advantages of FS is that it ranks the most important features for better interpretation. In Fig. 2a, we show some of the most important features, aggregated over the 5-fold CV (based on ECG+Actigraphy with FS (#Feat. 15)). We can see that the features 1) very low frequency (75th percentile), 2) range NNI (maximum), 3) Mod. Cardiac Symp. IdNx (50th percentile), and 4) high frequency (std) contributed the most to the overall performance, which might give some insights to clinicians and practitioners. More details of these features can be found in Table 1.

Deep Learning Models. For all LSTM models, we used one hidden layer with D_l = 128 and set the batch size to 1; for LSTM-SA/LSTM-CSA, we set D_a = 128. The main results of the LSTM models are reported in Table 3, from which we can see that LSTM-CSA achieved better results than the other models irrespective of modality. In contrast to the linear models, LSTM-CSA can exploit useful information from Actigraphy, making the combined modality the best-performing one (better than ECG only). Based on the combined modality, we also present the scatter graphs of the linear model and LSTM-CSA in Fig. 3, and we can see that LSTM-CSA has a larger correlation coefficient (between ground truth and prediction). We also visualised the attentions of LSTM-SA and LSTM-CSA in Fig. 5 in the Appendix for a sequence with some missing data, and we can see the consistent nature (i.e., smoothness) of LSTM-CSA's attention, in contrast to the non-smooth attention of LSTM-SA.

Limitations and Discussion

Although we saw promising results of LSTM-CSA on the combined modality, they were based on 5-fold CV. For practical applications, we also performed leave-one-subject-out cross-validation (LOSO-CV), and the results are shown in Fig. 4, from which we can see a significant performance drop (e.g., correlation coefficient from 0.81 to 0.63). One of the major reasons is the small number of subjects (Fig. 4), which makes it hard for the model trained on the other subjects to generalise to the held-out subject. Such an overfitting problem can be reduced by using a larger dataset. In this work, although the deep learning solution provides better results, it is less transparent than the interpretable solution, from which we can list the key human-understandable features. However, the interpretable solution relies heavily on feature engineering. For example, we used the simplest features from Actigraphy, and these features played a negative role, making the combined effect of ECG+Actigraphy worse than ECG only. On the other hand, deep learning is a data-driven approach and it can learn useful information from Actigraphy, making the combined modality the best-performing one. Nevertheless, both solutions have their own advantages and they can be used together. For example, deep learning can be used as a prototype tool to guide feature engineering for the interpretable solution, which will be explored in the future.

CONCLUSION AND FUTURE WORK

To develop an automated fatigue assessment system, in this work we introduced a pipeline from data collection, data preprocessing, and feature engineering to an interpretable solution and a deep learning solution. Both solutions were evaluated on the collected dataset, and some promising results were achieved.
This work is a pilot study of a larger project in which 120 subjects will be recruited, based on which the proposed solutions' generalisation capability will be further evaluated. Next, we will also explore 1) how to use deep learning as a prototype tool to guide feature engineering for the interpretable solution, and 2) how to further improve the generalisation capability of the deep learning solution (e.g., LSTM-CSA), for example by using an LSTM ensemble [6].
2020-08-10T01:00:28.618Z
2020-08-06T00:00:00.000
{ "year": 2020, "sha1": "1bb8c3304ceae7f9f52e5853573bdc47b622aa6d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2008.02871", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "67e5b5db12240cef05d09f70da8f9110bf3f87a6", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
263157589
pes2o/s2orc
v3-fos-license
A case report and literature review on a rare subtype of triple-negative breast cancer in children

Background: Triple-negative breast cancer (TNBC) is a type of breast tumor with a poor prognosis because it lacks or expresses low levels of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER-2). TNBC is more common in middle-aged and older women, and cases of TNBC in children are rarely reported. This is the only case of childhood secretory breast cancer (SBC) in our hospital in more than 70 years, and the disease is extremely rare internationally. We analyzed and studied the disease and TNBC from both clinical and pathological aspects and found that SBC is very different from TNBC.

Case presentation: We report a case of SBC, a subtype of TNBC, in an 8-year-old girl from our institution. The child presented with a single mass in the left breast only, with no skin rupture and no enlargement of the surrounding lymph nodes. The child underwent two surgeries and was followed up for one year with a good prognosis.

Conclusions: SBC is the most prevalent of the multiple pathological types of pediatric breast cancer. Almost all pediatric SBC patients are characterized by the ETV6-NTRK3 fusion gene, and SBC has a good prognosis, with a 10-year survival rate of more than 90%, when compared with other TNBC subtypes. For this patient, we performed local mass resection, and the postoperative pathological diagnosis was SBC (a subtype of BL-TNBC). This TNBC case had a good prognosis and differed from basal-like TNBC in several aspects, including clinical presentation, treatment, and prognosis. It is necessary to distinguish SBC from BL-type TNBC, enhance understanding of the disease, and individualize the treatment plan, so as to avoid medical errors.
Background

As a common malignant tumor in humans, breast cancer has jumped to the top of malignant tumors in terms of prevalence, and it is the leading cause of female malignant tumors [1]. Triple-negative breast cancer (TNBC) is a subtype of breast cancer which accounts for approximately 15-25% of breast cancer cases and is characterized by the absence or reduction of estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER-2) [2]. The criteria for ER/PR-negative are met if there is less than 1% expression of those receptors within the tumor. A tumor is considered HER2-negative if the immunohistochemistry result is 0, 1+ or 2+, and FISH negative. TNBC has a high recurrence rate, a high metastatic potential, and a short overall survival, with some reports indicating that patients with recurrent metastatic TNBC have an overall survival of only 13-18 months [3]. Secretory breast cancer (SBC) is more common in middle-aged women [4,5]. SBC accounts for less than 0.15% of breast cancers, with a higher incidence in adults than in children and in females than in males, with previous literature showing a ratio of 6:1 and a recent National Cancer Data Base (NCDB) review demonstrating a ratio of 31:1 [6]. The pathogenesis is closely related to genomic instability and mutations [7]. A study by Min Ji Song [8] analyzed the exome sequencing data of three groups of SBCs, totaling 1105 somatic cells, and identified 1,046 somatic mutations. Among the 44 genes detected, 11 mutated genes of SBC (KIAA2012, SUCLG2, SEMA3G, KLHL18, FOXB2, KMT2D, DHX38, CERS4, CD209, AP3D1, HELZ2) differed from those of TNBC. The prognosis for children with SBC is good. There is no consensus on the optimal treatment strategy for SBC. Therefore, it makes sense to report each case to provide treatment experience.
Case presentation

An 8-year-old girl presented to Red Flag Hospital with progressive enlargement of a right breast mass for 4 months. She had undeveloped breasts, pubic/axillary hair development at Tanner stage 1, no menstrual flow, no birthmarks of any kind, no areas of mucosal pigmentation, and no findings suggestive of a cancer susceptibility syndrome. There was a hard mass about 1.5 cm in diameter under the right nipple, with clear borders and good mobility. Preoperative ultrasonography showed a hypoechoic lesion near the right nipple, measuring about 1.4 × 0.8 cm, with clear borders, regular morphology, and an obvious blood flow signal. The patient has no family history of breast cancer or other types of cancers (ovarian cancer, colon cancer, etc.). The presence of gastrointestinal polyps could not be determined because no relevant tests were performed. The patient underwent local excision of the mass. The tumor was white to gray-red tissue measuring 1.6 × 1.4 × 0.7 cm, with a solid gray-white cut surface. Histological findings showed a dominant papillary structure, as well as tubular pink or pale pink cells with secretory vesicles and interstitial powdery secretions. Immunohistochemical results showed ER (+) in a few cells, PR (-), HER-2 (-), and Ki-67 (+) in less than 10% of cells. A diagnosis of SBC was made. The child underwent a second surgery with an extended excision scope and an anterior lymph node biopsy. No tumor cells were found in the residual tumor cavity around the incision margin or in the anterior lymph node biopsy. The child did not undergo next-generation multigene breast cancer testing but did undergo fluorescence in situ hybridization (FISH) for the ETV6 gene, which revealed the presence of the ETV6-NTRK3 fusion gene. The patient presented with features of a benign breast tumor. The tumor was small in size (less than 2.0 cm), with clear borders, no adhesion to surrounding tissues, no nipple discharge, and no axillary lymph node metastasis. There was no obvious adhesion or malignant tendency of the tumor during the operation. Based on the therapeutic precedents documented in pertinent scholarly literature, it is not necessary for the patient to undergo radiotherapy, chemotherapy, immunotherapy, or targeted therapy subsequent to the surgical procedure. (The tumor and postoperative pathological images are shown in Fig. 1.)
Discussion In recent years, TNBC has become a hot spot for clinical research in recent years due to an increase in incidence and a younger age group [9].Studies have shown that deletion or mutation of breast cancer1/2(BRCA1/2) is closely associated with the development of TNBC.The term "homologous recombination defect" (HRD) refers to a situation in which the homologous recombination (HR) process is unavailable or impaired [10,11].In this case, DNA repair is more prone to errors, leading to genomic instability.Breast cancer susceptibility gene 1/2 (BRCA1, 2) is an oncogene that maintains genomic stability by playing a key role in DNA repair, cell cycle arrest, and transcriptional control.BRCA1 and BRCA2 play a role in DNA double-strand break (DSB) repair through the HR process, with BRCA1 guiding repair toward error-free HR [12].BRCA1/ 2 losses of function lead to HRD [13].Ki67 is significantly associated with the proliferation level of tumor cells and can predict the prognosis of TNBC [14].Recent studies have shown that programmed death-ligand 1 (PD-L1) and purine/pyrimidine endonuclease 1 (APE1) play a role in the development of TNBC.It is hypothesized that APE regulates PD-L1 expression, promotes greater metastatic and invasive capacity of tumor cells and further participates in tumor immune escape [15].Therefore, the emergence of immune checkpoint therapies may be a future research direction.SBC has a characteristic t (12:15) homozygous transition that produces an ETV6-NTRK3 gene fusion.The resulting chimeric tyrosine kinase produced activates the Ras-Mek1 and PI3K-Akt pathways, which are essential for the growth and survival of mammary cells, and have potent transforming activity on fibroblasts and mammary duct epithelial cells.This ultimately leads to the emergence of SBC [16].The current general consensus for SBC is to perform FISH fusion gene testing, so BRCA, PDL-1, and APE1 testing are not performed in patients.TNBC is widely thought to be highly malignant and has a poor prognosis, but a large body of literature shows that TNBC is heterogeneous and varies greatly in terms of pathological features, biological behavior and gene expression profiles [2,[17][18][19].Therefore, it is important to understand the classification of TNBC in clinical practice.TNBC can be classified into several subtypes based on their biological characteristics.Chen Lin [20] found that the classification of luminal androgen receptor (LAR), basal-like (BL), mesenchymal (MES), and immunomodulatory/basal-like immune activation (IM/ BLIA) was repeatedly mentioned in several TNBC classifications and is a relatively recognized one.JoensuuH [21] classified TNBC into BL and non-BL types, with BL type accounting for 80%, having a high degree of malignancy and a poor prognosis compared to the non-BL type.BL-TNBC is further divided into BL1 and BL2 types, which are characterized by overexpression of cell cycle-related genes and DNA damage-response genes [22] and exhibit a strong proliferative capacity through these features.BL-1 has a higher value-added rate than other subtypes of TNBC [23], indicating active tumor cell proliferation and poor prognosis.Breast cancer in children [24] can be diagnosed by clinical examination, radiological/ imaging examinations (including mammography, magnetic resonance imaging (MRI), ultrasonography, etc.), and immune histopathological examination.TNBC is diagnosed by using intraoperative rapid frozen section pathology and postoperative pathological findings.The primary treatment for 
TNBC is surgery combined with neoadjuvant chemotherapy [25].As different histological subtypes of TNBC vary greatly in clinical presentation, treatment, and prognosis; histological classification can help scientists to develop the best-individualized treatment approach.Platinum-based drugs and PARP inhibitors [16] are effective in the treatment of TNBC of BL-1 type. According to the worldwide classification of intrinsic subtypes, SBC is classified as basal-like TNBC [26].However, the profile of SBC is very different from that of TNBC.SBC accounts [27] for less than 0.15% of breast cancers but is more prevalent in children with a family history of cancer.In children, SBC is usually solitary, firm, well-defined, less than 2.0 cm in diameter, mobile, slow-growing, and mostly with no lymph node metastasis.When SBC is combined with other breast diseases (intraductal papillary carcinoma or fibroadenoma), the mass typically shows signs of malignant breast cancer, such as spillage, poorly defined borders, poor mobility, [28] and rapid mass growth.On ultrasonography, they are mostly well-defined cystic masses or solid nodules with isoechoic or hypoechoic.The microscopic histologic pattern of SBC contains four main morphologies: microcystic, solid lamellar, tubular and papillary structures, with most cases containing all four morphologies in varying proportions, while a few cases are completely dominated by a single histologic pattern.Early diagnosis of SBC in children is difficult.Conventional imaging is shy of diagnosis.However, recent studies suggest that elastography scoring examinations [29] may offer a new method for the early diagnosis of SBC in children.Elastography is an advanced modality of ultrasound imaging that leverages variations in tissue stiffness for diagnostic purposes, effectively differentiating between soft and hard tissues.This technique is particularly instrumental in the detection of malignant lesions in organs such as the thyroid and breast.This technique is particularly instrumental in the detection of malignant lesions in organs such as the thyroid and breast.The elastography scoring standard includes a total of five points: the lesion is combined or not with green, the lesion and its surroundings are blue, 5 points, a little green in the lesion or blue overall, 4 points, blue and green in the lesion.The scale is equal to three points, with green around the lesion and blue inside, as two points, and the lesion is basically or completely green as one point.Score 4 or higher indicates malignancy, and Score 3 indicates benign.The surgical approach to SBC in children is under discussion.For SBC children with tumors less than 2.0 cm and good clinical performance, local mass excision and breast-conserving surgery can be performed, followed by sentinel lymph node biopsy [30].SBC has good prognosis, with a 10-year survival rate higher than 90% [31].For children with tumor diameter greater than 2.0 cm, rupture, nipple discharge and lymph node metastasis, modified radical mastectomy or radical mastectomy for breast cancer should be performed.Tyrosinase and RAS inhibitors [32] can be used as therapeutic agents in children who have a significant malignant tendency or even metastasis.This is the first case of SBC in a child in our hospital.The attending surgeon performed a local tumor resection by making a minimally invasive incision from the outside of the breast.During the operation, the tumor was found to adhere slightly to the breast but did not infiltrate the surrounding 
tissue. Unfortunately, no intraoperative rapid pathological examination was performed. The lesson for us is that doctors must always be serious and responsible. Before starting the operation, all the conditions that may occur during the operation should be evaluated and appropriate treatment solutions should be provided for patients. Children with tumors at special sites should be treated cautiously and the possibility of malignant results should be considered during the operation. A rapid pathological examination is required during the operation. If the results suggest that it is a malignant tumor, the scope of resection and lymph node dissection should be expanded to ensure the child's safety and to avoid physical, psychological, and financial harm from a second operation.

Conclusions

Although SBC is classified as a BL-1 type of TNBC, it differs significantly from BL-TNBC in terms of pathogenesis, clinical presentation, treatment, and prognosis. Therefore, it is extremely important to exclude SBC from BL-TNBC and to select the optimal individualized treatment plan. Some scholars [16,33] believe that distinguishing SBC from BL-TNBC is necessary to avoid misdiagnosis and mistreatment. Childhood breast cancer is mainly associated with genetic mutations. Among the numerous subtypes of breast cancer, secretory breast carcinoma (SBC) is the most prevalent in children, despite its relative rarity. Notably, SBC typically presents an excellent prognosis [34]. The case of our current patient aligns with these findings reported in the literature. We concur with these observations. The purpose of this study is to supplement the SBC sample database to inform epidemiological studies of the disease and to improve pediatric surgeons' management of pediatric breast tumor disease. This is the first report to question the classification of TNBC from a clinical and pathological perspective. Despite these differences, SBC has so far been classified as a BL-1 subtype of TNBC; whether this is due to TNBC heterogeneity or other factors remains to be further studied and confirmed.

Fig. 1 A: Tumour. B and C: Postoperative pathological results of the tumour under the microscope (secretory vacuoles can be seen).
2023-09-29T14:04:34.810Z
2023-09-29T00:00:00.000
{ "year": 2023, "sha1": "06921ec6b95ed8ee9a8e97be6248978cc4ef63a8", "oa_license": "CCBY", "oa_url": "https://bmcpediatr.biomedcentral.com/counter/pdf/10.1186/s12887-023-04286-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "521f0d1608c8ba05423dd1c68227d86bffcfafda", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13622331
pes2o/s2orc
v3-fos-license
Therapeutic Roles of Heme Oxygenase-1 in Metabolic Diseases: Curcumin and Resveratrol Analogues as Possible Inducers of Heme Oxygenase-1

Metabolic diseases, such as insulin resistance, type II diabetes, and obesity, are associated with a low-grade chronic inflammation (inflammatory stress), oxidative stress, and endoplasmic reticulum (ER) stress. Because the integration of these stresses is critical to the pathogenesis of metabolic diseases, agents and cellular molecules that can modulate these stress responses are emerging as potential targets for intervention and treatment of metabolic diseases. It has been recognized that heme oxygenase-1 (HO-1) plays an important role in cellular protection. Because HO-1 can reduce inflammatory stress, oxidative stress, and ER stress, in part by exerting antioxidant, anti-inflammatory, and antiapoptotic effects, HO-1 has been suggested to play important roles in the pathogenesis of metabolic diseases. In the present review, we will explore our current understanding of the protective mechanisms of HO-1 in metabolic diseases and present some emerging therapeutic options for HO-1 expression in treating metabolic diseases, together with the therapeutic potential of curcumin and resveratrol analogues, which have the ability to induce HO-1 expression.

Introduction

The clustering in an individual of multiple metabolic abnormalities associated with metabolic diseases, including insulin resistance (IR), type II diabetes (T2D), and obesity, is defined as the metabolic syndrome (MS) [1]. It is well accepted that MS increases the risk of developing cardiovascular disease (CVD) [2]. Although there are therapeutic treatments to combat some metabolic diseases, especially T2D and CVD, both the intake of a proper diet and the maintenance of a healthy lifestyle are considered the best preventive measures [3]. However, for patients with MS, it is difficult to follow a diet/exercise regime that would improve their symptoms. Therefore, the identification of agents that may deal with the more serious aspects of MS is an important medical field for research. Numerous experimental studies have confirmed the important role of naturally occurring phytochemicals in the prevention and treatment of metabolic diseases [4]. Curcumin (Cur), resveratrol (Res), and their related derivatives are the most studied compounds in these fields so far [5]; therefore, we will discuss the therapeutic usage of Cur and Res in the context of metabolic diseases, together with the underlying mechanisms of action. Recent studies have suggested that almost all metabolic diseases are associated with a low-grade chronic inflammation (hereafter referred to as inflammatory stress), oxidative stress, and endoplasmic reticulum (ER) stress, and the integration of these stresses is critical to the pathogenesis of metabolic diseases [6]. Moreover, these stresses may interact with each other and amplify during the pathogenesis of metabolic diseases. Thus, agents and cellular molecules that can reduce these stresses are emerging as potential targets for intervention and treatment of metabolic diseases. Heme oxygenase-1 (HO-1), a ubiquitous inducible cellular stress protein, serves a major metabolic function as the rate-limiting step in the oxidative catabolism of heme, leading to formation of equimolar amounts of biliverdin (BV), free iron, and carbon monoxide (CO) [7].
BV formed in this reaction is rapidly converted to the strong antioxidant bilirubin (BR), which is then converted back into BV through the actions of reactive oxygen species (ROS) [8]. This cycle allows for the neutralization of ROS, which is considered as one of the antioxidant functions of HO-1 [8]. Besides heme degradation, HO-1 has been shown to exert other biological activities that play important roles in cellular protection [9]. The protective biological activities conferred by HO-1 include antioxidant, anti-inflammatory, and antiapoptotic effects [9]. By virtue of these protective activities, HO-1 has been suggested to play important roles in pathogenesis of metabolic diseases [10]. In this review, we will explore our current understanding of the protective mechanisms of HO-1 in metabolic diseases and present some emerging therapeutic options for HO-1 expression in treating metabolic diseases, together with the therapeutic potential of Cur and Res. Metabolic Stress A growing body of evidence suggests an early and central role of increased systemic oxidative stress as causal pathways linking with metabolic diseases [11]. In addition to oxidative stress, inflammatory stress and ER stress are also present in the patients with metabolic diseases [6]. Although a role for individual processes, such as oxidative stress, ER stress, and inflammatory stress, in metabolic diseases has been recognized in scattered reports, how these processes are interrelated in bringing about metabolic diseases has not been clear. However, these processes are ultimately integrated in the pathogenesis of metabolic diseases, which is referred as to metabolic stress. Oxidative Stress in Metabolic Diseases. The ROS of which production is an unavoidable consequence of aerobic metabolism in animal cells consist primarily of the various oxygen free radicals, including superoxide anion radical (O 2 •− ) and hydroxyl radical (HO • ), as well as the potent oxidizing molecules, including hydrogen peroxide (H 2 O 2 ). At their high concentrations, ROS can react with many different macromolecules, thereby causing damage to, for example, DNA, proteins, and lipids [12]. ROS, therefore, play a major role in many disease processes. Despite their destructive activity, low/moderate levels of ROS are indispensable in several biochemical processes, including intracellular messaging and defense against microorganisms [13]. Thus, it is necessary for the cells to control the levels of ROS tightly to avoid any oxidative injury and not to eliminate them completely. This is supported by the fact that levels of ROS are tightly regulated by cellular antioxidant defense systems including small antioxidant molecules, such as glutathione (GSH), and ROS-scavenging enzymes, such as superoxide dismutase (SOD), catalase, and glutathione peroxidase (GPX) [14]. In this regard, oxidative stress has been shown to describe a condition in which these cellular antioxidant defense mechanisms are insufficient to inactivate ROS, or excessive ROS are produced, or both. Oxidative stress has been implicated in the development of IR and subsequent T2D [15]. One of the main cellular organelles involved in the production and regulation of ROS levels is the mitochondria, where superoxide is generated through electron transport chain (ETC) and converted to hydrogen peroxide either spontaneously or by SOD. 
In a state of chronic nutrient/energy overload, the flux of nutrients through the mitochondrial ETC can be increased, thereby enhancing ROS production and eventually inducing oxidative stress. ROS have been hypothesized to inhibit the cell signaling of the insulin receptor by blocking the pathway between insulinreceptor substrate 1 (IRS-1) and phosphatidylinositol-4,5bisphosphate 3-kinase (PI3K), thereby inducing IR [16]. This hypothesis has been supported by the findings demonstrating that IR animal models are characterized by persistently elevated ROS levels [17]. In these animal models, pharmacological or genetic strategies designed to decrease ROS levels at least partially prevent IR status [17]. However, ROS are also necessary to maintain normal insulin sensitivity, which was underscored by the findings that mice lacking the antioxidant GPX showed higher ROS levels and enhanced insulin sensitivity [17]. Interestingly, treatment of mice lacking GPX with an antioxidant actually made the glucose metabolism worse. It is not clear why in certain cellular and animal models IR is associated with increased ROS levels, whereas in other models high ROS levels are associated with improved insulin sensitivity. Inflammatory Stress in Metabolic Diseases. The metabolic diseases, including obesity and IR, are associated with a low-grade inflammation characterized by overexpression of proinflammatory cytokines produced by the expanding adipose tissue, activated macrophages, and other immune cells [18]. Inflammatory mediators, such as tumor necrosis factor-(TNF-), interleukin-1 (IL-1 ), IL-6, leptin, resistin, monocyte chemotactic protein 1, plasminogen activator inhibitor-1, C-reactive protein (CRP), fibrinogen, angiotensin, visfatin, retinol binding protein-4, and adiponectin, can affect the metabolic functions of several organs, including the liver, heart, muscle, and brain [18], and some of them, especially TNF-and IL-1 , can impair insulin signaling in insulin-responsive organs, which, as such a result, causes systemic IR [18]. In fact, the elevated levels of proinflammatory cytokines are detected in patients with the IR-associated clinical states and in experimental mouse models of obesity [19][20][21]. The exact mechanism by which proinflammatory cytokines can induce local and systemic IR is as yet unclear. However, several potential mechanisms for metabolic effects of TNF-and IL-1 have been described. TNF-stimulates serine kinases, such as c-jun N-terminal kinase (JNK) and p38 mitogen-activated protein kinase (MAPK) [22]. Activation of JNK and/or p38 MAPK by TNFresults in serine phosphorylation of IRS-1 and IRS-2, which in turn reduces the downstream insulin signaling [23]. In adipocytes, TNF-induces IR by reducing the expression of glucose transporter 4 (GLUT4) and peroxisome proliferatoractivated receptor- [24]. Similarly, prolonged IL-1 treatment reduces the insulin-induced glucose uptake in 3T3 adipocytes, which is associated with a slightly decreased expression of GLUT4 and marked inhibition of GLUT4 translocation to the plasma membrane in response to insulin [25]. Although the mechanisms causing inflammation during obesity are under investigation, it is now recognized that the immune sensors, including Toll-like receptors (TLRs), Nodlike receptors (NLRs), inflammasome, and other pathogensensing kinases, participate in the development of obesityassociated inflammation. 
Of these receptors, TLR4 has been shown to be activated by saturated fatty acids (FAs) to generate inflammatory signals in macrophages, endothelial cells, and adipocytes, which ultimately results in the production of proinflammatory cytokines, such as TNFand IL-1 [26]. The bacterial endotoxin lipopolysaccharide (LPS) is a classical ligand for TLR4 in most cell types. The majority of the biological activity of LPS is contained within a moiety that is acylated with saturated fatty acids, and removal of these fatty acids results in complete loss of its ability to activate TLR4, suggesting that there is a degree of similarity in structure among LPS and saturated FAs. The NLR family can also sense obesity-induced signals in multiple contexts. In macrophages, NLR activation stimulates the cryptopyrin/NLRP3 inflammasome to induce IL-1 and IL-18 production via caspase-1 activation [27]. Similar to their activation of TLR4, saturated FAs, such as palmitic acid (PA), have been linked to inflammasome activation in macrophages [28]. PA is a major component of dietary saturated fat, representing up to 20% of the total serum FAs. PA has been shown to be present in a high percentage of atherosclerotic lesions [29]. It has been demonstrated that in cultured endothelial cells (ECs), downregulation of IRS-1 signaling by PA is dependent on each of the key proteins in the TLR4 signaling pathway: TLR4, myeloid differentiation factor-88 (MyD88), interleukin-1 receptor-associated kinase (IRAK), inhibitory B-kinase (IKK ), and nuclear factor-B (NF-B) [30,31]. PA activates TLR4, which in turn engages MyD88 and IRAK, subsequently activating IKK and NF-B. Activation of NF-B inhibits IRS-1 tyrosine phosphorylation via an as yet unidentified mechanism. ER Stress in Metabolic Diseases. The ER, a membrane compartment located near the nucleus, is a highly dynamic organelle responsible for protein folding, maturation, quality control, and trafficking. This organelle also has an important role in Ca 2+ storage and signaling. When the ER becomes stressed due to the accumulation of newly synthesized unfolded proteins, this condition has been referred to as an ER stress and the unfolded protein response (UPR) is activated to increase protein folding capacity and to decrease unfolded protein [32]. If these mechanisms of adaptation are insufficient to recover ER homeostasis, the UPR will induce cell death programs to eliminate the stressed cells and might subsequently contribute to disease states, such as diabetes and its complications. In animal cells, the UPR is mediated by at least three transmembrane proteins: inositol-requiring enzyme 1 (IRE1), protein-kinase-RNA-like ER kinase (PERK), and activating transcription factor 6 (ATF6) [33]. Under unstressed conditions, these transmembrane proteins are maintained in an inactive state by binding to the major ER chaperone, or immunoglobulin heavy chain binding protein/glucose-regulated protein 78 (BiP/GRP78), at the side of the ER lumen. During ER stress, BiP is displaced to interact with misfolded luminal proteins, resulting in the release of IRE1, PERK, and ATF6 and leading to their activation [34]. The activation of PERK can result in sequential phosphorylation of the subunit of eukaryotic translation initiation factor 2 (eIF2 ), leading to rapid reduction in the initiation of mRNA translation and ultimately reducing the load of new proteins in the ER. 
Phosphorylation of eIF2 by PERK also allows the translation of activating transcription factor 4 (ATF4) that can induce transcription of genes involved in amino acids synthesis and apoptosis, such as CCAAT/enhancer-binding protein homologous protein (CHOP) [32][33][34]. IRE1 activation unmasks its endoribonuclease activity that is responsible for the unconventional splicing of the X box-binding protein 1 (XBP1) mRNA and its translation into the transcription factor XBP1 protein. The XBP1 protein upregulates the transcription of genes encoding ER chaperones, phospholipid biosynthesis, and components of the ER-associated degradation (ERAD) machinery that can dispose misfolded proteins [32]. IRE1 also activates JNK by recruiting the scaffold protein tumor necrosis factor receptor-associated factor 2 as well as the apoptosis signal-regulating kinase and caspase-12 [33]. Once activated, ATF6 translocates from the ER to the Golgi, where it is cleaved by regulated intramembrane proteolysis by site 1 and site 2 proteases. The cytoplasmic part of ATF6, an active transcription factor, transactivates genes encoding ER chaperones, ERAD components, and protein foldases [6,34]. A number of biochemical, physiologic to pathologic stimulus, such as those that cause ER calcium depletion, altered glycosylation, nutrient deprivation, oxidative stress, DNA damage, or energy perturbation/fluctuations, can interrupt the protein folding process and result in ER stress. Interestingly, high metabolic process and obesity, which are induced due to high nutritional intake, also result in ER stress which suppresses insulin signaling [6]. A study has demonstrated protection against obesity-induced T2D in mice by overexpression of ER chaperones, while knockdown of chaperones was diabetogenic [35]. Furthermore, animal treatment with chemical chaperones that alleviated obesityinduced ER stress led to improvement in insulin sensitivity [35]. The mechanism by which ER stress can induce IR is not clear. One possibility is that UPR activation may stimulate stress kinases (e.g., JNK) that interfere with insulin signaling, thereby promoting IR. Also, transcription factors activated by UPR may modify transcription of key enzymes involved in gluconeogenesis or lipogenesis, thereby participating to the abnormal activation of these pathways in IR states. Finally, it is also possible that ER stress may lead to an increase in oxidative stress and/or inflammatory stress that in turn may contribute to IR. The Role of HO-1 in Metabolic Diseases Although HO-1 is known initially for its role in heme catabolism, HO-1 has become increasingly recognized to exert a major role in cellular defense mechanisms [9]. The protective biological activities conferred by HO-1 include its antioxidant, anti-inflammatory, and antiapoptotic properties [9]. These protective effects of HO-1 are dependent on the generation of its enzymatic reaction products (i.e., CO, BV/BR). There is ample evidence that HO-1, in particular, can protect against metabolic diseases (Figure 1) [36][37][38][39][40][41][42]. HO-1 Expression. Targeted modulation of HO-1 expression for potential therapeutic interventions requires detailed knowledge of the mechanisms that can regulate HO-1 gene expression. The nuclear factor-erythroid 2-related factor 2 (Nrf2) is recognized as a major contributor to the upregulation of multiple antioxidant defense system in response to exogenous and endogenous stimuli or naturally occurring phytochemicals. 
Nrf2 belongs to the cap'n' collar family of basic region-leucine zipper-type transcription factors [7,9,32]. Nrf2 binds to the antioxidant-responsive element (ARE) or the electrophile-responsive element [7]. ARE has been detected in the promoter or upstream promoter regions of the genes encoding phase II antioxidant enzymes, including glutathione S-transferase subunits, glutamate-cysteine ligase catalytic and glutamate-cysteine ligase modifier subunits, the thioredoxin and peroxiredoxin families, and NAD(P)H:quinone oxidoreductase [7,9,32]. HO-1 is upregulated via activation of the Nrf2-ARE pathway. Several phytochemicals (Res, Cur, flavonoids, carnosol, etc.) and endogenous mediators can upregulate HO-1 expression via Nrf2-ARE pathway [9]. Nrf2 activation is mainly controlled by the cytosolic inhibitor Kelch-like enoyl-CoA-hydrataseassociated protein1 (Keap1) [7,9]. Under normal conditions, Nrf2 is anchored in the cytoplasm through binding to Keap1, which in turn facilitates the ubiquitination and subsequent proteolysis of Nrf2. Such sequestration and degradation of Nrf2 in the cytoplasm are mechanisms for the repressive effects of Keap1 on Nrf2. Keap1 contains two critical cysteine residues which are a second group of cysteines important for stress sensing. Disruption of the Nrf2-Keap1 complex can result from modification of critical cysteines of Keap1. Numerous stimuli cause disruption of the Nrf2-Keap1 complex via modulation of its critical cysteines, which permits subsequent nuclear translocation of free Nrf2. Thus, the Keap1/Nrf2 system appears to be a central sensor for a broad spectrum of unfavorable cellular conditions. HO-1 against Oxidative Stress. The antioxidant effect of HO-1 has been highlighted in HO-1-knockout mice. As compared with wild-type mice, the liver from HO-1knockout mice shows higher levels of oxidized proteins and lipid peroxidation [43]. Moreover, peritoneal macrophages from HO-1-knockout mice, as compared with wild-type controls, exhibit increased ROS [44]. Similarly, cells from the human case of HO-1 deficiency showed increased sensitivity to oxidative injury [45]. Upregulation of HO-1 expression protects against oxidative stress-induced cell death [46,47]. Thus, HO-1 expression plays a role to counteract oxidative stress. The specific mechanisms by which HO-1 can mediate antioxidant effect are not clear, but BV and BR, a byproduct generated during the heme catabolism, have been suggested as potential antioxidants. In fact, addition of BR to the culture medium was reported to markedly reduce the cytotoxicity produced by oxidants [8,9]. Similarly, HO-1expression by the HO-1 inducer hemin increased the resistance against oxidative cell injury; notably, this protective effect occurred only in cells that were actively producing BR [48]. It is important to note that upregulation of HO-1 is often associated with increased ferritin [49], which sequesters redox-active iron, a toxic byproduct of heme degradation [9]. Considering that HO-1, as noted above, has an ability to reduce oxidative stress, it is not surprising that HO-1 appears to protect from the development of metabolic diseases, such as the diabetes that is consistently associated with increased oxidative stress [10]. HO-1 against Inflammatory Stress. The anti-inflammatory effect of HO-1 has been also highlighted in HO-1knockout mice. As compared with wild-type mice, HO-1knockout mice exhibited hallmarks of a progressive chronic inflammatory state [43]. 
Peritoneal macrophages from HO-1-knockout mice, as compared with wild-type mice, exhibited increased proinflammatory cytokines [44]. Similarly, a case of human HO-1 deficiency also exhibited hallmarks of a proinflammatory state [45]. Various naturally occurring phytochemicals, which are a group of antioxidant compounds and are currently investigated for their antiinflammatory and anticancer activities, have been shown to provide antiinflammatory protection via HO-1 expression [50]. The specific mechanisms by which HO-1 can mediate anti-inflammatory effects are not clear, but CO has been suggested as a potential mediator. Studies have shown that administration of CO inhibited the production of LPSinduced proinflammatory cytokines, such as TNF-and IL-1 [51], and increased LPS-induced expression of the antiinflammatory cytokine IL-10 [52]. Several possible mechanisms have been postulated to explain the anti-inflammatory action of CO. CO modulated MAPK pathways, including p38 MAPK, ERK, and JNK pathways [9]. CO causes a general downregulation of proinflammatory cytokine production through p38 MAPK-dependent pathways and NF-B inactivation [51]. Given that obesity, IR, T2D, and many related cardiometabolic complications share a metabolic milieu characterized by elevated inflammatory and oxidative insults [53], HO-1 expression would suppress these insults by exerting anti-inflammatory and antioxidant effects. Figure 1: Therapeutic targets of HO-1 during pathogenesis of metabolic diseases. Metabolic diseases, such as CVD, T2D, and obesity, frequently arise from defects among coordinated actions of multiple tissues. Cells in a tissue may be exposed to oxidative stress generated mainly by mitochondria, inflammatory stress initiated probably by saturated FA-TLR4 interaction, and ER stress triggered by inflammatory and oxidative stresses, and these stresses, when prolonged, may amplify and integrate with each other. The integration of advanced stresses may cause one or more of metabolic diseases. HO-1 expression may reduce oxidative stress, inflammatory stress, and ER stress, thereby exerting therapeutic actions. HO-1 against ER Stress. Molecules involved in ER stress response have two opposing functions: adaptive or proapoptotic. ER stress-responsive molecules have an adaptive function in cells that are exposed to mild and transient stresses, whereas these molecules have a proapoptotic function in cells exposed to severe and chronic stress. Chronic ER stress triggered by chronic nutrient overload may decrease insulin signaling in cells with its receptor, resulting in IR, and may also induce apoptosis of pancreatic -cells that can produce insulin. Thus, ER stress-responsive molecules may play an important role in insulin biosynthesis and IR. A study has shown that HO-1 expression was induced in response to ER stress-inducing chemicals, such as thapsigargin, homocysteine, and tunicamycin (TM), in smooth muscle cells (SMCs) [54]. Interestingly, exogenous application of CO inhibited apoptosis induced by ER stress-inducing agents in SMCs, which was associated with the downregulated expression of the proapoptotic proteins. In human ECs, HO-1/CO also inhibited ER stress-induced apoptosis via p38 MAPKdependent inhibition of the proapoptotic CHOP expression [33]. These studies suggest that HO-1/CO can confer cytoprotection against apoptotic signals originating from ER stressresponsive molecules. 
In addition to its antiapoptotic effect, CO, as abovementioned, has been shown to downregulate the inflammatory response triggered by ER stress. TM, an ER stress inducer, could induce ER stress in the liver of mice, which was determined by the level of spliced XBP-1 mRNA, an indicator of the ER stress response, and increased the levels of CRP, a marker of inflammation; high levels of CRP are linked to the development of cardiovascular disease and T2D [55]. A water-soluble CO donor reduced the levels of spliced XBP-1 mRNA, along with a remarkable downregulation of CRP mRNA expression in TM-treated mice. The injection of a CO donor also suppressed ER stress-induced CRP expression in the serum of TM-treated mice, which was similar to the results of the in vitro experiments where CO reduced TM-induced CRP expression in human liver cells. This study suggests that inflammation triggered by ER stress can be suppressed by HO-1/CO, providing evidence that HO-1/CO may potentially be used in therapeutic strategies designed to control inflammatory diseases related to ER stress. Collectively, because ER stress has been associated with a number of metabolic diseases [53], HO-1 expression that, as noted above, can reduce ER stress may have therapeutic potential as a novel treatment of metabolic disorders. HO-1 against Metabolic Stress. A growing body of evidence now exists to support the view that there may be integration of oxidative stress, inflammatory stress, and ER stress in metabolic diseases [53]. Depending on the cell type and physiological process, either oxidative stress, inflammatory stress, or ER stress may be more prominent than, or upstream of, the others. However, these signaling pathways may interact and be ultimately integrated in the pathogenesis of metabolic diseases. Given the integration of oxidative stress, inflammatory stress, and ER stress, targeting only one of them may not be effective in controlling disease pathogenesis. As abovementioned, HO-1 has the potential ability to modulate oxidative stress, inflammatory stress, and ER stress, and this may explain why HO-1 expression could be effective in controlling metabolic diseases (Figure 1). The important changes that are observed after increased expression of HO-1 in obese and diabetic animal models include (1) prevention of weight gain, (2) reduction of inflammatory cytokine levels, (3) restoration of normal insulin sensitivity, and (4) improved vascular reactivity. A sustained increase in HO-1 expression may ameliorate IR and compensatory hyperinsulinemia. It has been demonstrated that systemic induction of HO-1 by treatment with the HO-1 inducer hemin or cobalt protoporphyrin (CoPP) in ob/ob mice or Zucker diabetic rats reduced adiposity and improved insulin sensitivity [56,57]. The protective effect of systemic HO-1 induction was attributed to an increase in adiponectin expression, enhanced AMP kinase (AMPK) activation in both adipocytes and skeletal muscles, and suppression of adipogenesis and inflammatory cytokine expression. It has been also demonstrated that adipocyte-specific overexpression of HO-1 attenuated high-fat- (HF-) mediated adiposity and vascular dysfunction, increased insulin sensitivity, and improved adipocyte function by increasing adiponectin and by decreasing inflammatory cytokines [58]. These effects were reversed by the HO activity inhibitor stannous mesoporphyrin, suggesting that HO-1 plays important roles in mediating such effects. 
HO-1 expression may prevent the development of obesity in metabolic diseases. Administration of CoPP resulted in sustained body weight loss of between 20 and 25% compared with rats receiving vehicle [59]. Actions of systemic CoPP administration on body weight are dependent on HO-1 expression, as evidenced by studies that have demonstrated that coadministration of an HO inhibitor significantly attenuates weight loss in male ob/ob mice [56]. Treatment with CoPP has been demonstrated to lower body weight in leptin receptor-deficient Zucker diabetic fatty rats [60]. While it is clear that induction of HO-1 both centrally and systemically can prevent the development of obesity, the mechanism by which HO-1 expression elicits weight loss is not known. HO-1 Inducers and Their Therapeutic Potential. HO-1 may be protective against stress-associated physiological disorders on the basis of its rapid upregulation under various stress conditions and potent physiological regulating properties. Therefore, HO-1 expression has been suggested to represent a general adaptive response conferring enhanced resistance to various stresses [9]. Some studies have suggested that HO-1 expression is downregulated in abnormal metabolic states and that HO-1 overexpression may ameliorate metabolic diseases [61,62]. For example, compared with Zucker lean rats, Zucker obese rats showed a decrease in HO-1 expression and an increase in the proinflammatory TNF-α and IL-6 levels, and treatment of Zucker obese rats with the HO-1 inducer CoPP increased HO-1 expression, which was associated with a decrease in superoxide, TNF-α and IL-6 levels and an increase in plasma adiponectin, as compared with untreated controls [61]. This treatment also decreased the visceral and subcutaneous fat content and reduced weight gain. A study has explored the vascular cytoprotective effects of HO-1 against hyperglycemia-induced oxidative stress in experimental diabetes and found that vascular extracellular SOD and plasma catalase activities were significantly reduced in diabetic rats compared with nondiabetic rats and that upregulation of HO-1 expression by administration of CoPP caused a large increase in extracellular SOD levels [62]. In addition, aortic ring segments from diabetic rats exhibited a significant reduction in the vascular relaxation response to acetylcholine, which was reversed by CoPP administration [63]. The results of an in vivo study have provided support for the protective effects of HO-1 on islet cells, as administration of CoPP upregulated HO-1 expression in the pancreas, preserved β-cell numbers in the islets, and decreased blood glucose levels to normal in nonobese diabetic mice compared with untreated controls [64]. Accordingly, pharmacological induction of HO-1 expression may be a novel therapeutic intervention for metabolic diseases. Many phytochemicals, which have reported antioxidant and anti-inflammatory properties, could be explored for their potential to reverse oxidative stress, inflammatory stress, and ER stress, which may ultimately be useful for the management of metabolic diseases. Cur Analogues as HO-1 Inducers. Turmeric is prepared by grinding dried rhizomes of Curcuma longa. Traditionally, turmeric has been used as a foodstuff and has been an important component of Indian medicine and traditional Chinese medicine [65]. 
Curcuminoids are the active components responsible for the majority of the medicinal properties of turmeric, and there are 3 naturally occurring curcuminoids: Cur, demethoxycurcumin (DMC), and bisdemethoxycurcumin (BDMC). Tetrahydrocurcumin (THC) is one of the major metabolites of Cur, and dimethoxycurcumin (DiMC) is one of the synthesized Cur derivatives with metabolic stability over Cur. The chemical structures of the Cur analogues are shown in Figure 2. While Cur contains two methoxyl groups at its ortho-positions, DMC contains only one and BDMC contains none. In comparison with Cur, DiMC contains two additional methoxyl groups in place of the two hydroxyl groups, and THC, like Cur, contains two methoxyl groups and two hydroxyl groups but lacks conjugated double bonds in the central seven-carbon chain. Cur was first reported to induce in vitro HO-1 expression through the Nrf2/ARE pathway in renal epithelial cells [66], which was further confirmed in rat vascular SMCs [67]. The α,β-unsaturated carbonyl group appears to be an important structural feature of curcuminoids, because THC, lacking this functional group, was virtually inactive in inducing HO-1 expression [68]. In fact, compounds carrying this reactive group have been reported to induce HO-1 expression through Nrf2 nuclear translocation [66]. It has been noted that the three naturally occurring curcuminoids vary in their ability to induce HO-1 expression in human ECs [69]. The level of HO-1 expression was found to be highest with Cur, followed by DMC and BDMC. Considering that the main difference among the three curcuminoids is the number of methoxyl groups (none for BDMC, one for DMC, and two for Cur), the presence of methoxyl groups in the ortho-position on the aromatic ring has been suggested to be essential to enhance HO-1 expression [69], and this finding may be useful in designing more efficacious HO-1 inducers. Cur is rapidly metabolized in vivo into THC and other reduced forms [70]. Moreover, the HO-1-inducing property of Cur is lost when it is reduced to THC [67,68]. Thus, there would be a need to develop Cur analogues with higher metabolic stability than the original Cur. DiMC, one of several synthetic Cur analogues, was reported to have increased metabolic stability in comparison with Cur [71] and, similar to Cur, induced HO-1 expression via Nrf2 activation in RAW264.7 macrophages [68]. Recently, a novel water-soluble Cur derivative (NCD) has been developed to overcome the low in vivo bioavailability of Cur and to evaluate its therapeutic effects in rats with diabetes mellitus induced by STZ [72]. Administration of oral NCD or pure Cur to diabetic rats significantly decreased blood glucose levels and increased plasma insulin, as compared with the diabetic group, and NCD was more effective than Cur in these respects. Oral NCD did not change the plasma glucose levels in the control group, while it significantly increased the plasma insulin in the control group. Interestingly, treatment of diabetic rats receiving oral NCD with the HO-1 inhibitor zinc protoporphyrin resulted in a significant increase in the plasma glucose level and a significant decrease in insulin levels, when compared with the diabetic group receiving oral NCD only, and this strongly suggests that the antidiabetic effects of NCD might result from its activation of HO-1 and not from its activation of other antioxidant enzymes, such as SOD and catalase. 
Administration of oral NCD or pure Cur significantly increased the HO-1 expression level in the pancreatic tissues of the diabetic group, as compared with controls. Thus, it was suggested that the hypoglycemic action of Cur might be mediated through HO-1 expression. Res Analogues as HO-1 Inducers. Res, which is a plant polyphenol abundant in the skin of red grapes and also found in berries and peanuts, has been reported to induce HO-1 expression via Nrf2/ARE activation in neuronal PC12 cells [73]. It has also been reported that Res acts on adipocytes to decrease the incorporation of lipids and retard the conversion of glucose to lipids [74]. Obese Zucker rats that were fed Res had lower blood pressure, plasma glucose, triacylglycerols, cholesterol, free fatty acids, leptin, and liver weight than animals that did not consume Res [75]. In another study, normal rats given a high-fat diet and Res had lower glucose and better insulin levels than rats fed a high-fat diet without the supplement [76]. In a diabetic rat model, rats that had a diet supplemented with Res had lower serum glucose, increased plasma insulin, normalized levels of carbohydrate metabolism enzymes, and lower hepatic proinflammatory cytokine levels [77]. Although many in vivo studies have demonstrated that HO-1 expression in specific tissues was induced by administration of Res, it is not certain whether Res could exert its beneficial effects directly by inducing HO-1 expression in targeted tissues. A study has examined whether HO-1 expression by Res could increase serum adiponectin levels and ameliorate vascular dysfunction in diabetic animals [78]. Administration of Res or CoPP increased serum levels of adiponectin in STZ-induced diabetic rats. Both Res and CoPP increased HO-1 expression in the aorta, compared to untreated diabetic rats. Interestingly, CO, one of the HO-1 byproducts, which was released by a CO donor, also increased adiponectin levels, indicating a direct involvement of HO-1 activation. The increase in adiponectin was associated with a significant decrease in EC death. Res treatment in hypercholesterolemic rats also enhanced in vivo HO-1 expression, which plays an important role in neovascularization of the hypercholesterolemic myocardium [79]. In another study, treatment with Res decreased the blood glucose level and increased HO-1 expression, when compared to STZ-induced diabetic rats [80]. It is worthwhile to point out that the pharmacokinetic properties of Res are not favorable since Res has poor bioavailability, being rapidly and extensively metabolized and excreted [81], which casts doubt on the physiological relevance of the high concentrations typically used for in vitro experiments. Nevertheless, a number of studies have demonstrated that Res was capable of exerting some of its beneficial effects in vivo, as abovementioned. Thus, it is most likely that one of the Res metabolites may mimic some of the in vitro beneficial effects of Res. Piceatannol (Pic) is a naturally occurring analogue of Res and has also been identified as one of the Res metabolites [81]. Because Pic is generated during phase I metabolism of Res by cytochrome P450 enzymes and represents one of its main phase I metabolites [81], it was hypothesized that Pic may have biological activities similar to those of Res. Pic possesses an additional hydroxyl group relative to the Res structure (Figure 2). Pic, partly due to such a difference in the chemical structure, has a stronger effect on HO-1 expression than Res [82]. 
Interestingly, trans-stilbene with no hydroxyl groups and the semisynthetic trimethoxy-trans-stilbene (TMS) with methoxyl groups in lieu of the hydroxyl groups (Figure 2) have been found to be inactive in inducing HO-1 expression [83]. In this regard, the hydroxyl groups of Res appear to be important for HO-1 expression. TMS, by reinforcing the hydrophobic character of the molecule and so potentially its diffusion through cellular membranes, loses its capacity to induce HO-1 expression but presents other effects [84]. More mechanistic studies are needed to evaluate and potentially confirm the beneficial effects of Res and its derivatives as an additional therapeutic approach for treating metabolic diseases. Other HO-1 Inducers. Metalloporphyrins, such as CoPP and hemin, which are prototypical inducers of HO-1 and are commonly used in experimental cell culture and animal models, do not seem to be applicable for clinical interventions, because they lack cell-specificity and are severely toxic when used for long periods. For example, CoPP suppresses thyroid and testicular hormone concentrations in serum, affects copper metabolism, elevates plasma ceruloplasmin levels, reduces hepatic cytochrome P450 levels, and has many other side effects [85][86][87]. By contrast, upregulation of HO-1 expression by natural dietary components widely present in food and nutraceuticals, such as Cur and Res, may exert beneficial effects under normal and pathological experimental conditions. A major advantage to the medical use of these compounds is that the dosage of each compound in a particular formulation needed to achieve a particular health-related effect is likely to be far below the range in which the compound may be toxic. Therefore, these substances may represent an alternative means of inducing HO-1 expression in humans. In addition, a number of currently available pharmacologic compounds, which can induce HO-1 expression and are applied in standard therapies, may be also useful for clinical interventions in metabolic disorders. Although potentially many phytochemicals may qualify to serve as HO-1 inducers, Cur and Res appear to have been tested most commonly in the literature. Besides Cur and Res, quercetin, a polyphenol found in a variety of fruits and vegetables, has been also reported to induce in vitro HO-1 expression via the MAPK/Nrf2 pathway [88]. Overweight Zucker rats and normal-weight rats given quercetin had significantly lower glucose, triacylglycerols, and free fatty acids and higher insulin levels than rats that received no quercetin [89], suggesting its potential to treat metabolic diseases. However, whether quercetin, by inducing in vivo HO-1 expression, could ameliorate the risk factors that lead to the development of metabolic diseases remains to be investigated. Conclusions. There is increasing evidence that complications related to metabolic diseases are associated with elevated oxidative stress, inflammatory stress, and ER stress (Figure 1) [6]. The integration of these stresses plays an important role in the pathogenesis and development of metabolic diseases [6]. HO-1 expression has been shown to be protective against metabolic diseases, at least in part, by reducing these stress responses, and this has generated immense interest in HO-1 as a therapeutic target. Naturally occurring phytochemicals ameliorate the risk factors that lead to the development of metabolic diseases, but the mechanisms of their actions remain to be established. 
In animal models, some of them, such as Cur and Res, reduce the incidence of metabolic diseases via Nrf2-dependent HO-1 expression [72,78-80], which allows them to be considered as HO-1 inducers that may provide an alternative strategy for controlling the initiation and progression of metabolic diseases. However, their introduction into the clinical setting may be hindered largely by their poor solubility, rapid metabolism, or a combination of both, ultimately resulting in low therapeutic concentrations at the target site. To overcome these bioavailability limitations, advanced drug delivery systems, designed to provide localized or targeted delivery of these agents, may provide a more viable therapeutic option in the treatment of metabolic diseases.
Dynamical Coordination of Hand Intrinsic Muscles for Precision Grip in Diabetes Mellitus This study investigated the effects of diabetes mellitus (DM) on dynamical coordination of hand intrinsic muscles during precision grip. Precision grip was tested using a custom designed apparatus with stable and unstable loads, during which the surface electromyographic (sEMG) signals of the abductor pollicis brevis (APB) and first dorsal interosseous (FDI) were recorded simultaneously. Recurrence quantification analysis (RQA) was applied to quantify the dynamical structure of sEMG signals of the APB and FDI; and cross recurrence quantification analysis (CRQA) was used to assess the intermuscular coupling between the two intrinsic muscles. This study revealed that the DM altered the dynamical structure of muscle activation for the FDI and the dynamical intermuscular coordination between the APB and FDI during precision grip. A reinforced feedforward mechanism that compensates the loss of sensory feedbacks in DM may be responsible for the stronger intermuscular coupling between the APB and FDI muscles. Sensory deficits in DM remarkably decreased the capacity of online motor adjustment based on sensory feedback, rendering a lower adaptability to the uncertainty of environment. This study shed light on inherent dynamical properties underlying the intrinsic muscle activation and intermuscular coordination for precision grip and the effects of DM on hand sensorimotor function. muscles have lower coupling of surface electromyography (sEMG) than extrinsic muscles 18 . Stronger coupling of extrinsic muscles is favorable to synergistic force production, whereas the weaker coupling among intrinsic muscles helps independent control of individual fingers for fine motor tasks [19][20][21] . It would be an intriguing issue whether the DM impairs the coordination of intrinsic muscles during sustained precision grip that requires continuous sensory inputs and real-time neuromuscular adjustments. Quantifying intermuscular coordination entails suitable analytical tools. Traditional time-domain approaches, such as the cross-correlation analysis, are based on the magnitude computation and could be easily tampered by abrupt "cross-talks" or additivity noise over the original sEMG waveforms 22,23 . The frequency-domain methods, such as the coherence analysis, work on the cross-spectrum of the sEMG signals and usually show limitations in analyzing the by nature highly complex, nonlinear and nonstationary sEMG signals 24 . Recently, the cross recurrence quantification analysis (CRQA) has been developed as an advanced tool in assessment of dynamical coordination of nonlinear, nonstationary neurophysiological signals 25 . The CRQA provides a set of parameters to quantify the structure of a cross recurrence plot (CRP), which is a visualization of a cross-matrix consisting of all the moments whenever the trajectories of one system pass through the neighborhoods of another system trajectories in the same phase space 25 . Superior to the traditional time-and frequency-domain approaches, the CRQA has advantages in evaluation of the intermuscular coordination as it reveals the dynamical interactions between the two muscles with robustness against nonstationarity transients, model presumption, outliers, and noise 26 . 
A group of measures derived from CRQA provide quantifications for the deterministic or stochastic components, structural complexity, periodic patterns, or motor synchronization underlying the dynamical coordination across muscles; and these measures can disclose the functionally meaningful features in highly fuzzy, complex, and dynamic control process in neuromuscular systems 25,27 . This study aimed to investigate the effects of DM on the dynamical coordination of hand intrinsic muscles during precision grip using CRQA. The sEMG signals of the abductor pollicis brevis (APB) and first dorsal interosseous (FDI) were recorded and analyzed using CRQA. In order to examine the neuromuscular control in accordance with impaired sensory inputs with DM, the precision grip was tested by two contrast conditions -the apparatus with stable and with unstable load. The unstable load was supposed to be an effective perturbation for grip control. It was hypothesized that patients with DM would exhibit higher CRQA parameters than the controls during precision grip. It was also hypothesized that the loads could interfere with the intermuscular coordination for patients with DM, and there would be lower CRQA parameters with the unstable load than with the stable load. Subjects. Thirty-two individuals with Type II DM and the same number of gender-and age-matched healthy subjects participated in the experiment. Subjects' characteristics are presented in Table 1. All subjects were righthanded with normal or corrected-to-normal vision. The handedness was verified by the Edinburgh Handedness Inventory 28 . The DM patients received clinical diagnosis of Type II DM following the 1997 guideline of American Diabetes Association (ADA). The glycated hemoglobin (HbA1c), fasting plasma glucose (FBG) and post meal blood glucose (PBG) were examined for each DM patients on the day of experiment. The healthy subjects should never be diagnosed or suspected of having hyperglycemia, and their blood sugar levels were tested on spot and should be lower than the criterial level. None of the enrolled subjects reported any history of (1) central nervous system disorders (e.g. multiple sclerosis, Parkinson's disease, stroke); (2) musculoskeletal or neurological trauma or surgical intervention on their arms and hands; (3) entrapment neuropathies (e.g. cervical spondylosis, brachial plexus injury, shaft tube syndromes or carpal tunnel syndromes); (4) osteoarthritis or rheumatoid arthritis of the hand or wrists. All the subjects were fully informed the purposes of this study and provided written consent prior to the experiment according to the protocols approved by the Institutional Review Board at Shandong University. This study was in accordance with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Neuromuscular Tests. Potential effects of DM on the neuromuscular system were examined using a group of tests. The neuropathy total symptom score-6 (NTSS-6), the Michigan neuropathy screening instrument (MNSI) and the Michigan hand outcomes questionnaire were three tests to screen and evaluate the symptoms and degrees of diabetic neuropathies, as well as the effect of DM on the hand functions. The fingertip tactile sensitivity of the thumb and index finger was assessed using the Semmes-Weinstein Monofilament tests following a standard protocol 29 . The nerve conduction velocity of the median nerve was assessed for both the sensory and motor pathways. 
Both the grip and pinch strength values were assessed following a standard testing protocol 30 . All the tests were equally performed on the patients and controls. Experimental Set-Up. An apparatus was designed to measure the forces of the thumb and index finger during precision grip (Fig. 1). Two miniature 6-component force/torque transducers (Nano17, ATI Industrial Automation, Inc., Apex, NC) were instrumented inside plastic shields for the thumb and index finger, respectively ( Fig. 1a,b). The x-and y-axes were along the vertical and horizontal directions in the surface plane of each transducer, and the z-axis was in the perpendicular direction to the contact surface (Fig. 1a). The signals were amplified and multiplexed using custom ATI interface boxes (ATI Industrial Automation, Inc., Apex, NC) and converged to 16-bit analog-digital converters (PCIe-6343, National Instrument, Austin, TX). The signals from the two transducers were recorded, transmitted and processed independently with interactions across channels (Fig. 1b). The pinching surfaces were covered with 100-grit sandpaper and oriented in parallel with a pinch span of 50 mm. A steel ball was rigidly attached below the center of the bottom to offer an extra stable load, or was hung at the center of the bottom with a piece of string to offer an unstable load. The gross weight of the apparatus with the stable/unstable load was 172 g. Surface EMG signals of the APB and FDI muscles were recorded by two miniature sensors using a wireless EMG system (Trigno TM Mini, Delsys, USA). The EMG system uses silver-contact wireless bipolar bar electrodes with fixed 10 mm inter-electrode spacing. This parallel bar detection approach ensures reliability, robustness to cross-talk, ease-of-use and consistency across all data collection protocols. The mini-electrodes were positioned above the muscle belly parallel to the muscle fibers for the APB and FDI, respectively (Fig. 1c). Positioning of the electrodes was confirmed by an experienced therapist through testing functional movements related to the target muscles following the recommendation 31 . To improve the signal quality, the skin covering the APB and FDI muscles were washed with water and soap, shaved, and cleaned with alcohol. The electrodes were then fixed on the skin with adhesive elastic tape. Proper arrangement of the electrodes could minimize the effects of cross-talk on the target muscles. The sEMG signals were band-pass filtered at 20-450 Hz. Digit force and sEMG collections were implemented using a custom Labview program (National Instrument, Austin, TX). The force and EMG signals were recorded simultaneously at a sampling frequency of 1000 Hz. Grip Test Procedures. Each subject sat comfortably in a height adjustable chair at a testing table. The right upper arm was approximately abducted 60° in the frontal plane and flexed 30° in the sagittal plane. The elbow was flexed approximately 120° and the forearm was in a neutral pronation/supination position. The grip test included the following steps. Session I -Relaxation. Each subject was required to position their hands on the testing table without any action. Subjects were required to maintain a relaxed state for 1 min, watching their hands during the first 30 s and shutting their eyes for the second 30 s. This session serves as a reference for the following grip sessions. Session II -Grip with stable load (SL). Subjects slightly opened their thumb and index finger but closed up the middle, ring and pinky fingers. 
Once the subject heard the start signal, they reached their grasping hand close to the apparatus, grasped and held it about 30 cm over the table for 1 min. The thumb abducted and internally rotated to establish a posture to oppose the index finger in a dexterous manner 32 . The metacarpophalangeal joint of the thumb was fully extended without flexion observed. The metacarpophalangeal, proximal interphalangeal and distal interphalangeal joints of the index finger flexed about 30°, 45° and 20°. Subjects were instructed to hold the apparatus as stably as they could, maintaining the base of the apparatus horizontal, using a minimum grip force that just prevented the apparatus from slipping. Subjects watched their hands during the first 30 s, and closed their eyes for the other 30 s. After holding it for 1 min, the apparatus was returned to the initial position. Session III -Grip with unstable load (UL). Subjects grasped and held the apparatus with the tips of the thumb and index finger as they did in Session II (Fig. 1c). The load (steel ball) with the string in Session III formed a pendulum. Given an initial angle, the pendulum oscillated around the center of the base in a simple harmonic motion in the sagittal plane (Fig. 1c). There was one trial for both hands in Session I and four trials for each hand in Sessions II and III. The testing orders for Sessions II and III, as well as for the left and right hands, were randomized between subjects. A one-min rest was given between two consecutive trials, and a five-min rest was provided between states. Each subject was familiarized with the protocol before the formal tests. Data Analysis. The forces of the thumb and index finger and the sEMG signals from the APB and FDI muscles from one representative subject are depicted in Fig. 2. For both the force and sEMG signals of each trial, the holding phase from 20-40 s with visual feedback and the phase from 50-70 s without visual feedback were retained for the following signal processing and statistical analysis (Fig. 2a-c). The mean values and coefficient of variation of the thumb and index finger forces were calculated for the phases with and without visual feedback 33,34 . The root-mean-square (RMS) and median power frequency (MPF) of the EMGs recorded from the APB and FDI were also calculated for the two visual conditions (with vs. without visual feedback). Recurrence quantification analysis (RQA) was applied to quantify the nonlinear dynamical properties of the APB and FDI contractions. For the N-length sEMG series of the APB {x(i), 1 ≤ i ≤ N} and that of the FDI {y(i), 1 ≤ i ≤ N}, the phase-space trajectories were reconstructed from the embedded vectors u(i) = (x(i), x(i + τ), ..., x(i + (m − 1)τ)) and v(i) = (y(i), y(i + τ), ..., y(i + (m − 1)τ)), where m is the embedding dimension and τ is the time delay. The recurrence plot (RP) of the APB is defined as
RP(i, j) = Θ(ε − ‖u(i) − u(j)‖).  (1)
Similarly, based on formula (1) we can get the RP of the FDI by replacing u(i) with v(i). The dynamical correlation of the EMG signals recorded from the APB and FDI was analyzed using the CRQA. The CRQA is calculated from the CRP, a graphical representation of a cross matrix defined as
CRP(i, j) = Θ(ε − ‖u(i) − v(j)‖),  (2)
The DET is defined as structures to all recurrence points in the RP or CRP, reflecting the deterministic or predictable structures between two dynamic systems 35 . Define the ENTR as: The ENTR refers to the Shannon entropy of the probability p(l) to find a diagonal line having exactly length l in RP. It is related to the exponential divergence of the phase space trajectory and correlation entropy, and reflects the complexity of the RP in respect of the diagonal lines 25 . The LAM is defined as: The LAM quantifies the laminar phases which is the ratio of recurrence points forming vertical structures to all recurrence points in RP. The distribution ε P v ( ) of vertical line lengths v can be used to quantify laminar phases occurring in a system; and the computation of LAM is realized for those v that exceed a minimal length v min . For the recurrence plots of the current study, v min = 2 is an appropriate value. The denominator summation counts all the recurrence points. Therefore, the LAM will decrease if the RP consists of more single recurrence points while less vertical structures. The LAM demarcates time intervals during which the system's state is relatively constant compared to intervals of sudden bursts of activity 25,35 . The parameters of RQA and CRQA, such as embedding dimension or time delay, were determined by both quantitative and empirical ways. First, mutual information (MI) and false nearest neighbors (FNN) were applied to screen the time delay and embedding dimension, respectively. For all trials, the time delay estimated by MI ranged from 1 to 9 but mostly at 1 (64%) and 2 (21%). The embedding dimension estimated by FNN ranged from 1 to 13, with a majority at 1 (82%). To deal with the data with identical parameters, the empirical values were also taken into account 27 . Eventually, the RQA and CRQA was performed on all trials using an embedding dimension of 1, a time delay of 1 sample, and a threshold setting to 10% of the maximum phase space radius. Window with the size 1000 points (1 s) and with an overlap of 200 points (0.2 s) were applied on the signal series to calculate the RQA and CRQA values of each window (Fig. 2d,e). The mean values of all the windows were calculated to indicate the results of the signals. Parameters of RQA and CRQA were implemented with the cross recurrence plot toolbox 5.16 of MATLAB (The Mathworks, Natick, MA, USA). The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. Statistical Analyses. Statistical analyses were performed using SPSS (SPSS Inc., Chicago, IL). Kolmogorov-Smirnov test was applied to examine the data distribution. Independent samples t-tests were applied to examine the difference between the DM and controls on neuromuscular functions. To examine differences between DM and controls in digit force performance, analysis of variance (ANOVA) with repeated measures on the mean and coefficient of variation (CV) of digit forces with Condition (with versus without visual feedback), Hand (the right versus left) and Digit (the thumb versus index finger) as within-subject factors and Group (DM versus controls) as the between-subject factor, for the states with stable and unstable loads respectively. To quantify the effects of DM on muscle activity, we performed ANOVAs on RMS, MPF, RR, DET, ENTR, LAM with repeated measures with Condition, Hand and Muscle (APB versus FDI) as within-subject factors, and with Group as between-subject factor. 
For the parameters showing significant differences between the DM and controls, a repeated-measures ANOVA with State (relax, SL, UL) as the within-subject factor and Group as the between-subject factor was further applied. The Huynh-Feldt correction was used when the assumption of sphericity was violated. Post-hoc pairwise comparison was performed using the Holm-Sidak test. A p-value of less than 0.05 was considered statistically significant. Results. Neuromuscular function of the DM patients and controls is shown in Table 2.
Table 2. Neuromuscular test scores of the DM patients and the controls. (1) Neuropathy total symptom score-6; (2) Michigan neuropathy screening instrument; (3) Michigan hand outcomes questionnaire; (4) Semmes-Weinstein Monofilament; (5) Nerve conduction velocity. *Significant difference between the patients and controls (t-test, p < 0.05); **significant difference between the patients and controls (t-test, p < 0.001).
The DM group had higher scores in NTSS-6 and MNSI than the controls (p < 0.05). The SWM scores of the thumb and index finger of both the left and right hands of DM were significantly higher than those of the controls (p < 0.05). The DM group showed a reduction in the NCV along the left and right median nerves (motor: p < 0.001; sensory: p < 0.05). No significant difference was observed between the two groups in MHQ (left: p = 0.176; right: p = 0.279), grip strength (left: p = 0.138; right: p = 0.187) or pinch strength (left: p = 0.310; right: p = 0.580). The mean and CV of the thumb and index finger forces with stable and unstable loads are shown in Table 3. The repeated measures ANOVA did not show any significant difference between the DM and controls for either the mean (SL: p = 0.355; UL: p = 0.415) or CV (SL: p = 0.431; UL: p = 0.132). The within-subject factors, such as hand and visual condition, showed significant effects on the mean and CV of digit forces. The mean force differed significantly between the right and left hands (F(1,62) = 4.114, p < 0.05), between the visual and non-visual conditions (F(1,62) = 45.450, p < 0.001), and between the thumb and index finger (F(1,62) = 11.828, p < 0.05) with SL; and between the visual and non-visual conditions (F(1,62) = 55.915, p < 0.001) and between the two digits (F(1,62) = 20.450, p < 0.001) with UL. The CV showed significant differences between hands (F(1,62) = 5.974, p < 0.05) and between visual conditions (F(1,62) = 49.113, p < 0.001) with SL, and between the visual and non-visual conditions (F(1,62) = 56.912, p < 0.001) with UL. The repeated measures ANOVA showed that the DM did not affect the RMS (SL: p = 0.794; UL: p = 0.932) or the MPF (SL: p = 0.829; UL: p = 0.778) of sEMG (Table 4). Significant differences in RMS were observed between the visual and non-visual conditions. The RPs corresponding to the APB and FDI muscle contractions within 1 s (shown in Fig. 2d,e) are depicted in Fig. 2f and g, respectively. The RQA parameters over time from one representative DM patient and one control subject are illustrated in Fig. 3. Statistical results of the RQA parameters of the APB and FDI in DM patients and controls are shown in Fig. 4; the DM patients showed significantly higher RQA parameters of the FDI than the controls (Fig. 4b,d,f,h), whereas no significant group difference was found for the APB (Fig. 4a,c,e,g). Compared to the relaxed condition, holding the apparatus with either SL or UL led to a significant increase in the DET, ENTR and LAM of both the APB and FDI (p < 0.05, Fig. 4). For the patients with DM, precision grip with UL showed significantly lower DET, ENTR and LAM of the FDI than with SL (p < 0.05, Fig. 4). No significant difference was found between the SL and UL conditions in the RQA parameters of the APB (p > 0.05, Fig. 4). Discussion. This study examined the effects of DM on the dynamical coordination of intrinsic muscles during precision grip. Compared to the healthy subjects, the patients with DM had higher blood sugar levels (e.g. the HbA1c and PBG in Table 1), reduced digit-tip tactile sensitivity (e.g. the SWM in Table 2), lower motor and sensory nerve conduction velocity along the median nerve (Table 2), more neuropathy symptoms (e.g. the NTSS-6 and MNSI in Table 2) and impaired overall hand function (MHQ in Table 2). The left and right hands showed comparable reductions in tactile sensation and sensory and motor conductivity via the median nerve, which confirmed the bilateral development of neuropathy - a typical manifestation of DPN 36,37 (Table 2). Although several studies have reported that patients with DM produced relatively lower grip force than controls 16,38-40 , Gorniak et al. argued that the subject characteristics in most of the previous studies were loosely controlled and led to contradictory conclusions about the effects of DM on motor ability (e.g. the grip and pinch strength) 13 . In their study, the DM patients and the controls did not show a significant difference in their pinch forces 13 . More studies support the notion that DPN at an early stage could be associated with deficits of sensory afferents but not necessarily with detectable reduction in grip strength 41,42 . Results showed that the DM did not affect the amount (mean) or variability (CV) of the thumb and index finger forces during stable precision grip (Table 3). Considering that precision grip is a sensorimotor process, the motor system is under both feedforward and feedback control. The feedforward mechanism allows individuals to program appropriate motor commands prior to grasping according to previous experiences of the object properties, whereas the feedback mechanism adjusts neuromuscular activations according to real-time sensory information 27,34,43 . By examining the grip force performance in the process from holding to lifting an object, Chiu et al. found that the DM patients exhibited a reduction in the capacity of online digit-force adjustment according to the inertial load during the object's dynamic movement 10 . The current study revealed that holding the object stably in air might exempt the digit force exertion from online feedback control based on peripheral sensory inputs, relying instead on feedforward control based on pre-programmed motor commands or default modes for force production 44 . It could be inferred that the feedforward control could extensively compensate for the loss of sensory information associated with the DM, guaranteeing similar digit force production in the patients and controls for the stable precision grip task. Results showed that when grasping and holding a light object (the gross weight of the apparatus was only 170 g), the patients with DM and the controls maintained roughly the same digit force levels and variability, as well as similar muscle contraction magnitudes (RMS) and frequencies (MPF, Table 4). Interestingly, based on the comparable force outputs and similar levels of muscle activation, patients with DM and the controls exhibited quite different dynamical patterns of intermuscular coordination (Figs 3-5). The DM patients had significantly higher CRQA parameters (RR, DET, ENTR and LAM) than the controls (Fig. 5). 
The increased RR shows a higher probability of similar states occurrence in the neuromuscular dynamic systems, revealing a higher regularity in the intermuscular dynamical coupling. The increased DET indicates a more deterministic structure (or predictable structures) of intermuscular coordination in the DM patients compared to the controls. The increased ENTR in DM reflects augmented complexity of the deterministic structure during coupling of the muscles. The higher LAM represents the increased occurrence of laminar states in the neuromuscular systems of the APB and FDI, implying more constant firing rate and reduced instability in the neuromuscular control in DM. In addition, the RQA results from individual muscles showed that the DM patients had significantly higher RR, DET, ENTR and LAM than the controls from the FDI's sEMG signals (Fig. 4b,d,f,h); by contrast, no significant difference between the patients and controls were found in the RQA indicators of the APB's sEMG signals (Fig. 4a,c,e,g). This finding suggests that the altered intermuscular coordination associated with DM would be highly related to the changes in neuromuscular activation of FDI. Previous electrophysiological studies have found that the patients with DPN had about 30% reduction of motor unit number estimates, 20% reduction of compound muscle action potentials and 15% reduction of mean firing rates on their FDI muscle compared to the healthy subjects, leading to muscular remodeling and altered firing patterns 45 . The degeneration and dysfunction of FDI associated with DPN would present in the dexterous hand task and significantly affect the dynamical structure of muscle activity during sustained precision grip. It should be noted that the effects of DM reflected by RQA (Fig. 4) and CRQA (Fig. 5) were associated with muscle contractions; otherwise, at relaxed state without muscle contraction, no significant difference was observable between the DM and control groups. This may further confirm the The CRQA is an analytical tool to identify functionally meaningful actions in fuzzy, complex, and dynamic neuromuscular activities, such as deterministic or stochastic components, structural complexity, periodic patterns, or motor synchronization, underlying the dynamical coordination across the APB and FDI muscles that contribute to prehensile kinetics 25 . The anatomical and neural arrangement of the APB and FDI muscles substantiates the two as relatively interdependent systems that strongly couple and intelligently match each other during precision grip 46,47 . This study found that the DM patients had significantly higher CRQA parameters (RR, DET, ENTR and LAM) than the controls (Fig. 5), suggesting altered dynamical coordination across muscles. A compensatory mechanism underlying grip control may explain for this phenomenon 27,33 . With long-term high blood glucose, sensory feedback of DM patients was intensely obstructed (Table 2) so that subjects need to compensate for the loss of sensory feedback by relying more on the feedforward control. Under this compensatory mechanism, preprogrammed motor commands were reinforced but the online feedback regulation was diminished, rendering more deterministic structures for the APB-FDI coordination. This study examined precision grip with UL in addition to SL. The purpose to set the UL condition is to provide a perturbation by which the effects of DM on precision grip control would be magnified. 
Results showed that precision grip with UL had significantly higher DET, ENTR and LAM for the FDI (Fig. 4d,e,f), and higher CRQA parameters (DET, ENTR and LAM) across the APB and FDI (Fig. 5b,c,d), in comparison of the grip with SL. This revealed that the load perturbation could remarkably interfere with the FDI contraction and intermuscular coordination between the two intrinsic muscles during sustained precision grip. This finding is in line with the previous studies that both the intrinsic muscle activation and muscle synergy for precision grip are under modulation of environmental factors [48][49][50] . It is noteworthy that the differences in RQA and CRQA parameters between the UL and SL were observed in patients with DM rather than in the controls, suggesting that the patients with DM had lower adaptability to the uncertainty of environment than the healthy individuals. Specifically, as grasping and interacting with the object with UL, the digits need to produce suitable forces to the changing center of gravity of the apparatus, thereby demanding higher level of online regulation than grasping an object with SL 51,52 . The higher RQA and CRQA values in DM with UL indicate a more regular organization of motor potentials in FDI and a stronger intermuscular coupling between APB and FDI than with SL. These findings provide evidence that the sensory deficits in DM could remarkably decrease the capacity of online motor adjustment based on sensory feedback, making the grip control rely more on the feedforward strategy. The APB and FDI were selectively examined in this study since they are key intrinsic hand muscles involved in precision grip tasks and are reflective of abundant neurophysiological information 53,54 . Relative to the other intrinsic muscles the ABP and FDI muscles are easy to access by the surface EMG electrodes. In literature, the APB and FDI are a muscle pair that has been frequently examined, particularly in the studies focusing on the intermuscular coherence or neural drive coordination for dexterous manual tasks 46,55 . It is noteworthy that more muscles, including the intrinsic muscles like the flexor pollicis brevis and opponens pollicis and the extrinsic muscles like the abductor policis longus and extensor pollicis brevis, may also contract synergistically with the APB during precision grip. Activation of the synergistic muscles may share the motor commands delivering to the target muscles and alter the regulation of motor unit recruitment for the specific muscle. This could partially explain the results that no significant effects of DM was found on the APB activation (Fig. 4a,c,d,g). Previous studies found there are probably no unique and deterministic synergistic muscle activation pattern in low force range during precision grip 47 . It would be essential to examine more synergistic muscles to better understand how they interact with the APB and how the synergistic contraction is affected by DM with higher level of force production. This study investigated the effects of DM on dynamical muscle coordination underlying sensorimotor control for a functionally meaningful action. A freely moveable apparatus was chosen instead of a spatially fixed handle. 
By this set-up the thumb and index finger need to comply with mechanical or task constraints such as producing zero residual forces and moments; guaranteeing enough load force to counterbalance the weight of the handle; avoiding handle tilt from unbalanced forces across the digits, and accommodate to the load perturbations, etc. To fulfill all these constraints the motor system needs to produce well-coordinated digit forces 34,56 . The DM-related patterns observed in the dynamical muscle coordination were associated with the inter-digit force coordination during precision grip of a freely moveable apparatus. It would be of interest to further examine the effects of DM on the dynamical muscle coordination when individuals pinch upon a spatially fixed apparatus with less coupled digit forces. This may help us learn more about the associations between each muscle activation and the specific digit force production. Conclusions The DM changed the dynamical structures of muscle activation for the FDI and of the intermuscular coordination between the APB and FDI during stable precision grip. More deterministic structures were found in the sEMG signals of FDI in DM patients, potentially attributable to the muscular remodeling and altered firing patterns associated with DM. A compensatory feedforward mechanism may be responsible for the stronger intermuscular coupling between the APB and FDI muscles. Sensory deficits in DM remarkably decreased the capacity of online motor adjustment based on sensory feedback, rendering a decreased adaptability to the uncertainty of environment. This study shed light on the inherent dynamical properties underlying the intrinsic muscle activation and intermuscular coordination, and the role of DM on sensorimotor function. Findings of this study may facilitate the development of a non-invasive method for clinical diagnosis of DPN.
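As a complement to the recurrence-based sketch given earlier, the window-level RMS and MPF measures described in the Data Analysis section of this study could be computed as in the minimal, hypothetical Python sketch below; the function names, the Welch segment length and the usage comments are illustrative assumptions rather than the authors' actual code.

```python
import numpy as np
from scipy.signal import welch

def rms(x):
    """Root-mean-square amplitude of an sEMG window."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean(x ** 2))

def median_power_frequency(x, fs=1000.0):
    """MPF: frequency below which half of the total sEMG power lies.

    Uses a Welch power spectral density estimate; fs is the sampling rate in Hz.
    """
    x = np.asarray(x, dtype=float)
    f, pxx = welch(x, fs=fs, nperseg=min(256, len(x)))
    cumulative = np.cumsum(pxx)
    return f[np.searchsorted(cumulative, 0.5 * cumulative[-1])]

# Hypothetical usage on a 20-s holding phase (20,000 samples at 1 kHz):
# emg_phase = ...   # band-pass filtered (20-450 Hz) sEMG segment
# amp, freq = rms(emg_phase), median_power_frequency(emg_phase, fs=1000.0)
```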
Benchmark Results and Theoretical Treatments for Valence-to-Core X-ray Emission Spectroscopy in Transition Metal Compounds We report measurement of the valence-to-core (VTC) region of the K-shell x-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results on materials whose simple crystal structures should not, a prior, pose deep challenges to theory, will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment. I. Introduction In the current landscape, the field of x-ray absorption spectroscopy (XAS) occupies a position of broad scientific scope and technological importance. This footing, however, was not easily achieved. While the roots of XAS extend back to the first observations by de Broglie in 1913, [1] the first 60 years of its life was spent as a topic of fundamental research with limited opportunity for application. It was not until the 1970s, with the establishment of several electron storage rings for dedicated synchrotron radiation experiments, [2][3][4] that XAS became a methodology with steadily growing reliability, availability and, especially, breadth of impact. Since that time a tremendous amount of work and resources have gone into building synchrotron lightsources, and more recently x-ray free electron lasers, around the world. [5][6][7] The theoretical understanding of XAS has been similarly fraught. Settling the central conceptual issue of the locality of the interrogated density of states was a nearly 50 year battle. [8] The discovery of the 'EXAFS equation', [9] effectively casting the extended absorption oscillations as a single-or few-scattering process, was merely the first shot which launched several decades of work in finding optimal descriptions of the phase shifts due to atomic and inter-atomic potentials. [10][11][12][13][14][15][16][17][18][19] The establishment of reliable theoretical predictions and interpretations for oscillations in the main body of the near-edge fine structure required, first, a simplified computational framework for the influence of full-multiple scattering and, second, a careful treatment of core-hole effects. [8] While a number of these issues have been settled, others are still matters of contemporary research. Chief among these is the interpretation of preedge features, especially those coupled to charge-transfer effects or other dynamical rearrangement of charge density that go beyond 'typical' excitonic effects induced by the corehole potential. [20][21][22][23][24][25][26] The history of XAS hence represents a clear example of a reoccurring lesson in science: the growth of any analytical method requires parallel development in each of instrument technology, cross-technique validation, and theory. Indeed, due to the lack in each of the prior three criteria, few could have seriously imagined in the early 1970's that XAS would evolve to the point where it is now routinely used to solve forefront problems in metallorganic chemistry or that it would become a work-horse for industrial and fundamental research in catalysis [27][28][29][30][31] and electrical energy storage, [32][33][34][35][36][37] to name only a few prominent examples. 
Following in the technical and, to a growing extent, historical footsteps of XAS, x-ray emission spectroscopy (XES) has over the past few years emerged as an important new spectroscopic tool, spreading from the realm of fundamental condensed matter science to, e.g., applications in catalysis, [28,[38][39][40][41][42][43] electrochemistry, [44][45][46] biological sciences. [47][48][49][50][51][52][53] While the semi-core and deeper core transitions involved in XES are often reasonably well described by perturbed atomic multiplet approaches due to the extreme localization of the atomic-like initial and final states, the situation is markedly less clear for those transitions involving valence electron density of the host species and ligands. As the name suggests, this valence-to-core (VTC) x-ray emission involves the filling of a deep core-hole via de-excitation of valence-level electronic states. The valence orbitals, with energies within a few eV to ~15 eV of the Fermi level, are the most sensitive to the chemical environment and therefore VTC-XES has much greater sensitivity to local coordination effects than do diagram lines involving only deeper core shells. While various other x-ray spectroscopy techniques exist (e.g., x-ray photoemission, x-ray absorption, x-ray Raman, etc.), there exist a number of fine issues of local and electronic structure that are best addressed through VTC-XES. A recent, well-known example is the identification the central atom in the nitrogenase ironmolybdenum cofactor for dinitrogen reduction in biological and industrial catalysis. [41] From the most general perspective, VTC-XES should be viewed as a natural complement to the preedge and very near-edge regions of XAS, in that VTC-XES is sensitive to the occupied, rather than unoccupied, states near the Fermi level. At the same time, VTC-XES comes with a certain advantage in that, due to the final-state rule, theoretical treatment of VTC-XES is simplified because of the absence of a core hole after emission. Again, following the developmental history of XAFS, and all other modern spectroscopies, when the applications and demands of VTC-XES expand, so too must the supporting infrastructure in experimental apparatus and in methodology and validation of theory. While the early stages of growth in XES have benefitted from the pioneering work conducted at several synchrotron end-stations, [51,[53][54][55][56][57][58] the relative scarcity of these dedicated beamlines is a serious hurdle to routine application. This has led to continuing effort by several groups to develop laboratory-based XES capabilities. [59][60][61][62][63] Here, using this equipment at the University of Washington, [64][65][66] we present a high-quality VTC dataset of several inorganic Zn and Fe compounds. These compounds provide an interesting range of local electronic and atomic structure while retaining sufficient structural simplicity such that theoretical treatment should not, a priori, be challenged by material complexity. To date, the most successful models are those based on density functional theory (DFT). Different implementations, however, often differ in significant ways (treatment of electron exchange-correlation, basis sets, real vs. reciprocal space, inclusion of relativistic effects, etc.). We therefore present a critical assessment of several state-of-the-art DFT-based electronic structure codes in the context of this new experimental dataset. 
While the proper choice of theoretical method may vary from application to application, the present investigation will help identify strengths and limitations of the various approaches. We note that we are not the first to seek a better understanding of the validity of theory in order to expand the range of application of VTC-XES. In recent years, the DeBeer group and collaborators have embarked upon a course of study aimed at establishing the information content available in VTC-XES of complex molecular systems [38][39][40][41][42][43]52] with the ultimate goal of using time-dependent DFT to develop an understanding of chemical information in a molecular orbital framework. This manuscript continues as follows. First, in section II, we present experimental details. This includes both sample preparation and details of the laboratory-based spectrometer used here. Second, in section III, we provide technical details for the implementation of the three different theoretical codes that are compared to experiment. Next, in section IV, we present results and discussion. This begins with a necessary demonstration of baseline spectrometer performance metrics and the methods used for subtraction of fluorescence contributions not associated with the VTC transitions, subsequently continuing to a complete presentation of all experimental and theoretical results. We conclude in section V. II. Experimental All samples for this study were prepared from high purity powders (99.9% or better) from Sigma Aldrich or Alfa Aesar, the exception being the Zn and Fe metal samples which were foils (99.9%) from ESPI Metals. Powder samples were pressed into few-mm thick pellets and encased in pouches made from 25-μm thick polyimide films. Although VTC features were first observed in the laboratory as early as the mid-1930s, [67][68][69] it is only in recent years that laboratory-based equipment has been employed in chemical studies. [59] Here we employed a Rowland-circle spectrometer developed at the University of Washington. [64][65][66] This low-powered prototype instrument achieves synchrotron-quality energy resolution and also count rates comparable to what would be obtained for the same XES studies at monochromatized bending magnet beamlines at third-generation synchrotrons. Briefly, sample fluorescence was stimulated via the output from a commercial x-ray tube. We note that the spectral resolution is poorer for the Fe compounds than it is for Zn (where it is close to core-hole limited). We believe this result stems from defects in the Ge (620) optic leading to increased bandwidth. Nonetheless, the performance is sufficient to cleanly resolve key features in the VTC spectra. As DFT is ill-equipped to model the core-to-core Kβ 1,3 emission due to difficulties in correctly estimating the 3p-3d splitting, the intensity contribution of the high-energy tails of these lines is typically subtracted from the valence region for comparison of theory to experiment. To this end, each full spectrum was fit to a series of pseudo-Voigt functions and a constant background using the Blueprint XAS package. [70,71] In addition to the main Kβ 1,3 and valence features, we include extra curves to model the multi-electron excitation peaks (KLβ) above the Fermi level [72][73][74] and the radiative Auger satellites [73] in the intermediate area, as such features are not accounted for in the base theories. 
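As an illustration of this decomposition step, the short sketch below fits a spectrum to a constant background plus a sum of pseudo-Voigt peaks and then subtracts the non-VTC components. It is only a minimal stand-in for the actual Blueprint XAS fits: the synthetic data, peak positions, widths, and starting values are all placeholders, and the additional components discussed next (the Au Lα scatter line, KLβ satellites, radiative Auger intensity) would simply enter as further terms in the sum.

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(e, amp, cen, fwhm, eta):
    """Linear Gaussian/Lorentzian mix with a common FWHM."""
    sig = fwhm / (2 * np.sqrt(2 * np.log(2)))
    g = np.exp(-0.5 * ((e - cen) / sig) ** 2)
    l = 1.0 / (1.0 + ((e - cen) / (fwhm / 2)) ** 2)
    return amp * (eta * l + (1 - eta) * g)

def spectrum_model(e, bkg, *peaks):
    """Constant background plus a sum of pseudo-Voigt peaks.
    peaks = (amp1, cen1, fwhm1, eta1, amp2, cen2, ...)."""
    y = np.full_like(e, bkg, dtype=float)
    for i in range(0, len(peaks), 4):
        y += pseudo_voigt(e, *peaks[i:i + 4])
    return y

# --- synthetic example standing in for a measured K-beta spectrum (placeholder values) ---
e = np.linspace(7020, 7120, 600)                       # emission energy (eV)
truth = spectrum_model(e, 5.0,
                       1e4, 7058.0, 6.0, 0.7,          # main core-to-core line
                       150.0, 7093.0, 5.0, 0.5,        # valence (VTC) feature
                       60.0, 7106.0, 8.0, 0.5)         # satellite-like feature
counts = np.random.default_rng(0).poisson(truth).astype(float)

p0 = [5.0, 9e3, 7058, 7, 0.5, 120, 7092, 6, 0.5, 50, 7105, 9, 0.5]
popt, _ = curve_fit(spectrum_model, e, counts, p0=p0,
                    sigma=np.sqrt(np.maximum(counts, 1.0)))   # Poisson weights

# isolate the VTC region: subtract the fitted background and main-line tail
vtc = counts - spectrum_model(e, popt[0], *popt[1:5])
```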
For the Zn spectra an additional pseudo-Voigt function is included to model the elastically scattered Au Lα 2 line originating from the tube anode. The width and position of this curve was constrained to be consistent across all samples. To emphasize the valence region in fitting, it was assigned a weighting of 6:1 relative to the Kβ 1,3 . Representative results of this procedure are shown in section IV.A, below. III. Theoretical Methods We perform calculations using three state-of-the-art, ab intio electronic structure packages: Quantum ESPRESSO (QE), [75] FEFF, [76] and NWChem. [77] While each of these codes has a basis in DFT, they are built around distinct treatments leading to unique calculations. We briefly discuss the methodology for each implementation below. First, calculations were performed within the generalized gradient approximation-DFT framework using ultra-soft pseudo potential with 125 Ry energy cutoff implemented in the QE package with adequate k-point sampling for convergence with the PBE correlation and exchange. [75,78] We calculate the off-resonant XES spectrum assuming the 'final-state rule' which assumes a filled core-hole and a screened valence-hole. The spectra calculated here consider only dipole contributions to the transitions and are thus due to p-type projection of the density of states (DOS). To simulate the natural core-hole lifetime broadening and experimental resolution, the calculated stick spectra were Lorentzian broadened by 6.0 eV and 2.5 eV for Fe and Zn respectively. Each spectrum was then shifted independently in energy to align with experiment. It should be noted that the calculated spectral widths tend to be unphysically compressed due to the well-known problem of DFT underestimating the band gap. [79] Second, theoretical spectra were also simulated using a full multiple scattering method implemented within the FEFF 9.6 code. [76,80] The potentials were calculated self-consistently using a 5.0 Å cluster. The spectra were calculated using a full multiple scattering radius of 6.0 Å. The initial core-state energy levels were calculated using the final self-consistent potential. This modification is intended to provide more accurate relative chemical shifts. Here, both electric dipole and quadrupole transitions were included. Following the standard practice for XES, these calculations were performed with no core hole. For comparison to experiment, these results were convolved with a Lorentzian (4.5 eV for Fe and 0.5 eV for Zn) and shifted independently in energy to match experiment. FEFF also includes calculations for the main Kβ 1,3 lines, but these contributions have been removed for the sake of comparison. Finally, the VTC-XES approach in NWChem is based on linear-response time-dependent density functional theory (LR-TDDFT), which has been used successfully to simulate the VTC-XES spectra of low-and high-spin model molecular complexes involving Cr, Mn and Fe transition metal centers in good comparison with experiment. [81] First a neutral ground state calculation is performed, a full core hole (FCH) ionized state is then obtained self-consistently where the 1s core orbital of the transition metal (TM) absorption center is swapped with a virtual orbital combined with the maximum overlap constraint to prevent core hole collapse. A LR-TDDFT calculation, within the Tamm-Dancoff approximation (TDA), is then performed with the FCH reference state to simulate the VTC emission process. 
This approach allows one to go beyond the single-particle picture as all orbital pairs with significant contributions to the emission process are included naturally. To describe excitations beyond the dipole approximation, higher-order contributions are included in the calculation of the oscillator strengths. All systems (non-magnetic Zn compounds) were represented with finite clusters constructed from crystal structures obtained from experiment. To account for the surface states, the clusters were terminated using a set of suitably chosen pseudo-hydrogen saturators whose charges are calculated using the formal charges of the surface atoms. [82,83] The Los Alamos effective core potential (LANL2DZ) [84][85][86][87] and associated basis sets were used for all the atoms (Zn, Cl, S, O) except the Zn absorbing center in each system which was represented with the Sapporo-TZP-2012 [88] all electron basis set. The PBE0 exchange-correlation functional [89] was used for all calculations. For comparison to experiment, each calculated spectrum was convolved with a 2.0 eV Lorentzian and energy shifted. We note that, in contrast to the QE and FEFF results, this shift was constant across all samples indicating an accurate accounting for chemical shifts. Unfortunately, the corresponding calculations for the Fe materials were not performed in this study due to the added complexity of dealing with magnetic effects in finite cluster calculations. IV.A. Instrument Baseline Performance To begin, it is important to briefly consider instrument performance and its systematic limitations before proceeding to the results themselves. First, in Figure 1, we show a typical spectrum from Fe metal, with data collection extending well past the Fermi level. Note that the figure is presented on a semi-logarithmic scale. The key point is that the noise floor from stray scatter is far below the intensity of the VTC transition. Second, in Figure 2, we show the instrumental insensitivity to sample preparation or positioning. This important characteristic, described in detail elsewhere, [65] is a consequence of moving the sample location slightly behind the Rowland circle and inserting an entrance slit onto the nominal 'source' location on the Rowland circle. The resulting spectra have less than 25 meV irreproducibility in overall energy scale even upon large sample movement or sample exchange. Third, as mentioned in Section II, DFT methods do not calculate several real fluorescence 'backgrounds' that contribute in the same energy range, i.e., the high-energy tail of the Kβ 1,3 fluorescence, radiative Auger contributions, or fluorescence resulting from multi-electron excitations. Consequently, we follow prior practice and make use of physically-motivated fits to these backgrounds; a representative example is presented in Fig. 3. In the case of the Zn samples, the 'background' contributions from the Kβ 1,3 are nearly identical across all species. Minor 'background' variances occurred primarily in the intensity of a modest background peak due to Au elastic scatter line and in the shape of the multi-electron excitations appearing about the single-particle Fermi level. The latter is not unexpected, as the observed intensity of these lines are strongly influence by sample geometry and the structure of the absorption coefficient, as measured in XANES. 
[74,[90][91][92][93] The issue of sample re-absorption of fluorescence prior to escape is also important in the shape of the Kβ 2,5 emission peak, as its high-energy side often straddles the rising K-edge. This creates an important systematic effect in thick samples; fluorescence above the absorption edge is preferentially quenched when escaping outward from the sample bulk. Due to the fine structure modulations in absorption, and in some cases strong pre-edge features, this effect often distorts spectral shape in significant ways. As an example, self-absorption causes the apparent asymmetry in the Kβ 2,5 peak of Fe shown in Figure 4. In principle, sample self-absorption is correctable if the absorption coefficient, as measured in x-ray absorption spectroscopy, and the sample thickness are known. An accurate correction, however, requires high precision in the relative energy scale between emission and absorption measurements. This is highly nontrivial as different instrumental setups are required for each type of measurement, and we do not attempt a correction for the data presented in this study, but a recent manuscript describes the methods needed for this correction in the context of multielectron excitations in Ni metal. [93] Here, we consider the performance of our calculations only below the nominal edge energies (7112 eV for Fe and 9659 eV for Zn). IV.B. Experimental Spectra and Comparison to Theory Our VTC-XES spectra for the Zn compounds are shown in Figure 5. Note, in particular, the clear splitting between the Kβ 2 and Kβ 5 lines, a situation that is somewhat unique to Zn among the transition metals. In Fe (below), and indeed most 3d-transition metals, these two features are indistinct due to core-hole broadening and are thus referred to together as Kβ 2,5 . The origins of the weak Kβ 5 line, which was first investigated in the early twentieth century, [94] remain uncertain. While it is generally regarded as quadrupole-allowed transitions from states of metal 3d character, [95][96][97][98] it has recently been suggested that the major contribution could come instead from dipole-allowed 4p-type states from neighboring atoms. [99] To address this issue, we present in Figure 6 the electric dipole and quadrupole contributions to the VTC spectrum of ZnO, as determined by FEFF. As the Kβ 5 sits atop the tail of the Kβ 2 line, we isolate the Kβ 5 dipole contribution for an accurate comparison. These calculations suggest that the above interpretations of the Kβ 5 origin are individually incomplete and that both terms significantly contribute to the overall intensity. We return now to a comparison of the three calculations shown in Fig. 5. Outside of the missing quadrupole contribution in QE, we see similar predictions made between it and FEFF in terms of splitting between features, including a common underestimation of the splitting between Kβ 2 and Kβ 5 . Such a compression is a well-known problem in DFT, arising from difficulties in correctly predicting the bandwidth. [79] In contrast, the LR-TDDFT-based approach in NWChem generally shows an improved relative spacing of features compared to experiment. This response approach allows one to go beyond the single-particle picture as all orbital pairs with significant contributions (or multi-configurational character) to the emission process are naturally included. 
We note that unlike the calculations for QE (3.2 eV spread) and FEFF (6.3 eV spread), the NWChem predictions require a single, consistent energy shift across all samples to align with experiment. The ability to reliably predict relative energy shifts across sample chemistries is an important feature in VTC-XES analysis and hence this is a significant result in characterizing NWChem performance. One weakness in the NWChem results is the prediction of an apparent unphysical peak at ~9646 eV for pure Zn metal. We believe this feature may be an artifact of the finite cluster size used to represent a metallic system. Next, we present the results for several Fe-rich samples in Figure 7, including comparison to QE and FEFF. Overall, we see good reproduction of the experiment by both theories. We note that the QE and FEFF calculations produce similar spectra with nearly identical splitting between the Kβ 2,5 and Kβ′′ peaks, the latter being a cross-over feature originating from ligand orbital with metal-p character. The magnitude of this splitting, however, appears to be slightly underestimated especially in the case of Fe 2 O 3 and Fe 3 O 4 . As with the Zn results, this is likely due to difficulties in correctly predicting the bandgap. In the case of FeS, QE misses the crossover peak entirely, which we believe to be an issue of the DFT misidentifying the character of the state. In general, the intensity of this feature tends to be under-predicted with respect to FEFF. Unlike with Zn, the good agreement between FEFF (which contains both dipole and quadrupole contributions) and QE (dipole contributions only) for the Fe materials suggests that quadrupole contributions are weaker for Fe. In Figure 8, we present a separation of each term in the FEFF output for Fe 2 O 3 , confirming this assertion. The intensity of the quadrupole features is similarly negligible across all Fe-rich samples. Again, we stress that the required energy shifts to align calculation with experiment are inconsistent across samples, with a relative spread 3.5 eV and 10.6 eV for QE and FEFF respectively. As seen from this dataset the relative positioning of VTC features is not fixed, with real, physical shifts occurring due to changes in chemical state, particularly oxidation. This deficiency is therefore a topic that must be addressed in order to establish a robust, ab initio interpretation of experimental spectra. The above results suggest several interesting results that can guide improved theoretical treatment and its comparison to experiment. First, the mixed dipole-quadrupole nature of the Kβ 5 feature is likely not specific to the present simple crystal structures and the question of the magnitude of possible quadrupole character of the Kβ 5 feature for other transition metal species should be considered, although it does appear to be weak for Fe in the present study. Second, and not unexpectedly, the compression of features due to underestimation of bandwidths will be a persistent issue in the treatment of VTC-XES with DFT, although some progress should be possible with, e.g., using a GW approximation for the quasiparticle energy shifts. [100,101] Finally, there do appear to be benefits to using time-dependent DFT-based approach. With the exception of pure Zn, where we believe cluster-size effects played a role, we see generally better agreement with the positions and amplitudes of the observed VTC-XES features in the Zn compounds. 
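For concreteness, the following sketch shows the kind of post-processing applied to each calculated spectrum before the comparisons above: the discrete transition energies and intensities ('sticks') are convolved with a Lorentzian of fixed FWHM and rigidly shifted onto the experimental energy scale. The stick values and the shift are invented for illustration; only the broadening widths echo those quoted in Section III.

```python
import numpy as np

def broaden_and_shift(stick_e, stick_i, grid, fwhm, shift):
    """Convolve discrete transition energies/intensities with a Lorentzian of the
    given FWHM and apply a rigid energy shift, for comparison with experiment."""
    gamma = fwhm / 2.0
    out = np.zeros_like(grid)
    for e0, i0 in zip(stick_e + shift, stick_i):
        out += i0 * (gamma / np.pi) / ((grid - e0) ** 2 + gamma ** 2)
    return out

# illustrative stick spectrum (transition energies in eV, arbitrary intensities)
sticks_e = np.array([7090.5, 7094.2, 7102.8])
sticks_i = np.array([0.4, 1.0, 0.25])
grid = np.linspace(7080, 7115, 700)

# e.g. a 4.5 eV FWHM (as used for the Fe FEFF results) and a per-sample alignment shift
broadened = broaden_and_shift(sticks_e, sticks_i, grid, fwhm=4.5, shift=1.2)
```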
Before concluding, it is useful to compare and contrast the present degree of agreement between theory and experiment with that observed in prior work conducted on molecular systems containing transition metals. In those studies, time-dependent DFT calculations were applied within the ORCA quantum chemistry package. [102] The full details of their methodology can be found elsewhere. [40,103] Overall, the strengths and weaknesses of these calculations in reproducing experiment match well with our own results discussed above. In general, the ORCA-DFT results have been reported to successfully track relative intensities of VTC features. [39,40,49,52] While the absolute energy scale can deviate significantly, the relative energy scale, both between features within a single spectrum and when comparing chemical shifts across samples, tends to show excellent agreement, much like the NWChem predictions. [40,52] In some cases, however, this code has been shown to predict features which are apparently absent in the experimental data, but that may be difficult to find in experiment due to limitations in removing the Kβ 1,3 background and due to the larger Poisson noise induced by this background. [52] V. Conclusions In summary, we have presented a high-quality VTC-XES dataset of several inorganic Zn and Fe compounds, together with a critical comparison against several existing DFT- and time-dependent-DFT-based theoretical treatments. Figure caption fragments: Figure 2: spectra collected at two different sample locations, with the residual intensity between the two curves shown (gray line); in the absence of the on-circle, 0.5-mm wide entrance slit, this misalignment would correspond to a relative shift of ~900 meV, whereas here the two spectra agree so well as to be nearly indistinguishable. Figure 5: experimental and calculated (QE, FEFF, and NWChem, purple) valence-to-core spectra for various Zn compounds; the theoretical results have been broadened as described in the text and shifted to align with the main peak, and experimental data have been offset as indicated. Figure 7: experimental and calculated (green) valence-to-core spectra for various Fe compounds; the theoretical results have been broadened as described in the text and shifted to align with the main peak, and experimental data have been offset as indicated.
2018-11-18T03:06:27.142Z
2017-06-27T00:00:00.000
{ "year": 2017, "sha1": "3d4b153991eff24ea911fc9088012f10356fee8f", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.96.125136", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "3d4b153991eff24ea911fc9088012f10356fee8f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
75140194
pes2o/s2orc
v3-fos-license
Clinical Decision-making among Emergency Physicians: Experiential or Rational? It has been postulated that everyone has an affinity for one of two cognitive approaches: experiential (intuitive) or rational (conscious). The aim of this study was to analyze the thinking processes of Saudi emergency physicians at nine hospitals in Riyadh. This was a cross-sectional study, which was undertaken in Riyadh using a psychometric tool called the Rational-Experiential Inventory-40. The survey, sent by e-mail to 202 emergency physicians, had a 53% response rate. Most respondents were male (86%). The total surveyed participants included consultants (36%), associate consultants (19%), registrars, fellow or staff physicians (7%), and residents (38%). The results found a mean (standard deviation) score of 3.73 (0.51) for rational approaches to decision-making and 3.09 (0.45) for experiential approaches among the emergency physicians surveyed. The difference of 0.64 between the two scores was not statistically significant (p = 0.23). Female emergency physicians tended toward slower logical thinking (rational). Consultant emergency physicians had a higher score for fast intuitive automatic thinking (experiential) than nonconsultant physicians. This was statistically significant, t(105) = 2.1, p = 0.04. Our results suggest that although both thinking styles are used in clinical decision-making, consultant emergency physicians prefer rational approaches to decision-making. INTRODUCTION Decision-making is an important yet complex task to be performed in any healthcare field. It relies on several mental processes, including perception, memory, and problem-solving skills [1]. Defects in any of these could lead to the medical errors seen in many areas of medicine, including the emergency department [2]. Understanding how decision-making happens, and what flaws could occur during the process, might help reduce medical errors. Decision-making is a cognitive process, and cognition is complex and hierarchical [3]. It starts with simple skill-based tasks that do not require much cognitive input compared with coordination skills [4]. Next come rule-based decisions that make use of clinical guidelines and diagnostic algorithms. More cognitive effort is needed for this, but a clinician can rely greatly on these rules [4]. At the top of the hierarchy is knowledge-based cognition, which involves clinical and diagnostic reasoning and requires a great deal of attentiveness to reach an appropriate end point in a given situation [4]. Many studies performed on healthcare errors and their characteristics have shown that a vulnerability exists in practice when a clinician is faced with a situation that requires integrating knowledge with a real-life situation [4,5]. Diagnostic errors appear to be one of the most common types of error, and are a property of knowledge-based cognitive behavior [2,6]. Sometimes other aspects are factored into the analysis of how a diagnostic error occurred, such as lack of information or false-negative results. However, these all stem from a cognitive error [7]. Other types of error exist in different clinical settings. For instance, when it comes to resuscitation in trauma, errors in clinical reasoning predominate, and have been attributed to failure to consider all available information in the disposition of trauma patients [8]. When it comes to errors, various elements could lead to adverse outcomes. 
One way to understand how they happen is to understand people's cognitive approaches to different situations. All cognitive approaches can be categorized as either experiential (intuitive) or rational (conscious), and it has been postulated that everyone has an affinity for one of these [9]. A study on emergency medicine physicians registered with the College of Physicians and Surgeons of Ontario reported that they favored rational thinking overall [9]. Having a similar understanding of the cognitive approaches of Saudi emergency physicians could help prevent medical errors and enhance patients' safety. Therefore, the aim of this study was to analyze the thinking processes of Saudi emergency physicians at nine hospitals and medical cities in Riyadh. Study Design and Sampling Technique Using a cross-sectional design, this study was undertaken in Riyadh, the capital city of Saudi Arabia, to assess whether emergency physicians favor an experiential or rational decision-making process. This was achieved using a previously published psychometric tool called the Rational-Experiential Inventory-40 (REI-40). The survey was sent to all the available Saudi emergency physicians working in the targeted hospitals, through their e-mail and personal social networks. A total of 202 physicians were contacted (53% response rate). The electronic survey (Survey Monkey Inc., San Mateo, CA, USA) was provided as a link that included a first page outlining the study objectives and the researchers' contact information, and clearly stating that anonymity would be guaranteed in the final reports and answers would be confidential. The study was approved by the Institutional Review Board, King Abdullah International Medical Research Centre, Ministry of National Guard - Health Affairs, Riyadh, Saudi Arabia. We included all Saudi emergency physicians, from junior residents to consultants. Physicians who were retired or whose contact information was missing and could not be reached through social networks were excluded. Survey Instrument All the physicians were asked to complete an electronic survey of two parts: demographic data (gender, position, and institution) and the REI-40 questionnaire, which aimed to differentiate between faster intuitive automatic thinking (experiential) and slower logical thinking (rational). This tool has been validated in many different populations including paramedics [10], cardiologists [11], and emergency physicians [9]. The Cronbach's α for the tool ranged from 0.74 to 0.91, indicating high internal consistency and reliability. Participants were asked to indicate their responses to 40 statements using a 5-point Likert scale, ranging from definitely false (1) to definitely true (5). Data Manipulation and Analyses Data manipulation and analyses were done using Microsoft Excel (Microsoft, Redmond, WA, USA) and SPSS Statistics for Windows (version 22.0; IBM Corp., Armonk, NY, USA). The REI-40 was scored based on a coding manual provided by the lead investigator of the instrument, which specified reverse coding for some of the statements. Categorical variables were reported using frequencies and percentages, whereas continuous variables were reported as mean [standard deviation (SD)] and presented as histograms. A 95% confidence interval was calculated for the difference between mean rational and mean experiential scores. An independent Student's t-test was used to assess the differences between means. All tests were considered statistically significant if the p-value was <0.05. 
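Although the scoring was done in SPSS, the procedure can be summarized in a few lines of code. The sketch below is illustrative only: the item names and the list of reverse-keyed items are placeholders (the actual reverse coding follows the instrument's coding manual), and the group comparison mirrors the independent t-tests described above.

```python
import pandas as pd
from scipy import stats

# df holds one row per respondent with items r1..r20 (rational) and e1..e20
# (experiential) on a 1-5 Likert scale, plus grouping columns such as gender.
REVERSED = ["r3", "r7", "e2", "e11"]          # hypothetical reverse-keyed items

def score_rei(df: pd.DataFrame):
    """Return mean rational and experiential scores after reverse coding."""
    items = df.copy()
    items[REVERSED] = 6 - items[REVERSED]      # reverse code: 1<->5, 2<->4
    rational = items[[c for c in items if c.startswith("r")]].mean(axis=1)
    experiential = items[[c for c in items if c.startswith("e")]].mean(axis=1)
    return rational, experiential

def cronbach_alpha(item_scores: pd.DataFrame) -> float:
    """Internal-consistency estimate for a block of items."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def compare_groups(scores: pd.Series, mask: pd.Series):
    """Independent-samples t-test between two groups (e.g., consultants vs. others)."""
    return stats.ttest_ind(scores[mask], scores[~mask])
```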
RESULTS Over the study period from September 2017 to January 2018, 107 emergency physicians responded to the survey. All the physicians were in one of the targeted aforementioned hospitals. Most participants were male (86%, or 92 physicians). This was slightly higher than the overall ratio between male and female physicians working in Saudi Arabia. According to the Ministry of Health's Yearly Statistics Book 2016 [12], the ratio between male and female physicians is 37:13. Of the participants, 36% were consultants; 19% associate consultants; 7% registrar, fellow, or staff physicians; and 38% residents. Participants' characteristics are summarized in Table 1. Emergency physicians' mean (SD) rational score was 3.73 (0.51) and the mean (SD) experiential score was 3.09 (0.45). The difference of 0.64 between the two scores was not statistically significant (Pearson correlation, p = 0.23). The distribution of the scores was normal, with some overlap. Figure 1 illustrates the rational and experiential score patterns. Female emergency physicians tended toward slower logical thinking, with mean (SD) rational scores of 3.80 (0.5) compared with 3.72 (0.58) for males. Mean (SD) experiential scores were higher for male physicians [3.10 (0.45)] compared with those for females [3.01 (0.46)]. However, these results were not statistically significant: t(105) = -0.55, p = 0.58 for the gender mean rational scores and t(105) = 0.77, p = 0.44 for the gender mean experiential scores. Consultant emergency physicians showed a greater capacity for fast intuitive automatic thinking [mean (SD) experiential score 3.21 (0.45)] than nonconsultant physicians [mean (SD) experiential score 3.02 (0.45)]. This was statistically significant, t(105) = 2.1, p = 0.04. Table 2 summarizes the comparison of mean REI-40 scores for 107 respondents on the basis of demographics. DISCUSSION The study examined the thinking processes among emergency physicians in nine Riyadh hospitals to evaluate whether their clinical decision-making is more experiential or rational. The results showed a slight difference between the emergency physicians' mean (SD) rational score of 3.73 (0.51) and mean (SD) experiential score of 3.09 (0.45) that was not statistically significant. This suggests that emergency physicians tend to exploit both experiential and rational decision-making approaches, but perhaps favor the rational style [13]. By contrast, Akinci and Sadler-Smith [14] found intuitive (experiential) thinking to be as accurate and effective as analytical thinking. Engebretsen et al. [15] posit that individuals tend to opt for rational decision-making when the risks are great, which several scholars affirm when it comes to working in the emergency department. The female emergency physicians were more inclined toward slow logical thinking, which is more analytical and deliberative, with a mean (SD) rational score of 3.80 (0.5) compared with 3.72 (0.58) for their male counterparts. The mean (SD) experiential score for males was 3.10 (0.45) compared with 3.01 (0.46) for females. Nonetheless, the differences in the gender mean rational scores and the gender mean experiential scores were not statistically significant. This finding was consistent with previous studies [15][16][17]. Numerous surveys have validated the REI-40 as a psychometric tool and, interestingly, several report that female participants favored experiential decision-making more than male respondents did [9,14,18]. A link between years of clinical experience and decision-making was evident in that consultant emergency physicians demonstrated fast intuitive automatic thinking with a mean (SD) experiential score of 3.21 (0.45), whereas nonconsultant physicians had a mean (SD) experiential score of 3.02 (0.45). This finding is in agreement with assertions that decision-making is often based on acquired knowledge as well as the use of approaches that were found to be more effective earlier, notwithstanding discrepancies between the present state and earlier ones [19]. The finding is consistent with the suggestion of McLaughlin et al. [20] that decision-making is a complicated process that differs widely among individuals based on social and context-specific influences. The results give crucial insight into how emergency physicians render decisions. This could be helpful in future research endeavors because it appears that accumulated experience over time plays a significant role in decision-making [21,22]. The evidence generated from this study could be useful when considering change management in the healthcare sector because scholars opine that those favoring rational decision-making tend to be more receptive to evidence-based medicine and knowledge translation efforts [23]. The research results assert the need for decision-support tools that are specifically designed to take into consideration both experiential and rational decision-making approaches. It is conceivable from the findings that male emergency physicians will react in a different manner from female physicians to particular decision-support tools. Physicians with dissimilar practice settings and diverse training backgrounds could also respond differently. Thus, it is critical to undertake careful refinement and specificity before choosing any tool [24]. The findings of the investigation also point to the value of the REI-40 as a self-assessment tool relevant to the clinical patient encounter, by making physicians cognizant of their decision-making approaches and the inherent inadequacies. When someone is cognizant of their general decision-making approach, they may be in a better position to engage in metacognition, described as the practice of "thinking concerning how to think," so as to tackle any noticeable cognitive biases [25,26]. Limitations One of the limitations of the study was the low response rate of 53%, which makes it difficult to generalize the results to other settings. The voluntary manner in which participants were recruited carries a risk of self-selection bias because the physicians were reached through their e-mail and personal social networks. The greater number of men in the sample might affect the findings. However, it may similarly epitomize a cultural perspective among the emergency physicians that rational decision-making is preferable to experiential decision-making. Moreover, it is possible that these conclusions were a result of social desirability bias. Furthermore, the sample of respondents was skewed toward male participants and is not representative of Saudi emergency physicians because, according to the Ministry of Health of Saudi Arabia in the Yearly Statistics Book 2016, the ratio of male to female physicians is 37:13. CONCLUSION This study evaluated the general decision-making approach of emergency physicians, showing that although both rational and experiential techniques are used in clinical decision-making, consultant physicians prefer rational decision-making. 
The results of this investigation have fundamental implications for evidence-based medicine as well as knowledge translation efforts. This study supports the implementation of strategies that are focused on reducing errors in decision-making. Both styles of clinical decision-making are very important and no approach is considered more valuable than another. Future researchers may need to consider evaluating the decision-making approaches of emergency physicians on a broader scale with a larger sample size that is more representative. This could generate data enabling the design of decision-support tools relevant to diverse groups of emergency physicians. CONFLICTS OF INTEREST The authors have no conflicts of interest to declare.
2019-03-13T13:27:00.236Z
2018-12-01T00:00:00.000
{ "year": 2018, "sha1": "beccf7c950337016c8711935825df751eb8b55a2", "oa_license": "CCBYNC", "oa_url": "https://download.atlantis-press.com/article/125905566.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "beccf7c950337016c8711935825df751eb8b55a2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
181826365
pes2o/s2orc
v3-fos-license
Does Sustainability Score Impact Mutual Fund Performance? Given that sustainable investing constitutes a major force across global financial markets, in 2016 Morningstar began reporting Morningstar Sustainability and ESG scores. We use these scores to study the effects of Socially Responsible Investments (SRI) on European equity fund performance. Sustainability score and the different pillars of ESG scores (environmental, social, and governance) impact negatively on performance. We also test the effect on mutual fund flows and risk. The sustainability score is significant on the flows, so higher-rated funds receive a larger volume of funds. In terms of risk, the level of sustainability is negatively related to the VaR (value at risk) of the fund, supporting that higher scored mutual funds offer better protection against extreme losses. Introduction Socially Responsible Investment (SRI), also known as sustainable, responsible, and impact investing, is "an investment discipline that considers environmental, social and corporate governance (ESG) criteria to generate long-term competitive financial returns and positive societal impact" (US SIF, n.d. i ). According to the 2016 Global Sustainable Investment Review (GSIA, 2017), in 2016 there were $22.89 trillion assets being professionally managed under SRI strategies in the world. Bilbao-Tero, Álvarez-Otero, Bilbao-Tero and Cañal-Fernández (2017) conclude that the SRI label in the mutual fund industry is valued favorably by the market, which is an important factor that drives this growth. Barreda-Tarrazona, Matalín-Sáez and Balaguer-Franch (2011) conclude that social preference (instead of financial performance) is the primary factor for investors choosing SRI mutual funds. The growing interest in SRI in recent years has led to several organizations assessing mutual funds on how well the underlying companies perform on ESG issues. In 2016, Morningstar launched a Morningstar Sustainability Rating. The idea of the Morningstar Sustainability Rating is classifying mutual funds about ESG factors relative to their Morningstar category peers. The advantage of this product is that it makes it possible to find sustainable funds even if they aren't labelling themselves specifically as funds that support an SRI approach. The use of these scores shows an important difference to previous studies, which compare SRI funds with an index, or the most advanced studies apply a so-called matching approach, i.e. they compare the performance of SRI and non-SRI investment funds with similar characteristics (fund size, fund age, expenses, et cetera.) to properly considered management and transaction costs for both SRI funds and conventional funds (see, among others, Mallin, Saadouni and Briston, 1995;Gregory, Matatko and Luther, 1997;Statman, 2000;Kreander, Gray, Power and Sinclair, 2002;Kreander, Gray, Power and Sinclair, 2005). One important research question in the mutual fund industry about SRI investing is to know how SRI mutual funds perform. There are several studies that have demonstrated that companies with social responsibility policies and practices are good investments. For example, a recent paper of Friede, Busch and Bassen (2015) conducted a meta-analysis of about 2,200 unique primary empirical studies. They found that the majority of studies show a positive correlation between ESG factors and financial performance. 
But despite the investigations carried out to date, there is still a debate about whether these types of investments can create value for investors or not and why they put their money here. Although according to Lewis and Mackenzie (2000) and Webley, Lewis and Mackenzie (2001), some investors in SRI funds are willing to accept lower returns for their moral stance, the performance of SRI funds and conventional funds is still an open question. As Junkus and Berry (2015) sustain, after a review of the most recent work in major finance journals on SRI, "the performance of SR mutual funds and indexes are not generally significantly different to conventional funds or indexes, but again these results are also highly dependent on model specification, time period, benchmark, and other characteristics of the study". Authors such as Luther, Matatko and Corner (1992) and Mallin, Saadouni and Briston (1995) support the idea that SRI funds outperform market indexes. But the more conventional theory is that SRI mutual funds have the same return as any other funds, and authors such as Hamilton, Jo and Statman (1993), Sinclair (2002, 2005), Gregory and Whitakker (2007) and Bauer, Derwall and Otten (2007), Humphrey, Warren and Boon (2016) and Syed (2017) are in line with this theory. Another theory defends that choosing SRI funds are basically a "trade off" between investing in SRI and returns, so SRI investments underperform the benchmark (for example, White, 1995). One important recent paper is Nofsinger and Varma (2014), which provides a new perspective because they found that the different between socially responsible (SR) and conventional mutual funds depends on the state of the market. SR mutual funds outperform conventional mutual funds during periods of market crisis, but in non-crisis periods, SR funds underperform conventional funds. Previous research has studied the effect of sustainability on performance exclusively using a dichotomous variable to differentiate between socially responsible funds and conventional funds. However, the results could be biased because under "socially responsible", they could have funds with very different levels of sustainability. Statman and Glushkov (2016) conclude that there is a lack of clearly defined criteria to distinguish mutual funds as "socially responsible" results in inconsistently applied classifications that make it difficult to measure the performance of SRIs. Traditional methodology in empirical research is benchmarking with indices or, most recently, matched pair analysis, which was initially applied by Maillin, Saadouni and Briston (1995) and is based on comparing returns of SRI funds and conventional funds with similar characteristics in terms of volume of assets, interception dates, et cetera. For this reason, the inclusion of sustainability scores in our work allows us to evaluate whether the degree of sustainability of the portfolio in which the funds are invested has a positive effect on performance. As far as we know, only Dolvin, Fulkerson and Krukover (2017) and El Ghoul and Karoui (2017) analyzed this effect. Dolvin, Fulkerson and Krukover (2017) conclude that funds with higher Morningstar Sustainability scores have similar alphas from those with lower Sustainability scores. Authors also observe that there is little difference in the performance or Sustainability scores between self-proclaimed SRI funds versus those that fall in the top 50 and top 20 percent of Morningstar's Sustainability scores. 
Finally, they observe that mutual funds with higher Morningstar Sustainability metrics do not appear to be more attractive to investors compared to low scoring funds. In contrast, self-proclaimed SRI funds have performed significantly better regarding fund flows. El Ghoul and Karoui (2017) use CSR (Corporate Social Responsibility) scores to study the effect on fund performance and flows, concluding that higher values display poorer performance and weaker performance-flow relation. From an investor point of view, the advantage of using Sustainability scores is that they can select their SRI, taking into consideration the funds with better scores, whether or not they are declared as an SRI fund ii . This paper adds to the growing literature on SRI by specifically examining the effect of the degree of sustainability, measured though Morningstar Sustainability scores included in Morningstar Direct in 2016. In particular, we assess the effect of sustainability scores and the different dimensions in which the score is subdivided (environmental, social, and governance) in the performance, in addition to the downside risk and the flow of funds. On the other hand, the conventional dichotomous variable has been added to the models to evaluate to what extent the results may differ. Our empirical evidence also contributes to the literature on mutual funds that discusses whether applying a particular investment screening in portfolio selection affects the mutual fund performance (see, for example, Bauer, Derwall andOtten, 2007 or Muñoz, Vargas andMarco, 2014). SRI portfolios are subject to both positive and negative social screens (Rivoli, 2003). cThe Portfolio theory argues that narrowing the universe of assets restricts diversification opportunities and thus the risk-adjusted performance (Rudd, 1981); whereas Hill, Ainscough, Shank and Manullang (2007) and Chegut, Schenk, and Scholtens, 2011) consider that restricting investment screening allows the identification of companies with higher growth potential and better management, therefore leading to a better financial performance and risk profile. Sustainable mutual funds apply a specific portfolio screening by concentrating investments in socially conscious businesses. Although there is profuse empirical literature on the impact of social responsibility of the performance, little is known about the sustainability-based screening. Our empirical results show that a large number of funds are not declared sustainable but their portfolio is comparable to sustainable mutual funds. Furthermore, the Sustainability score is significant in explaining the level of performance, downside risk, and flows. We also achieved equivalent results for the three dimensions of sustainability (environmental, social and corporate). The signs are different on performance and downside risk when the conventional dummy to declare social mutual funds is used. The remainder of this paper is laid out as follows. In Section 2, we review the related literature on SRI performance, in Section 3 we describe our data and the performance evaluation metrics, in Section 4 we describe our empirical methods and results, in Section 5 we conduct robustness tests and, finally, we draw conclusions from our research. Literature Review Over the last few years, SRI investment research has been growing. 
The CFA Institute, which is a global association for investment professionals, states that "a key idea in the discussion of ESG issues is that systematically considering ESG issues will likely lead to more complete analyses and better-informed investment decisions" and "that every investment analyst should be able to identify and properly evaluate investment risks, and ESG issues are a part of this evaluation" (CFA Institute, 2015). For this association, there are basically two investors interested in considering ESG issues: value-motivated and values-motivated investors. We focus on the first kind of investors concerned with the financial performance of their SRI funds. Hamilton, Jo and Statman (1993) developed three hypotheses regarding the performance of SRI mutual funds. The first hypothesis is that SRI fund performance equals that of conventional funds, which is consistent with a market that does not regard the social responsibility feature. The second hypothesis is that SRI fund performance is lower than that of conventional funds, which is consistent with a market that values the social responsibility feature. Finally, the third hypothesis is that SRI fund performance is higher than that of conventional funds. There are several arguments which could explain why SRI mutual funds can outperform, in financial terms, the conventional funds (which do not consider ESG factors). First, SRI mutual funds have a higher proportion of their portfolio in the segment of small companies; these companies are better adapted to market changes (Luther, Matatko and Corner, 1992;Gregory, Matatko and Luther, 1997) and may also be more profitable in the long run. Second, social companies are more efficient, better managed and develop better in the market (Hamilton, Jo and Statman, 1993). From a theoretical point of view (for example, Margolis, Elfenbein andWalsh, 2009 or Flammer, 2015), social companies can reduce costs (penalties, etcetera.) or increase revenues (innovative products, greater employee effort, better public perception, increasing the likelihood that consumers will purchase the company's products or its share price, attract socially conscious customers, etcetera.). In contrast, one important argument of the detractors of SRI funds is that the universe of possible investments of these funds (individual companies) is small, so they assume a higher investment risk because of the lack of diversity (Chegut, Schenk and Scholtens, 2011). Humphrey and Tan (2014) replicate 10,000 pairs of SRI and conventional portfolios to test the impact of SRI screening on performance, finding no significant difference in the risk-adjusted return of screened and unscreened portfolios. They conclude that a typical SRI fund will neither gain nor lose from screening its portfolio. But Trinks and Scholtens (2017) find that negative screening implies an opportunity cost, because excluding controversial stocks for an investment portfolio may reduce financial performance. Authors such as Kurtz (1997) or Goldreyer and Diltz (1999) argue that SRI mutual funds managers need more information than conventional funds about the companies in which they invest; they base their decisions on deeper, more complete, and higher quality information, resulting in a significant reduction in the risk of their investment decisions. Empirical evidence of some authors, such as Luther, Matatko and Corner (1992) and Maillin, Saadouni and Briston (1995), support the idea that SRI funds outperform conventional investments. 
But there is also evidence to support the idea that SRIs are neutral to financial performance (Hamilton, Jo and Statman, 1993;Kreander, Gray, Power and Sinclair, 2005;Gregory and Whittaker, 2007;Bauer, Derwall and Otten, 2007;among others), or that SRI funds underperform conventional investments (for example, White, 1995). The first study about SRI investment was done by Luther, Matatko and Corner (1992), where these authors found that SRI investment funds did not under or outperform the index benchmark. They used 15 British Ethical funds, finding weak evidence that 15 UK SRI funds outperformed two stock market indices. Hamilton, Jo and Statman (1993) conducted a similar study where the difference of means of excess returns was not significant and only one of 17 mutual funds had a positive Jensen`s alpha. Luther and Matatko (1994) improved their prior work by including a small market index and they concluded that the excess returns of SRI funds are strongly influenced by the low capitalization of the small cap stocks. The study also shows that SRI funds have a neutral effect on performance. White (1995) researches US and German mutual funds using a simple regression against an environmental market index, showing that the SRI investments underperform the benchmark in terms of three performance measures (Jensen`s alpha, the Treynor ratio, and the Sharpe ratio). In this research, the author used a sample of six US funds and five German SRI Investment funds. All previous studies used an index as benchmark, so they have the problem of what is the appropriate index. Mallin, Saadouni and Briston (1995) avoided this problem by using a matched pair analysis to compare SRI mutual funds and conventional funds in the UK. The authors matched 29 SRI mutual funds to conventional ones using the size and the age of the funds as criteria. Their results showed no differences in the performance of both samples using the Sharpe and Treynor ratios as performance measures, but they found that ethical funds did better than the non-ethical funds when the Jensen performance measure was used. Gregory, Matatko and Luther (1997) studied 18 SRI funds where the investment area and the fund type were considered. They did not find differences in performance against conventional funds. Statman (2000) studied the performance of 31 US SRI mutual funds and the Domini 400 Social-Index (DSI) from 1990 to 1998. The results show that only some SRI funds could underperform the benchmark (S&P 500 or DSI). But, in general, SRI funds obtained a similar performance to S&P 500, DSI, and conventional funds. Kreander, Gray, Power and Sinclair (2002) used a matching procedure and the age, size, country and investment universe of the fund as variables. The study included mutual funds from Sweden, the Netherlands, Norway, Germany, the UK and Switzerland, and Jensen`s alpha and the Sharpe and Treynor ratios as performance metrics. Their results showed that SRI funds' performance was very similar to those of conventional funds. Kreander, Gray, Power and Sinclair (2005) studied the performance of 30 European SRI funds from four countries, finding that there is no difference between SRI funds and conventional funds. Bello (2005) studied 42 SRI U. mutual funds; he found no evidence of a performance difference between SRI and conventional funds. Both underperformed the Domini 400 Social Index and S&P 500 during the study period (1994 -2001). 
Bauer, Koedijk and Otten (2005) investigated the performance of 32 British, 16 German and 55 US SRI funds, they used Jensen and Carhart´s alpha and found that German and US SRI mutual funds underperformed in both their relevant indexes and the conventional funds, whereas UK funds slightly outperformed, however the differences were not significant. Scholtens (2005) investigated the performance of Dutch SRI funds and found that these funds outperformed conventional funds but with no statistically significant difference. Also Barnett and Salomon (2006) studied 61 SRI funds tracked by the US Social Investment Forum (USSIF). They found that the relationship between financial and social performance is neither strictly negative, nor strictly positive. Instead, they found a curvilinear relationship, suggesting that the two viewpoints may be complementary. Riskadjusted performance varies with the types of social screens used. Community relations screening (excludes firms that do not invest in and/or develop economically depressed communities) increased financial performance, but environmental and labor relations screening (excludes firms with a record of poor environmental performance and firms with a record of poor labor relations practices, respectively) decreased financial performance. Bauer, Otten and Rad (2006) investigated the performance of Australian ethical funds, and Bauer, Derwall and Otten (2007) invested evidence from Canada, finding no statistical difference in performance between conventional and SRI funds. Gregory and Whittaker (2007), in the UK market, found that neither SRI nor non-SRI funds exhibited significant under performance. Renneboog, Ter Horst and Zhang (2008) found that SRI funds in the US, the UK, and in many continental European and Asia-Pacific countries underperformed their domestic benchmarks. However, with the exception of France, Japan, and Sweden, the risk-adjusted performance of SRI funds is not statistically different from the performance of conventional funds. Gil-Bazo, Ruiz-Verdú and Santos (2010) found that during the period 1997-2005, US SRI funds had better performance (gross and net Carhart´s alphas) than conventional mutual funds with similar characteristics. Authors find that the differences are driven exclusively by SRI funds run by management companies specializing in SRI, while funds run by companies not specializing in SRI underperform conventional funds Climent and Soriano (2011) randomly selected US-based large-cap equity mutual funds ( 25 are members of the SIF and 21 are conventional funds) finding there were no significant performance differences between conventional and SRI mutual funds employing Data Envelopment Analysis. Nofsinger and Varma (2014) found that SRI mutual funds outperformed conventional funds in the global financial crisis, so they can be an optimal choice for investors who want to protect themselves from downside risk. They also found that SRI funds underperform at other times. Leite and Cortez (2014) performed a multi-country study focused on 54 international SRI funds located in eight European markets (Austria, Belgium, France, Germany, Italy, the Netherlands, the UK, and Spain); they applied the five-factor model and found a similar performance between socially responsible funds and conventional funds. Muñoz, Vargas and Marco (2014) studied 89 European green funds and 18 US funds from 1994 to 2013. 
They applied the Carhart four-factor model and stated that, for the US market, green funds did not perform any worse than the market, but with a global equity portfolio green funds showed evidence of underperformance. Becchetti, Ciciretti, Dalo and Herzel (2015) found no clear-cut dominance over the entire period analyzed (1992-2012), but also found that SRI funds generally did better than conventional funds in the period following the global financial crisis of 2007. Leite and Cortez (2015), focusing on the French market, found that SRI funds underperformed slightly more than their matched samples according to different models, but differences in alphas are not statistically significant in most cases. They only found significance in one of the estimated models at the 10% significance level. Humphrey, Warren and Boon (2016) found that SRI managers have longer tenure and are more likely to be female, but they did not find any significant difference in the performance of SRI and conventional funds. Ibikunle and Steffen (2017) conducted a comparative financial performance analysis on European green, conventional, and black mutual funds; they concluded that there was no difference in the performance of the green and the conventional funds and that green funds are beginning to significantly outperform black funds. Dolvin, Fulkerson and Krukover (2017) is the only reference, to our knowledge, that employs Morningstar Sustainability scores in their analysis. The authors conclude that funds with higher Morningstar Sustainability scores have similar alphas from those with lower Sustainability scores. The authors also observe that there is little difference in the performance or Sustainability scores between self-proclaimed SRI funds versus those that fall in the top 50 and top 20 percent of Morningstar's Sustainability scores. Finally, El Ghoul and Karoui (2017) employed a CSR score, which is an asset-weighted composite CSR fund score. They showed the effects of CSR on fund performance; compared to low-CSR funds, high-CSR funds displayed a poorer performance. Sample Our sample contains 1,593 European equity funds rated by Morningstar Sustainability in November 2016. The funds are the "open funds" type with an ESG score in the investment area of Europe. Furthermore, to avoid problems of multicollinearity, we have selected only an equivalent class for each fund. We obtained for each equity mutual fund several measures of performance and other variables such as size, volatility, socially conscious, expenses, and age. We also used the Morningstar style-box to control the effect of the different categories which are included in the sample. The number of funds varied when we considered the costs where the sample reduces from 1,593 to 571 motivated for the lack of data available in Morningstar Direct. Variables construction Our sustainable variables have been obtained from Morningstar Direct (original source Sustainalytics iii ). We will employ five variables: three are the pillars scores [Environment score variable (Envscore), Social score variable (Socscore) and Government score variable (Govscore)], the fourth is the ESG score of a portfolio (ESGscore), and finally, the Portfolio Sustainability Score (Sustscore), which is the ESG score minus the Portfolio Controversy Score. 
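To make these variable definitions concrete, the sketch below derives the fund-level scores from the three pillar scores (environmental, social, governance) and the controversy score. The exact pillar weighting used by Sustainalytics/Morningstar is not given in the text, so the equal-weight combination and the column names here are assumptions for illustration only; Sustscore follows the definition just given (ESG score minus controversy score).

```python
import pandas as pd

def build_sustainability_variables(funds: pd.DataFrame) -> pd.DataFrame:
    """funds has one row per fund with columns Envscore, Socscore, Govscore
    and Controversy (all column names assumed for illustration)."""
    out = funds.copy()
    # illustrative equal-weight roll-up of the three pillars into an ESG score
    out["ESGscore"] = out[["Envscore", "Socscore", "Govscore"]].mean(axis=1)
    # Portfolio Sustainability Score = ESG score minus the controversy score
    out["Sustscore"] = out["ESGscore"] - out["Controversy"]
    return out
```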
In order to receive a portfolio sustainability score, a portfolio must have a portfolio ESG score and a portfolio controversy score; according to Morningstar (2016b), at least 50% of a portfolio's assets under management must have these scores (see also Bos, 2017). The Morningstar Portfolio ESG Score (ESGscore) iv is calculated as the asset-weighted average of the company-level ESG scores of the portfolio holdings: ESGscore = Σ_i w_i ESG_i, where w_i is the normalized portfolio weight of holding i and ESG_i is its company ESG score. We have divided our sample funds into two groups based on whether the ESG scores are below or above the median. Then, we estimated the means and their differences between both groups. Table 1 reports the results of the univariate analysis. As can be observed, the differences are very significant between the two groups for the different scores, with a difference of approximately five points in favour of the funds included in the high score group. The funds are classified into low or high groups depending on whether their score is above or below the median. The t-statistic for difference of means is reported in the third column. Sustscore is the level of sustainability of the mutual fund measured by Morningstar. ESGscore is the ESG score of a fund. EnvScore, Socscore and GovScore are the mutual fund scores for the three dimensions (environment, social and corporate governance). *Significant at 10%; ** significant at 5% and *** significant at 1%. This table reports the number of mutual funds classified as sustainable using two different dummy variables. Sustainabledummy is based on low or high sustainable scores depending on whether their score is above or below the median. Sociallyconscious is for those mutual funds declared as socially conscious. Performance variables We considered different performance measures from the Morningstar Direct database. Given that we only have ESG data available for December 2016, we have analyzed the performance and risk effects using the performance and risk metrics for the last two years, following Wimmer (2012), who showed that ESG scores persist for about two years and that changes are motivated by changes in the holdings of SRI mutual funds. In particular, we used the raw return and Sharpe ratios. We also computed Carhart's alphas based on values provided on Kenneth French's website v. The differences in performance between the high and low ESG scored funds are negative when considering raw returns, Sharpe ratios and two-year alphas (Table 3). That is, higher ESG scored funds show a poorer performance, except in the case of the one-year alpha. Our results are consistent with those achieved by El Ghoul and Karoui (2017) and Dolvin, Fulkerson and Krukover (2017) for US mutual funds. This table reports the values of the performance metrics indicated in the first column. Alpha (Carhart's alpha) and Sharpe (Sharpe ratio) are risk-adjusted returns calculated for two and one years, estimated at the end of 2016. Return is the raw measure of profitability. The data has been obtained from the Morningstar Direct database and Kenneth French's website. The funds are classified into low or high groups depending on whether their ESG score is above or below the median. The t-statistic for difference of means is reported in the third column. *Significant at 10%; ** significant at 5% and *** significant at 1%. Downside risk variables We also assessed fund performance by considering downside risk. Tail risk is commonly taken by mutual funds and it has been shown to be useful in explaining fund performance (Kelly and Jiang, 2014).
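As a concrete illustration of the median split and difference-of-means tests used in these univariate comparisons (Tables 1 and 3-5), the sketch below, with hypothetical fund data rather than the Morningstar sample, splits funds at the median ESG score and applies a Welch t-test.

```python
# A minimal sketch (not the paper's code) of the univariate comparisons:
# funds are split at the median ESG score and mean differences are tested with a t-test.
# Column names and values are assumed for illustration only.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
funds = pd.DataFrame({
    "ESGscore": rng.normal(58, 4, 500),        # hypothetical fund-level ESG scores
    "Return2y": rng.normal(0.06, 0.08, 500),   # hypothetical two-year raw returns
})

funds["HighESG"] = funds["ESGscore"] > funds["ESGscore"].median()

for col in ["ESGscore", "Return2y"]:
    hi = funds.loc[funds["HighESG"], col]
    lo = funds.loc[~funds["HighESG"], col]
    t, p = stats.ttest_ind(hi, lo, equal_var=False)   # Welch t-test for the difference of means
    print(f"{col}: mean(high)-mean(low) = {hi.mean() - lo.mean():.3f}, t = {t:.2f}, p = {p:.3f}")
```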
Specifically, we examined whether sustainable mutual funds are more or less exposed to tail risk by measuring mutual fund downside risk using the Value at Risk (VaR). VaR measures the maximum loss that a fund can incur over a given time period at a given confidence level (1 − p); it is defined by P(R_i ≤ −VaR_p) = p, the loss associated with the p-th percentile of the return distribution, and can be computed as VaR_p = −F_i^(−1)(p), where F_i is the return distribution of fund i. Table 4 shows the difference of means for downside risk measured by the historical monthly VaR at a 99% confidence level. The evidence for VaR reveals that highly scored mutual funds display less tail risk, but the difference is only statistically significant for the two-year measure. The funds are classified into low or high groups depending on whether their ESG score is above or below the median. The t-statistic for difference of means is reported in the third column. *Significant at 10%; ** significant at 5% and *** significant at 1%. Flow of funds We measure the flow of funds as Flow_{i,t} = [TNA_{i,t} − TNA_{i,t−1}(1 + R_{i,t})] / TNA_{i,t−1}, where TNA_{i,t} and TNA_{i,t−1} are the total net assets of fund i at the end of years t and t − 1, respectively, and R_{i,t} is the return of fund i in year t. Table 5 displays the difference of means for the flow of funds, showing positive differences for higher scored mutual funds (Table 5 values: −0.10, −0.07, −1.75*). This table reports the values of flow of funds obtained from the Morningstar Direct database. The funds are classified into low or high groups depending on whether their score is above or below the median. The t-statistic for difference of means is reported in the third column. *Significant at 10%; ** significant at 5% and *** significant at 1%. Descriptive statistics Table 6 shows the different variables considered in our work. As can be seen, the variables related to the level of sustainability have an average level close to 60 points, and the difference between the minimum and maximum is around 25 points. On average, the funds have a negative alpha despite yielding positive returns for the terms of 1 and 2 years. The average flow has been negative and the percentage declared to be socially responsible is very small (8%). The size is very variable, the expense ratio is greater than 1% because the mutual funds included invest in equity, and in general the funds have a high average age. Fund performance and Sustainability Scores In this part, we test if the degree of sustainability measured through ESG scores has a positive or negative effect on performance. In addition, we consider ESG scores to evaluate the contribution of each dimension to the portfolio performance. We propose the following model: Perf_i = α + β_1 Sustscore_i + β_2 Age_i + β_3 LossDev_i + β_4 LogSize_i + β_5 ExpRat_i + β_6 Sociallyconcious_i + Σ_c γ_c Category_{c,i} + ε_i, where Perf_i denotes the alternative performance metrics for fund i, with i = 1 through N and N the total number of funds in the sample; Sustscore_i is the sustainability score provided by Morningstar; Age is the number of years since the inception date; LossDev is the standard deviation of mutual fund returns; LogSize is the logarithm of the mutual fund market value; ExpRat is the net expense ratio of fund i; Sociallyconcious is a dummy for SRI mutual funds; Category denotes the category dummies (excluding the small style); β_1 through β_6 and γ_c are the regression parameters and ε_i is the error term. Our results show that Sustscore is significant in explaining the level of performance for all the metrics and terms. If we use ESG scores instead of Sustscore, the results are mainly the same. Most of the models present a negative sign, in line with El Ghoul and Karoui (2017) and Renneboog et al. (2008), who suggest that socially responsible mutual funds underperform other funds.
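A minimal sketch, with illustrative inputs rather than the study's data, of the two quantities defined in this section: the historical VaR at a given confidence level and the annual flow of funds.

```python
# Sketches of the historical VaR and the fund-flow measure defined above.
# Inputs are hypothetical; this is not the paper's code.
import numpy as np

def historical_var(returns, p=0.01):
    """VaR_p = -F^(-1)(p): the loss at the p-th percentile of the return distribution."""
    return -np.quantile(returns, p)

def fund_flow(tna_t, tna_prev, ret_t):
    """Flow_{i,t} = (TNA_{i,t} - TNA_{i,t-1} * (1 + R_{i,t})) / TNA_{i,t-1}."""
    return (tna_t - tna_prev * (1.0 + ret_t)) / tna_prev

monthly_returns = np.random.default_rng(1).normal(0.005, 0.04, 24)  # hypothetical 2-year return history
print("99% monthly VaR:", round(historical_var(monthly_returns, p=0.01), 4))
print("Annual flow:", round(fund_flow(tna_t=120.0, tna_prev=100.0, ret_t=0.08), 4))
```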
The dummy variable is also significant, showing that considering the level of sustainability can help to better understand the relationship between performance and social responsibility. Our results support Statman and Glushkov (2016), who conclude that the lack of clearly defined criteria to distinguish mutual funds as socially responsible affects the results of previous research based on dichotomy variables. Among the control variables, Table 7 shows that volatility and the expense ratio, but only in some models, are negatively related to performance, while size and age are not significant. This table reports the coefficients for the regression models for different performance measures. Alpha is the Carhart's alpha measure; Sharpe is the yearly risk-adjusted return and Return is the total net return. Sustscore is the level of sustainability of the fund provided by Morningstar and Sociallyconcious is the common dummy variable used to analyse socially conscious mutual funds. N is the number of observations and r2 the R-squared fit measure. The category dummies have been included and compared with the small mutual funds of the Morningstar Style Box. *Significant at 10%; ** significant at 5% and *** significant at 1%. Using the different elements in which ESG scores are subdivided, we have achieved similar results, finding in most models a negative relation between the dimensions of sustainability and performance (Table 8). Again, those mutual funds with higher environmental scores reduce the level of performance, adjusted and non-adjusted, in five of the six models estimated. For the other dimensions (social and governance) the results are quite similar, concluding that, in general, the effects of the different dimensions have a negative impact on alternative performance metrics. (Table 8 values: 0.1128, 0.2555, 0.3358, 0.4698, 0.1067, 0.6361.) This table reports the coefficients for the regression models for different performance measures. Alpha is the Carhart's alpha measure; Sharpe is the yearly risk-adjusted return and Return is the total net return. Sociallyconcious is a dummy variable used to analyse socially conscious mutual funds. N is the number of observations and r2 the R-squared fit measure. *Significant at 10%; ** significant at 5% and *** significant at 1%. Downside risk and sustainability scores In this part, we test if the degree of sustainability measured through ESG scores and their components has a positive or negative effect on the historical VaR of the portfolio. We used the following model: VaR_i = α + β_1 Sustscore_i + β_2 Age_i + β_3 LossDev_i + β_4 LogSize_i + β_5 ExpRat_i + β_6 Sociallyconcious_i + Σ_c γ_c Category_{c,i} + ε_i. As Table 9 shows, the downside risk of mutual funds is affected by the level of sustainability (ESG score). Specifically, we observed how the variable Sustscore is negatively and significantly related to the VaR of the fund at a 99% confidence level in both the one- and two-year terms. These results support that funds with a higher degree of sustainability better protect investors against extreme losses. As Kurtz (1997) or Goldreyer and Diltz (1999) explain, SRI mutual fund managers base their decisions on deeper, more complete, and higher quality information, resulting in a significant reduction in the risk of their investment decisions. On the other hand, the dichotomous variable commonly used has a positive sign, opposite to that resulting from using a continuous variable. We also made the analysis for the different sub-factors, observing again a negative and significant relationship for most of the estimated models.
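To make the cross-sectional specifications above concrete (the performance and VaR models share the same right-hand side), the following sketch estimates such a model on simulated data with statsmodels. Variable names mirror those in the text, all values are illustrative, and this is not the authors' code.

```python
# A sketch of the cross-sectional specification: a performance metric (or, analogously,
# the VaR) regressed on the sustainability score, fund controls, a socially-conscious
# dummy, and category dummies with the small style as the reference category.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "Alpha2y": rng.normal(-0.01, 0.03, n),          # hypothetical two-year alphas
    "Sustscore": rng.normal(47, 5, n),
    "Age": rng.integers(2, 30, n).astype(float),
    "LossDev": rng.uniform(0.05, 0.25, n),
    "LogSize": rng.normal(18, 2, n),
    "ExpRat": rng.uniform(0.5, 2.5, n),
    "Sociallyconscious": rng.integers(0, 2, n),
    "Category": rng.choice(["Large", "Mid", "Small"], n),
})

model = smf.ols(
    "Alpha2y ~ Sustscore + Age + LossDev + LogSize + ExpRat"
    " + Sociallyconscious + C(Category, Treatment('Small'))",
    data=df,
).fit()
print(model.params.round(4))
```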
As can be seen in Table 9, the increase in the level of environmental, social, and governance sustainability reduces the level of extreme losses of investment funds. It is again observed that the dummy variable is significant and positively related to the level of risk. From this analysis, we observed that the results of evaluating the effect of sustainability based on dichotomous variables may yield contradictory results to those obtained when continuous variables are used. This table reports the coefficients for the regression models. VaR is the maximum loss that a fund i can obtain for a given time period and a given confidence level. Sustscore is the level of sustainability of the mutual fund measured by Morningstar. Envscore, Socscore and Govscore are the mutual fund scores for the three dimensions (environment, social and corporate governance). Sociallyconcious is a dummy variable used to analyse socially conscious mutual funds. N is the number of observations and r2 the R-squared fit measure. *Significant at 10%; ** significant at 5% and *** significant at 1%. Flows and sustainability scores In this section, we analyze the effect of sustainability on the flows of investment funds. In particular, flows of sustainable funds are generally considered to be less sensitive to changes in performance because investors value other elements in their utility function. Benson and Humphrey (2008) and Renneboog et al. (2011) obtained evidence in favor of greater stability in flows for sustainable funds, while Bollen (2007) found that SRI mutual funds are more sensitive to positive returns and less to negative ones. In line with Dolvin et al. (2017) and El Ghoul and Karoui (2017), we argue that funds with higher ESG scores attract more conscious investors, who are less worried about performance and therefore the flows are less sensitive to past performance. Thus, we estimate the following model to evaluate the effect of sustainability on the flow of funds using the different performance metrics (alpha, Sharpe, raw return), the sustainability score, and the interaction of the product (SustPerf: sustsharpe, sustalpha or sustreturn): Flow_i = α + β_1 Perf_i + β_2 Sustscore_i + β_3 SustPerf_i + β_4 Age_i + β_5 LossDev_i + β_6 LogSize_i + β_7 ExpRat_i + β_8 Sociallyconcious_i + Σ_c γ_c Category_{c,i} + ε_i, where SustPerf is the product of Sustscore and the Sharpe ratio (sustsharpe), alpha (sustalpha) or net return (sustreturn), depending on the model. On the other hand, in Model 3, the sustainability score is also significant, so that higher-rated funds received a larger volume of funds than those with a lower score. This fact shows that the degree of sustainability stimulates fund raising, and more so when the degree of sustainability is higher. Also, when we analyzed the effect of the sustainability dummy variable (Sociallyconcious), it is significant in all models, which confirms the importance of sustainability in attracting investors interested in funds that are declared sustainable. This fact can be related to both greater social awareness and expectations of greater profitability in SRIs. Finally, the negative sign of the interaction variable (sustreturn) shows the lower sensitivity of sustainable funds, supporting the results found by Dolvin et al. (2017) and El Ghoul and Karoui (2017) using alternative metrics and US funds. This table reports the coefficients for the regression models. Alpha is the Carhart's alpha measure; Sharpe is the yearly risk-adjusted return and Return is the total net return. Sustscore is the level of sustainability of the mutual fund measured by Morningstar.
Sociallyconcious is a dummy variable used to analyse socially conscious mutual funds. N is the number of observations and r2 the R-squared fit measure. *Significant at 10%; ** significant at 5% and *** significant at 1%. Robustness We conducted some additional robustness tests to check the consistency of our results and to provide other complementary analyses. We checked whether performance may differ according to the fund manager's skills, considering the quantiles of different performance measures; differences in the quantiles would indicate differences in the fund manager's ability to deal with performance. Quantile regression allowed us to capture information about the coefficients at different quantiles of the dependent variable given the set of explanatory variables. In addition, the conditional quantile regression developed by Koenker and Bassett (1978) successfully deals with skewed distributions of fund performance. In particular, we adopted the bootstrapping method proposed by Efron (1979) and implemented in the software Stata 12. Given Perf_i as the different performance metrics used in this paper (alpha, Sharpe and returns), and x_i as a vector of exogenous variables representing the sustainability score of each mutual fund and other controls, the quantile model can be written as Perf_i = x_i'β_τ + ε_{τ,i}, assuming that Quant_τ(ε_{τ,i} | x_i) = 0, so that Quant_τ(Perf_i | x_i) = x_i'β_τ. Table 11 reports quantile parameter estimates for three different adjusted risk-return performances. Our evidence for all quantiles confirms no differences in the results, and sustainability seems to be important independent of the level of performance analysed. We also calculated the models excluding the expense ratio because this variable has many blanks and reduces the sample. After the calculations, we again observed no differences with the models presented in the previous empirical analyses. Finally, we recalculated the models for each category and we obtained different results depending on the category, concluding that on average the effect is negative on performance but specific for each category. (Table 11: N = 571, 570, 571, 570, 541 and 541 across the models.) This table reports the coefficients for the quantile regression models (q25 or lower quartile, q50 or median and q75 or upper quartile). Sustscore is the level of sustainability of the mutual fund measured by Morningstar. Sociallyconcious is a dummy variable used to analyse socially conscious mutual funds. N is the number of observations. *Significant at 10%; ** significant at 5% and *** significant at 1%. Conclusion In Europe, SRI strategies grew by 11.7% from 2014 to 2016 to reach $12.04 trillion (GSIA, 2017). Traditional studies focus their work on mutual funds which declare themselves as funds that support an SRI approach. One important limitation of this approach is that results could be biased, because SRI mutual funds could have different levels of sustainability and differences with conventional funds may not be significant. Recently, Morningstar launched the Morningstar Sustainability Score to classify mutual funds. The use of sustainability scores in our work allows us to evaluate the effect of the degree of sustainability on performance, risk, or flows of European equity mutual funds. Our results show that there are a large number of funds that are not declared sustainable but whose portfolios are comparable to those of sustainable mutual funds. Furthermore, the Sustainability Score is significant in explaining the level of performance for all the metrics analysed (alpha, Sharpe, and net return), and has a negative sign in most models.
Using a conventional dummy to declare social mutual funds, the results are significant but with the opposite sign, showing that considering the level of sustainability can help to better understand the link between performance and social responsibility. Our results are in accordance with Statman and Glushkov (2016), who concluded that the lack of clearly defined criteria to distinguish SRI mutual funds affected the results. Also, we obtained similar results to El Ghoul and Karoui (2017) for the US mutual funds market. Using the different pillars of ESG scores (environmental, social, and governance), we were able to establish a negative link between the dimensions of sustainability and performance, showing that all the dimensions play an important role in explaining performance. Our results are consistent with the idea that investors are paying a premium for investing in high scored mutual funds. In terms of downside risk, the level of sustainability is negatively and significantly related to the VaR of the fund, supporting that higher scored mutual funds better protect against extreme losses. The opposite is found for the conventional dummy, showing the advantages of employing a quantitative measure of sustainability to evaluate assets' risk. This result could mean that SRI mutual fund managers base their decisions on deeper analyses, resulting in a significant reduction in the risk of their investment decisions. Our work shows that sustainability scores can be used by investors worried about extreme losses and not only by values-motivated investors. Finally, we analyzed the effect of sustainability on the flows, confirming the importance of sustainability in attracting investors. The effect of the sustainability dummy variable is significant in all models. Unadjusted returns and Carhart's alphas have a positive influence on investment decisions. The sustainability score is significant for the flows, so higher-rated funds received a larger volume of funds. Finally, the negative sign of the interaction variable (product of sustainability and return) shows the lower sensitivity of sustainable funds, reflecting the different sensitivity to performance of values-motivated investors. Future research will benefit from the increasing amount of data to make empirical studies based on sustainability criteria. Unfortunately, due to data limitations, Morningstar Sustainability scores are only available from 2016 onwards, so our sample assumes the score is constant prior to 2016. Another limitation of our work is that there may be some survivorship bias, but since our sample only includes two years, this bias should be very small.
2019-06-07T23:15:54.934Z
2019-05-24T00:00:00.000
{ "year": 2019, "sha1": "680cb6e0c3d63bbc6e4a3df34d19d2131f52f4fb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/11/10/2972/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "3971d75e556b50e6897115b8bd91cb94285602a6", "s2fieldsofstudy": [ "Business", "Economics", "Environmental Science" ], "extfieldsofstudy": [ "Business" ] }
52194640
pes2o/s2orc
v3-fos-license
Impact of body mass index on survival of medical patients with sepsis: a prospective cohort study in a university hospital in China Objective To evaluate the impact of body mass index (BMI) on survival of a Chinese cohort of medical patients with sepsis. Design A single-centre prospective cohort study conducted from May 2015 to April 2017. Setting A tertiary care university hospital in China. Participants A total of 178 patients with sepsis admitted to the medical intensive care unit (ICU) were included. Main outcome measures The primary outcome was 90-day mortality while the secondary outcomes were in-hospital mortality, length of ICU stay and length of hospital stay. Results The median age (IQR) was 78 (66–84) years old, and 77.0% patients were older than 65 years. The 90-day mortality was 47.2%. The in-hospital mortality was 41.6%, and the length of ICU stay and hospital stay were 12 (5–22) and 15 (9–28) days, respectively. Cox proportional hazard regression analysis identified that Sequential Organ Failure Assessment score (HR=1.229, p<0.001), Acute Physiology and Chronic Health Evaluation II score (HR=1.050, p<0.001) and BMI (HR=0.940, p=0.029) were all independently associated with the 90-day mortality. Patients were divided into four groups based on BMI (underweight 33 (18.5%), normal 98 (55.1%), overweight 36 (20.2%) and obese 11 (6.2%)). The 90-day mortality (66.7%, 48.0%, 36.1% and 18.2%, p=0.015) and in-hospital mortality (60.6%, 41.8%, 30.6% and 18.2%, p=0.027) were statistically different among the four groups. Differences in survival among the four groups were demonstrated by Kaplan-Meier survival analysis (p=0.008), with the underweight patients showing a lower survival rate. Conclusions BMI was an independent factor associated with 90-day survival in a Chinese cohort of medical patients with sepsis, with patients having a lower BMI at a higher risk of death. CI 0.72-1.10, p = 0.29) and morbidly obese (OR 0.64, 95% CI 0.38-1.08, p = 0.09) patients who did not exhibit significantly reduced mortality compared with normal weight patients [19]. In a large and nationally representative sample of over 1,000 hospitals in the US, obesity was found to be significantly associated with a 16% decrease in the odds of dying among sepsis patients who were hospitalized [20]. Underweight patients with sepsis may be more common in developing countries than developed countries. In the present study, the percentages of underweight, normal weight, overweight and obese patients were 18.4%, 55.3%, 20.1%, and 6.1%, respectively, while those with sepsis in a study in Canada and the US represented 6.8%, 35.3%, 28.3%, and 29.0% [6]. Being underweight was found to be one of the independent risk factors of mortality in a study on the correlation between surgical site infection and mortality [10]. Furthermore, Lee et al [11] also reported that being underweight was associated with mortality in patients with severe sepsis and septic shock. Consequently, previous studies on sepsis have shown that overweight and obese patients have a decreased risk for mortality, and underweight patients may have a higher mortality. However, BMI has not been shown to be an independent factor for clinical outcomes by multivariable analyses. In our cohort of medical patients with sepsis, which mainly included the elderly and less obese patients, BMI was identified as an independent factor for survival. The mechanism of the correlation between BMI and mortality of sepsis is unclear. There are several potential reasons that could explain this. First, higher BMI resulted in more fat reserves, and patients could have a greater capacity to cope with the inflammatory response during sepsis and sepsis-associated acute lung injury [21][22][23].
Furthermore, they may be able to tolerate extensive weight loss and dysfunction associated with critical illness [24]. Secondly, a higher BMI can lead to an increased level of lipoproteins. High-density lipoproteins may not only bind and inactivate lipopolysaccharide (LPS) or other harmful bacterial products released during sepsis [25], but also modulate adhesion molecule expression, upregulate endothelial nitric oxide synthase, and counteract oxidative stress [26]. Thirdly, higher BMI can lead to increased adipose tissue deposition. Adipose tissue is increasingly being considered as a functional endocrine organ and is associated with increased renin-angiotensin system activity [27]. It appears to have protective hemodynamic effects during sepsis and may decrease the need for fluid or vasopressor support [18,28]. This may be the reason why the percentages of patients There were several limitations to our study. Firstly, the BMI of our patients ranged from 12.11 to 32.46. Morbidly obese patients were not included in the study, although morbidly obese patients are not common in this country. Secondly, the present study used weight ascertained at ICU admission, rather than the patient's true outpatient weight. This practice may misclassify the BMI category in as many as 21.9% of patients due to lack of fluid balance adjustment [31]. Lastly, although 178 patients were included in this prospective study, it was still difficult to avoid sample-related bias, because a large proportion of our patients were older than 65 years. Body mass index (BMI) is a simple index of weight-for-height that is commonly used to classify whether adults are underweight, overweight or obese [4]. Several studies have examined the effects of BMI on mortality with conflicting conclusions. As the relationship between BMI and clinical outcomes of sepsis is complex, which may be related partly to differences in patient characteristics, we therefore set out to evaluate prospectively the impact of BMI on survival in a cohort of medical patients with sepsis admitted to the medical ICU in a university hospital. Figure 1 shows the patient-selection process. In total, 178 medical patients with sepsis were included in this study, with male patients accounting for 65.2% (n=116).
In general, sex has not been found to be an independent predictor for survival in patients with sepsis, which is the same as the results of our current study. But in some special populations, for example in liver cirrhosis patients with bloodstream infection, male sex may be an independent risk factor for mortality [31]. As the relationship between BMI and clinical outcomes of sepsis may be related partly to differences in patient characteristics, we therefore set out to evaluate the impact of BMI on survival in a cohort of medical patients with sepsis, which is different from surgical septic patients. Ranieri et al [32] reported that the primary sites of infection in adults with septic shock were lung (43.9%), abdomen (30.0%), urinary tract (12.3%), skin (5.5%) and other sites (8.3%). Scheer et al [33] found that the most common primary site of infection was different between medical and surgical patients. This was a prospective cohort study, which was conducted in the medical ICU of a university hospital.
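As an illustration of the two analyses summarized above, the sketch below fits a Cox proportional-hazards model for mortality with BMI, SOFA and APACHE II as covariates and compares survival across BMI categories with a log-rank test. The data are synthetic, not the study's records, and the WHO cut-points (18.5, 25 and 30 kg/m2) are assumed for the grouping since the exact cut-offs are not restated here.

```python
# A sketch (synthetic data) of a Cox proportional-hazards model for 90-day mortality
# and a Kaplan-Meier-style group comparison across BMI categories via log-rank test.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

rng = np.random.default_rng(7)
n = 178
df = pd.DataFrame({
    "bmi": rng.normal(22, 4, n),              # hypothetical BMI values
    "sofa": rng.integers(2, 15, n),           # hypothetical SOFA scores
    "apache2": rng.integers(10, 35, n),       # hypothetical APACHE II scores
    "time": rng.integers(1, 91, n),           # days of follow-up, capped at 90
    "death": rng.integers(0, 2, n),           # 1 = died, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
print(cph.summary[["exp(coef)", "p"]])        # hazard ratios for bmi, sofa, apache2

# Assumed WHO cut-points for illustration: <18.5 underweight, 18.5-24.9 normal,
# 25-29.9 overweight, >=30 obese.
df["bmi_group"] = pd.cut(df["bmi"], bins=[0, 18.5, 25, 30, np.inf],
                         labels=["underweight", "normal", "overweight", "obese"], right=False)
res = multivariate_logrank_test(df["time"], df["bmi_group"], df["death"])
print("log-rank p-value across BMI groups:", round(res.p_value, 4))
```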
2018-09-16T08:12:25.118Z
2018-09-01T00:00:00.000
{ "year": 2018, "sha1": "5fd29d94071e8df5f5201bf932f9e020fe5318a2", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/bmjopen/8/9/e021979.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5fd29d94071e8df5f5201bf932f9e020fe5318a2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270607266
pes2o/s2orc
v3-fos-license
A Radiomic Approach for Evaluating Intra-Subgroup Heterogeneity in SHH and Group 4 Pediatric Medulloblastoma: A Preliminary Multi-Institutional Study Simple Summary Medulloblastoma (MB) is the most common malignant brain tumor in children and has a dismal prognosis. A challenge with MB is identifying patients who could be candidates for reduced doses of radiation therapy, but are still treated effectively, as well as those that need intensified doses. Recently, MB was classified into four molecular subgroups with distinct clinical outcomes (WNT, SHH, Group 3, and Group 4). Though two of these subgroups (SHH and Group 4) are known for their intermediate prognosis, wide disparities of outcomes have been reported within each of these subgroups. This work aims to develop a prognostic signature using radiomics (computationally derived tumor measurements), acquired on MRI scans, to risk-stratify patients within the SHH and Group 4 subgroups. Our signature includes two key attributes that capture aspects of the disease microenvironment. We believe that our signature will provide a better understanding of the disease’s heterogeneity and, hence, develop better personalized treatment plans. Abstract Medulloblastoma (MB) is the most frequent malignant brain tumor in children with extensive heterogeneity that results in varied clinical outcomes. Recently, MB was categorized into four molecular subgroups, WNT, SHH, Group 3, and Group 4. While SHH and Group 4 are known for their intermediate prognosis, studies have reported wide disparities in patient outcomes within these subgroups. This study aims to create a radiomic prognostic signature, medulloblastoma radiomics risk (mRRisk), to identify the risk levels within the SHH and Group 4 subgroups, individually, for reliable risk stratification. Our hypothesis is that this signature can comprehensively capture tumor characteristics that enable the accurate identification of the risk level. In total, 70 MB studies (48 Group 4, and 22 SHH) were retrospectively curated from three institutions. For each subgroup, 232 hand-crafted features that capture the entropy, surface changes, and contour characteristics of the tumor were extracted. Features were concatenated and fed into regression models for risk stratification. Contrasted with Chang stratification that did not yield any significant differences within subgroups, significant differences were observed between two risk groups in Group 4 (p = 0.04, Concordance Index (CI) = 0.82) on the cystic core and non-enhancing tumor, and SHH (p = 0.03, CI = 0.74) on the enhancing tumor. Our results indicate that radiomics may serve as a prognostic tool for refining MB risk stratification, towards improved patient care. 
Introduction One of the biggest challenges in pediatric medulloblastoma (MB), the most frequent malignant brain tumor in children, is the accurate risk stratification of patients, which serves as a key determinant in accurate treatment pathways. MB accounts for 20% of all pediatric intracranial tumors and its overall survival remains inadequate with a five-year survival rate of around 70-75% [1]. When relapse occurs, it is nearly always fatal, making the proper selection of upfront chemotherapy and craniospinal irradiation (CSI) based on risk stratification even more important [2]. The current clinical risk stratification approach is Chang's classification, which classifies patients into standard/average- and high-risk, using age and clinical parameters such as the extent of the resection and the presence of metastases [3,4]. Recent efforts in molecular profiling and gene expression resulted in categorizing MB patients into four unique subgroups (WNT, SHH, Group 3, and Group 4) [5], which have been used in conjunction with Chang's classification in clinical trials for risk-adapted treatments [5]. Unfortunately, with molecular classification, wide disparities among patients have been reported, revealing intra-subgroup heterogeneity for some of these subgroups [6,7]. For instance, recent studies reported a dismal prognosis in patients of the SHH group with the TP53 mutation (five-year overall survival of 41% ± 9%), but more favorable outcomes were observed in younger patients of the same subgroup with TP53 wild-type tumors (five-year survival rate of 81% ± 5%) [7][8][9]. Similarly, while Group 4 MB patients are considered to generally have an intermediate prognosis, studies have demonstrated that there is a wide variation in patient outcomes within this subgroup [8][9][10]. In the absence of approaches that can predict the patients' outcomes precisely and resolve the extensive heterogeneity that MB tumors are known for, the need is underscored for complementary risk stratification tools that can resolve the intra-subgroup heterogeneity and identify the different risk levels within each subgroup. This may provide further insights in personalized treatment plans (therapy intensification/de-escalation) beyond and complementary to molecular profiling or Chang's stratification.
Radiomics has recently emerged as a powerful tool to quantitatively analyze medical images by extracting feature attributes that capture the unique cues of the tumor microenvironment, reflecting the hallmarks of the tumor biology [11]. Of note, there has been extensive work carried out that explored radiomics in adult brain tumors for several clinical problems, including diagnosis, prognosis, and predicting treatment response [12][13][14]. However, these approaches have only been recently exploited in pediatric brain tumor research, with the primary focus being utilizing clinical predictors or gene expression profiling for pediatric tumors [15]. Recent works in the literature exploited traditional radiomic features (textural and morphological) for survival analysis and identifying molecular subtypes in MB [1,[16][17][18][19][20][21][22][23][24][25][26][27][28][29]. For instance, the studies in [16][17][18][19][20] attempted to employ statistical techniques, such as univariate and multivariate logistic regression, to predict survival in MB. Regression models (e.g., LASSO) have been incorporated in these works, followed by a Kaplan-Meier estimate for risk stratification. Similarly, the studies in [1,[20][21][22][23][24][25][26] employed several machine-learning classifiers (e.g., SVM), along with some statistical techniques, such as logistic regression and ANOVA, to predict the four molecular subgroups in MB. While the aforementioned approaches investigated some of the clinical challenges with MB, there is a lack of approaches that utilize radiomics to predict the different outcomes within the same molecular subgroup, notably SHH and Group 4. This work presents a radiomic descriptor, "medulloblastoma radiomics risk" (mRRisk), to capture the intra-subgroup heterogeneity within the subgroups that are known for wide disparities in outcomes, namely, Group 4 and SHH patients. Our goal is to provide a non-invasive prognostic tool that can capture the tumor micro-environment and its substantial biological heterogeneity on imaging. This can enable us to provide more robust, personalized prognostic insights. mRRisk will include image features pertaining to (1) 3D textural and entropy features to capture micro-architectural differences within the tumor confines which could reflect the disorderly nature and high heterogeneity of aggressive tumors, and (2) 3D surface-based topological features extracted from the tumors which provide cues regarding surface irregularities and lesion aggressiveness. In addition, our pipeline uniquely accounts for the developing anatomy of pediatric brains, by registering the subject brains to age-appropriate atlases. Our rationale in this work is that mRRisk features, encompassing the intra- and peri-tumoral confines, can provide a comprehensive analysis of the tumor heterogeneity and aggressiveness, teasing out the differences between low- and higher-risk patients. This can enable further stratification within the individual molecular subgroups and, hence, allow for personalized treatment regimens with targeted therapies.
Overview Figure 1 illustrates the pipeline of our proposed work. Following pre-processing and tumor segmentation, we extract radiomic features that quantify both the morphological and textural attributes of the pediatric MB tumors. Following feature extraction, our prognostic signature, mRRisk, is constructed by concatenating the two feature families into one vector. Statistical methods were then applied on mRRisk, particularly, multivariate logistic regression (including Elastic Net, LASSO regression, and ridge regression) for feature pruning and selecting the features that are statistically significant in risk stratification while shrinking those that do not contribute to risk assessment. The top features were then employed to create a continuous survival risk score that stratifies all the patients into low- and high-risk groups, within each molecular subgroup (SHH and Group 4). In Figure 1, T and S stand for texture features and shape features, respectively.
Data Curation Our analysis was conducted on a total of 70 pediatric MB subjects (48 in Group 4 and 22 in the SHH subgroup), retrospectively curated from three institutions. Pre-Processing and Feature Extraction All the Gd-T1w images were bias-corrected to remove the scan inhomogeneities [30]. Then, the ground truth annotations for all the tumors of the pediatric MB scans were generated via consensus across two experienced board-certified neuro-radiologists (Expert 1 with nine years of experience, and Expert 2 with eight years of radiology experience) using 3D Slicer [31]. Each MB tumor was segmented into the enhancing tumor region, the edema region, and the non-enhancing tumor region + cystic core. This was followed by registering the scans to age-appropriate atlases to account for the changing anatomical structures during the different developmental stages [32]. Finally, intensity standardization was conducted using the approach described in [33]. Feature extraction then followed, which involved 214 texture features as well as 18 morphological features that were extracted from the different tumor regions as well as the tumor habitat that encompasses all the tumor regions. Namely, the texture features [34] encompass gradient, Haralick, Laws, Gabor, and COLLAGE (gradient entropy) features [35] and are computed for every voxel of every tumor region. We utilize these per-voxel measurements to compute first-order statistics (mean, median, standard deviation, skewness, and kurtosis) per feature for every tumor region. This resulted in 1070 textural features for every tumor region, as well as for the tumor habitat. Additionally, 18 morphological features were extracted from the different tumor regions. Namely, four local features that capture surface-based irregularities (Curvedness, Sharpness, Shape Index, and Total Curvature) were computed from each region. The four local features were computed from the constructed isosurfaces of the tumor regions, followed by computing the first and second fundamental forms of the surfaces [36]. Gaussian and mean curvatures were then computed from the fundamental forms per voxel. Finally, the local features were derived from the Gaussian and mean curvatures. In addition, 14 morphological features capturing the global contour characteristics of the tumors were computed from each tumor region and the tumor habitat. Extracted features are based on an Insight Segmentation and Registration Toolkit (ITK) implementation (www.itk.org). The features are volume, major axis length, minor axis length, eccentricity, elongation, orientation, perimeter, roundness, equivalent spherical radius, equivalent spherical diameter, flatness, elongation shape factor, compactness, and integrated intensity. Similar to the scheme for the textural features, five statistics were calculated for each of the four extracted surface-based features, and then concatenated with the 14 global features, resulting in a 34 × 1 vector per tumor region as well as the tumor habitat. All the extracted features were then concatenated to construct mRRisk for every patient.
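As a sketch of the summary-statistics step described above (not the authors' implementation), the snippet below reduces a hypothetical per-voxel texture response within a tumor mask to the five first-order statistics; in the full pipeline this is repeated for every texture feature and tumor region, and the resulting vectors are concatenated with the shape descriptors to form mRRisk.

```python
# A sketch of reducing a per-voxel texture response inside a tumor region to the
# five first-order statistics (mean, median, standard deviation, skewness, kurtosis).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
feature_map = rng.normal(0, 1, size=(64, 64, 32))   # hypothetical per-voxel feature (e.g., a Gabor response)
tumor_mask = rng.random((64, 64, 32)) > 0.97         # hypothetical binary tumor-region mask

vals = feature_map[tumor_mask]
summary = {
    "mean": float(np.mean(vals)),
    "median": float(np.median(vals)),
    "std": float(np.std(vals)),
    "skewness": float(stats.skew(vals)),
    "kurtosis": float(stats.kurtosis(vals)),
}
print(summary)   # one 5-vector per texture feature, per tumor region
```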
Regression Analysis All the extracted feature attributes were then employed within logistic regression analysis models, with α representing the elastic-net mixing parameter. Specifically, Least Absolute Shrinkage and Selection Operator (LASSO) (L1 regularization), α = 1; ridge regression (L2 regularization), α = 0; and elastic net (combining the penalty terms of both LASSO and ridge regression, 0 < α < 1) were employed within the regression models to conduct survival risk assessment [37]. We utilized the models to conduct feature selection, and then create a continuous survival risk score for each subject. Based on the fitted risk models, a threshold was identified to risk-stratify patients into low- and high-risk groups [38], within SHH and Group 4 molecular subgroups, individually. A log-rank test along with Kaplan-Meier (KM) survival analysis was then performed to see how the survival rate varies between the two identified risk groups. Performance metrics were computed to assess the efficacy of our survival prognostication models, such as hazard ratios (HRs), the risk of experiencing the event of interest at a time point [39], the 95% Confidence Interval (CI), the level of uncertainty about the point estimates [40], and the Concordance Index (C-index), a measure of the probability of concordance between the predicted and the observed survival [41]. All the computations were conducted in-house using RStudio (V.4.3.1). We compared the performance of our prognostic models to risk-stratify MB patients within Group 4 and SHH molecular subgroups with the following strategies: (1) shape features alone, (2) texture features alone, and (3) Chang's stratification. Results Our analysis was conducted using three different data combinations to assess the robustness of our approach and the resilience of the extracted feature sets. Specifically, data from each site was used once for testing while combining the data from the other two sites for training. Employing Shape Features Alone for Risk Stratification When employing shape features on Group 4 subgroup subjects, significant differences were observed between the non-enhancing tumor + cystic core subcompartments of the subjects, resulting in two risk groups (Figure 2a,b). The differences were observed when using Site 1 as a test set (p = 0.0035, C-index = 0.5), using LASSO regression, as well as when using Site 3 as a test set (p = 0.025, C-index = 0.74), using the Elastic Net model (alpha = 0.5). The top features selected using our regression models across the different experiments included the perimeter, elongation, and minor axis length, as well as some surface features, such as the median of curvedness, skewness of sharpness, and kurtosis of shape index.
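Returning to the risk-stratification scheme described in the Regression Analysis subsection, the sketch below illustrates the same stratify-and-test idea on synthetic data: a penalized Cox model (a stand-in for the penalized regression models used in this work, not the authors' exact pipeline) produces a continuous risk score, patients are split at the median, and the two groups are compared with a log-rank test.

```python
# A sketch of risk stratification from a penalized survival model: fit, score,
# threshold into low/high risk, then compare the groups with a log-rank test.
# Feature names are placeholders, not the actual mRRisk attributes.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(6)
n = 70
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"radiomic_{i}" for i in range(5)])
df["time"] = rng.exponential(36, n)        # hypothetical follow-up (months)
df["event"] = rng.integers(0, 2, n)        # 1 = event observed, 0 = censored

# Elastic-net-penalized Cox model (l1_ratio=1.0 would correspond to LASSO, 0.0 to ridge).
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)
cph.fit(df, duration_col="time", event_col="event")
print("C-index:", round(cph.concordance_index_, 2))

risk = cph.predict_partial_hazard(df)      # continuous risk score per subject
high = risk > risk.median()                # threshold into low- and high-risk groups
res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                   df.loc[high, "event"], df.loc[~high, "event"])
print("log-rank p-value:", round(res.p_value, 4))
```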
Employing Texture Features Alone for Risk Stratification When employing texture features on Group 4 subgroup subjects, significant differences were observed between the tumor habitat, the edema, and the non-enhancing tumor + cystic core subcompartments of the subjects, resulting in two risk groups (Figure 2c,d). The differences were observed on (a) the tumor habitat when using Site 3 as a test set (p = 0.0017, C-index = 0.69), using ridge regression; (b) the tumor habitat using Site 2 as a test set (p = 0.04, C-index = 0.5), using the Elastic Net model (alpha = 0.38); (c) the edema subcompartment using Site 2 as a test set (p = 0.05, C-index = 0.58), using ridge regression; and (d) the non-enhancing tumor + cystic core subcompartment using Site 1 for testing (p = 0.04, C-index = 0.82), using LASSO regression. The top features selected using our regression model across the different experiments included the skewness of Laws features and the median of the Collage feature (information measure of correlation). Employing mRRisk Signature for Risk Stratification Interestingly, when combining both feature families (shape and texture) into the mRRisk signature to risk-stratify the Group 4 subgroup, the C-indices for the risk stratification results improved. For instance, when using Site 3 for testing, significant differences between the two risk groups were observed on the tumor habitat (C-index = 0.7 vs. 0.69 when using texture features alone, while the p-values were the same (0.0017) for both experiments (Figure 3)). Similarly, when using Site 2 for testing, the tumor habitat and edema exhibited significant differences across the two risk groups with improved C-indices when using mRRisk (0.52, 0.6 vs. 0.5, 0.58 when using texture features alone), but the p-values were similar for both experiments (p = 0.04 for habitat and 0.05 for edema). Figure 4 shows heatmaps that illustrate the qualitative differences between the two risk groups identified within the Group 4 subgroup using our radiomic features.
Employing Shape Features Alone for Risk Stratification When employing shape features on SHH subgroup subjects, significant differences were observed between the non-enhancing tumor + cystic core subcompartments as well as the tumor habitat of the subjects, resulting in two risk groups (Figure 5a,b). The differences were observed on the tumor habitat when using Site 2 as a test set (p = 0.04, CI = 0.7), using ridge regression, as well as on the non-enhancing tumor + cystic core subcompartments when using Site 1 as a test set (p = 0.01, CI = 0.8), using ridge regression. The top features selected using our regression model included the perimeter, roundness, minor and major axes lengths, and compactness, as well as some surface features, such as the median of sharpness, kurtosis of total curvature, variance of curvedness, and median of shape index.
Employing Shape Features Alone for Risk Stratification When employing shape features on SHH subgroup subjects, significant differences were observed between the non-enhancing tumor + cystic core subcompartments as well as the tumor habitat of the subjects, resulting in two risk groups (Figure 5a,b).The differences were observed on the tumor habitat when using Site 2 as a test set (p = 0.04, CI = 0.7), using ridge regression, as well as on the non-enhancing tumor + cystic core subcompartments when using Site 1 as a test set (p = 0.01, CI = 0.8), using ridge regression.The top features selected using our regression model included the perimeter, roundness, minor and major axes lengths, and compactness, as well as some surface features, such as the median of sharpness, kurtosis of total curvature, variance of curvedness, and median of shape index. Employing Texture Features Alone for Risk Stratification When employing texture features on SHH subgroup subjects, significant differences were observed between the enhancing tumor subcompartment of the subjects, resulting in two risk groups (Figure 5c).The differences were observed when using Site 1 as a test set (p = 0.03, CI = 0.74), using the Elastic Net model (alpha = 0.54).The top features selected using our regression model included the Laws and Collage features with its different types of statistics.Figure 6 shows heatmaps that illustrate the qualitative differences between the two risk groups identified within the SHH subgroup using our radiomic features. Employing Texture Features Alone for Risk Stratification When employing texture features on SHH subgroup subjects, significant differences were observed between the enhancing tumor subcompartment of the subjects, resulting in two risk groups (Figure 5c).The differences were observed when using Site 1 as a test set (p = 0.03, CI = 0.74), using the Elastic Net model (alpha = 0.54).The top features selected using our regression model included the Laws and Collage features with its different types of statistics.Figure 6 shows heatmaps that illustrate the qualitative differences between the two risk groups identified within the SHH subgroup using our radiomic features. Interestingly, similar to the Group 4 subgroup experiments, the results improved when combining the texture and shape features into mRRisk.For instance, when using Site 1 as a test set, significant differences were observed on the enhancing tumor between the two risk groups, with C-index = 0.8 compared to 0.74 using texture features alone (Figure 5d). Since the SHH subgroup is known to have wide disparities in outcomes that are associated with age [42], with younger patients having better survival outcomes, we attempted to see if there were any age-wise significant differences across our two identified risk groups using mRRisk.Interestingly, the subjects that exhibited significant differences in risk levels when employing textural features of the enhancing tumor also exhibited significant differences in age (p = 0.09) [43,44]. 
Risk-Stratifying MB Patients in SHH and Group 4 Subgroup Using Chang's Stratification When employing the current clinical classification criteria (Chang's classification) for risk stratification within the two subgroups (SHH and Group 4), differences across the survival risk categories were not observed to be significant between the subjects from the same subgroup.For instance, in the Group 4 subgroup, performance metrics of p = 0.1 and CI = 0.5 were obtained when using Site 3 for testing, p = 0.47 and CI = 0.5 when using Site 2 for testing, and p = 0.54, CI = 0.5 when using Site 1 for testing.Similarly, significant differences were not observed when employing Chang's classification to risk-stratify patients Interestingly, similar to the Group 4 subgroup experiments, the results improved when combining the texture and shape features into mRRisk.For instance, when using Site 1 as a test set, significant differences were observed on the enhancing tumor between the two risk groups, with C-index = 0.8 compared to 0.74 using texture features alone (Figure 5d). Since the SHH subgroup is known to have wide disparities in outcomes that are associated with age [42], with younger patients having better survival outcomes, we attempted to see if there were any age-wise significant differences across our two identified risk groups using mRRisk.Interestingly, the subjects that exhibited significant differences in risk levels when employing textural features of the enhancing tumor also exhibited significant differences in age (p = 0.09) [43,44]. Risk-Stratifying MB Patients in SHH and Group 4 Subgroup Using Chang's Stratification When employing the current clinical classification criteria (Chang's classification) for risk stratification within the two subgroups (SHH and Group 4), differences across the survival risk categories were not observed to be significant between the subjects from the same subgroup.For instance, in the Group 4 subgroup, performance metrics of p = 0.1 and CI = 0.5 were obtained when using Site 3 for testing, p = 0.47 and CI = 0.5 when using Site 2 for testing, and p = 0.54, CI = 0.5 when using Site 1 for testing.Similarly, significant differences were not observed when employing Chang's classification to riskstratify patients within the SHH subgroup (p = 0.7, CI = 0.6 when using Site 1 for testing). We show the feature importance graphs for all the conducted experiments in the supplementary material.In Figure S1, we show the feature importance graphs for mRRisk descriptor features for both SHH and Group 4 subgroups, whereas the y-axis represents the feature, and the x-axis represents the F-score for each feature.Similarly, Figure S2 illustrates the feature importance graphs for shape features for both SHH and Group 4 subgroups.Finally, Figure S3 illustrates the feature importance graphs for texture features for both SHH and Group 4 subgroups. 
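The risk-stratification experiments above follow a common pattern: fit a penalized Cox-type regression (LASSO, ridge, or Elastic Net) on the radiomic features of the training sites, split the held-out site into two groups by the predicted risk score, and report the log-rank p-value and the concordance index (C-index). The sketch below illustrates that pattern in Python with the lifelines library; the library choice, column names, penalty strength, and the median split are assumptions for illustration and not the authors' exact implementation.

```python
# Minimal sketch (not the authors' code): penalized Cox model on radiomic
# features, evaluated by C-index and a log-rank test between two risk groups.
# Assumes DataFrames with feature columns plus 'time' (days) and 'event'
# (1 = event observed, 0 = censored); lifelines is one possible library.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

def risk_stratify(train: pd.DataFrame, test: pd.DataFrame, l1_ratio: float = 0.5):
    """Fit an elastic-net-penalized Cox model on the training site(s) and
    split the held-out site into two risk groups at the median risk score."""
    # l1_ratio = 1 behaves like LASSO, l1_ratio = 0 like ridge regression.
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=l1_ratio)
    cph.fit(train, duration_col="time", event_col="event")

    # One predicted risk score per test subject; higher = higher risk.
    risk = cph.predict_partial_hazard(test)
    high = risk > risk.median()

    # C-index: negate the risk so that higher scores mean longer survival.
    cindex = concordance_index(test["time"], -risk, test["event"])

    # Log-rank test between the two Kaplan-Meier curves.
    res = logrank_test(test.loc[high, "time"], test.loc[~high, "time"],
                       event_observed_A=test.loc[high, "event"],
                       event_observed_B=test.loc[~high, "event"])
    return cindex, res.p_value, high

# Example: train on Sites 1 and 2, hold out Site 3 (hypothetical frames).
# cindex, p, groups = risk_stratify(pd.concat([site1, site2]), site3)
```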
Discussion
In this study, we presented a radiomic prognostic signature, "medulloblastoma radiomics risk" (mRRisk), that risk-stratifies medulloblastoma (MB) patients within the individual molecular subgroups, namely, the SHH and Group 4 subgroups. mRRisk combines hand-crafted textural and morphological features that capture the tumor heterogeneity and disorderly nature within its confines, and, hence, may offer additional insights to resolve the intra-subgroup heterogeneity of MB tumors. MB is widely recognized as having four molecular subgroups with correlated clinical outcomes and prognosis; however, it is also reported that there is a wide disparity of outcomes within the individual subgroups [7,9,45]. This underscores the need for quantifying the tumor heterogeneity, which can help identify patients, within the same subgroup, with low risk that can benefit from de-escalated therapy from those with high risk and in need of intensified treatment strategies. While there are many radiomic approaches in the literature that aimed to carry out molecular subgroup classification [46], our study uniquely seeks to utilize those radiomic tools to further substratify patients within the individual subgroups. Further optimization of mRRisk with rigorous validation on large multi-institutional cohorts would allow for an enriched risk stratification. Specifically, our prognostic signature can help reduce the long-term toxicities within the children with MB that are recognized as average-risk by providing additional prognostic insights to tailor their treatment intensity. Additionally, the signature can also enable the identification of patients that are true candidates for therapy intensification. This can lead to incorporating the signature in clinical trials that aim to carry out therapy de-escalation as well as therapy intensification, which could improve patients' outcomes and treatment planning.
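To make the texture side of such hand-crafted signatures concrete, the sketch below computes one Laws texture-energy map and a first-order statistic of the kind reported among the top-selected features (e.g., skewness of Laws responses) pooled over a segmented subcompartment. The specific kernel (E5L5), window size, and use of NumPy/SciPy are illustrative assumptions rather than the study's actual feature-extraction pipeline.

```python
# Illustrative sketch only: a 2-D Laws texture-energy map and a first-order
# statistic (skewness) pooled over a tumor mask. The kernel choice, window
# size, and libraries are assumptions, not the study's exact pipeline.
import numpy as np
from scipy.ndimage import convolve
from scipy.stats import skew

# 1-D Laws vectors: Level and Edge; their outer product gives a 2-D kernel.
L5 = np.array([1, 4, 6, 4, 1], dtype=float)
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
E5L5 = np.outer(E5, L5)

def laws_energy(image: np.ndarray, window: int = 15) -> np.ndarray:
    """Filter the image with the E5L5 kernel, then average the absolute
    response in a local window to obtain a texture-energy map."""
    response = convolve(image.astype(float), E5L5, mode="reflect")
    box = np.ones((window, window)) / float(window * window)
    return convolve(np.abs(response), box, mode="reflect")

def feature_from_roi(image: np.ndarray, mask: np.ndarray) -> float:
    """Aggregate the energy map inside a segmented subcompartment,
    here as skewness of the voxel-wise responses."""
    energy = laws_energy(image)
    return float(skew(energy[mask > 0].ravel()))
```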
Our preliminary results showed that morphological radiomic attributes that capture the surface-based irregularities of the tumor, as well as its global contour characteristics, yielded two distinct survival risk groups within both the SHH and Group 4 molecular subgroups. Specifically, significant differences were observed across the non-enhancing tumor + cystic core compartments in Group 4 patients (p = 0.0035 on the test set) (Figure 2a). Interestingly, the same top features emerged when using different sites for testing (Sites 1 and 3), individually, yielding significant results and indicating the robustness of our radiomic features. Similarly, the morphological attributes identified two risk levels within SHH group patients when employing those features on the tumor habitat (p = 0.04 on the test set) (Figure 5a), and on the non-enhancing tumor + cystic core compartments (p = 0.019 on the test set) (Figure 5b). Employing textural features also allowed for the identification of two statistically significant risk groups for both the SHH and Group 4 subgroups. For instance, entropy-based features capturing the frequency content in localized regions (e.g., Gabor) and features capturing the degrees of match of the voxel neighborhoods (e.g., Laws) helped sub-stratify patients from both subgroups when employed on the tumor subcompartments (e.g., p = 0.0017 on the tumor habitat of Group 4 (Figure 2c), and p = 0.03 on the enhancing tumor of the SHH subgroup (Figure 5c)). Interestingly, combining the texture and morphological feature families into our prognostic signature, mRRisk, improved the performance metrics, specifically the concordance index, obtained for risk stratification across the different test sets. These results suggest that combining the curvature local changes on the tumor surface with global contour attributes, in addition to the textural and gradient entropy changes, may provide surrogate quantitative attributes to quantify tumor heterogeneity, offering additional insights into risk assessment. Our approach may provide complementary biomarkers that aid towards a more reliable risk assessment in pediatric MB.

There are many works in the literature that have attempted to employ radiomics in either predicting outcomes [16-20,28,29] or classifying molecular subgroups [1,21-27] for MB patients. However, there is a dearth of radiomic approaches that have attempted to delve deeper into the molecular characteristics of MB and enable risk stratification within those subgroups, which would allow for the interpretation of the wide disparities among those patients. Very few studies attempted to sub-stratify patients within the individual molecular subgroups, and these utilized molecular profiling and other clinical approaches for this purpose. For instance, in a study by Schwalbe et al.
[15], the authors attempted to quantify the substantial heterogeneity within each molecular subgroup in MB by conducting molecular profiling, including DNA methylation analysis. Interestingly, the authors were able to identify seven subgroups within the four subgroups: two for SHH (stratified based on age), two for Group 3 (high- and low-risk), and two for Group 4 (high- and low-risk), while the WNT subgroup remained unchanged. Interestingly, when we attempted to assess if there were any significant differences in age between our two risk-stratified groups within the SHH subgroup [7,15,45], significant differences in age (p = 0.09) were observed between the two risk-stratified groups for one of our conducted experiments. We could not identify any other age-wise significant differences within our performed experiments on the SHH subgroup, which is likely due to the small sample size of our dataset of patients with SHH (n = 22).

Our study did have limitations. First, while multi-institutional, our sample size is limited for this study, with 48 subjects in Group 4 and 22 in SHH. The limited sample size reduced the statistical power of the study and affected our results, such that significant differences could not be observed on all three test sets in the different data combinations for the two subgroups. Pediatric MB is a rare disease, so the curation of large multi-institutional studies is often challenging. Our goal is to continue curating multi-institutional MB studies with known molecular subgroup information so we can validate our approaches on larger external cohorts. We are also working on curating data from clinical trials (e.g., ACNS0331), where subjects were all uniformly treated and had known molecular subgroup characteristics. This will allow us to conduct a survival analysis on larger cohorts to possibly sub-stratify patients with similar molecular characteristics. Secondly, we did not have all MRI modalities available for this analysis (i.e., T2w and FLAIR) due to either the unavailability of the scan or the poor quality of the scans leading to their exclusion from our analysis. Our current efforts with data curation consider the scans with all three available sequences.

Conclusions
This study presents a radiomic prognostic signature to risk-stratify medulloblastoma patients within the individual molecular subgroups. With the reported disparities in outcomes within the molecular subgroups and the substantial heterogeneity in medulloblastoma, our approach attempts to address the need for additional tools, besides clinical approaches, to quantify those differences, towards a more reliable risk assessment. Our results show promise in radiomic tools to identify different risk levels within patients that share the same molecular characteristics.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers16122248/s1, Figure S1: Feature importance graphs for mRRisk descriptor features for both SHH and Group 4 subgroups. The y-axis represents the feature name whereas the x-axis represents the F-score for each feature; Figure S2: Feature importance graphs for shape features for both SHH and Group 4 subgroups. The y-axis represents the feature name whereas the x-axis represents the F-score for each feature; Figure S3: Feature importance graphs for texture features for both SHH and Group 4 subgroups. The y-axis represents the feature name whereas the x-axis represents the F-score for each feature.
Figure 1. Pipeline of the proposed framework for risk stratification of SHH and Group 4 subgroups. T, S stand for texture features and shape features, respectively.

2.2. Data Curation
Our analysis was conducted on a total of 70 pediatric MB subjects (48 in Group 4 and 22 in the SHH molecular subgroup). Studies were retrospectively collected from three independent sites: Site 1: Cincinnati Children's Hospital Medical Center (CCHMC) (n = 22, Group 4 (n = 14), SHH (n = 8)); Site 2: Children's Hospital Los Angeles (CHLA) (n = 31, Group 4 (n = 18), SHH (n = 13)); and Site 3: Children's Hospital of Philadelphia (n = 17, Group 4 (n = 16), SHH (n = 1)). The scans were performed from 2000 up to the date of the IRB-approved data (5/16/2019). Scans were acquired with 1.5 T and 3 T Philips (Ingenia, Achieva) and Siemens MRI scanners. The inclusion criteria used for our datasets were: (a) availability of Gd-T1w axial-view MRI scans; (b) patients with only MB tumors; (c) known molecular status, Chang's classification status, and overall survival; and (d) acceptable diagnostic quality of the MRI scans, as identified by the collaborating radiologists. Details of patient demographics and MRI acquisition information are listed in Table 1.

Figure 2. Kaplan-Meier curves for the statistically significant results when survival analysis is conducted using shape (a,b) and texture (c,d) features, individually, for risk stratification within the Group 4 molecular subgroup. (a,b) show the KM curves when employing shape features on the non-enhancing tumor + cystic core subcompartments on datasets 3 and 1 as test sets, respectively. (c,d) show the KM curves when employing texture features on the tumor habitat of dataset 3 and the non-enhancing tumor + cystic core of dataset 1, as test sets, respectively. The x-axis represents the survival time in days, whereas the y-axis represents the survival probability.

Figure 3. Kaplan-Meier curves for the statistically significant results when survival analysis was conducted using the mRRisk signature, combined, for risk stratification within the Group 4 molecular subgroup. (a) shows the KM curve when employing the features on the tumor habitat on dataset 3 as the test set. (b,c) show the KM curves when employing the features on the tumor habitat and the edema of dataset 2 as test sets, respectively.

Figure 4. Heatmaps illustrating texture and shape features for high- (a) and low-risk (b) cases from the Group 4 subgroup. The heatmaps shown are for the Collage (entropy) feature and the curvedness feature.

Figure 5. Kaplan-Meier curves for the statistically significant results when survival analysis is conducted using shape (a,b) and texture (c) features, individually, and (d) mRRisk, within the SHH molecular subgroup. (a,b) show the KM curves when employing shape features on the tumor habitat and the non-enhancing tumor + cystic core subcompartments on datasets 2 and 1 as test sets, respectively. (c) shows the KM curves when employing texture features on the enhancing tumor of dataset 1 as a test set, and (d) shows the curves when employing mRRisk on the enhancing tumor of dataset 1 as a test set. The x-axis represents the survival time in days, whereas the y-axis represents the survival probability.

Figure 6. Heatmaps illustrating texture and shape features for high- (a) and low-risk (b) cases from the SHH subgroup. The heatmaps shown are for the Collage (entropy) feature and the sharpness feature.

Author Contributions: Conceptualization, M.I.; methodology, M.I. and H.U.; software, M.I. and H.U.; validation, M.I.; formal analysis, M.I. and H.U.; investigation, M.I., F.H., R.A., R.S., P.T. and P.d.B.; resources, M.I. and P.T.; data curation, H.U.; writing-original draft preparation, M.I.; writing-review and editing, P.T. and M.I.; visualization, H.U.; supervision, P.T.; project administration, P.T. and M.I.; funding acquisition, M.I. and P.T. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by NIH/NCI/ITCR: 1U01CA248226-01, NIH/NCI R01CA277728-01A1, NIH/NCI: 1R01CA264017-01A1, NIH/NCI: 3U01CA248226-03S1, DOD/PRCRP Career Development Award: W81XWH-18-1-0404, The Dana Foundation David Mahoney Neuroimaging Program, The V Foundation Translational Research Award, The Johnson & Johnson WiSTEM2D Award, Musella Foundation Grant, R&D Pilot Award, Departments of Radiology and Medical Physics, University of Wisconsin-Madison, and WARF Accelerator Oncology Diagnostics Award.

Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board of the University of Wisconsin-Madison (ID# 2022-1683, 1 December 2023). The study was determined to meet the criteria for exempt human subjects.

Informed Consent Statement: Patient consent was waived due to the study's retrospective nature.

Table 1. Patient demographics and MRI acquisition information across our multi-institutional data.
2024-06-20T15:07:26.214Z
2024-06-01T00:00:00.000
{ "year": 2024, "sha1": "6ec2f148e7fbb458f211981a738dfd257be70165", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/16/12/2248/pdf?version=1718692442", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3f423d780af21e34314d9bdfbda927e2473c4741", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
4337163
pes2o/s2orc
v3-fos-license
Androidal Fat Dominates in Predicting Cardiometabolic Risk in Postmenopausal Women We hypothesized that soy isoflavones would attenuate the anticipated increase in androidal fat mass in postmenopausal women during the 36-month treatment, and thereby favorably modify the circulating cardiometabolic risk factors: triacylglycerol, LDL-C, HDL-C, glucose, insulin, uric acid, C-reactive protein, fibrinogen, and homocysteine. We collected data on 224 healthy postmenopausal women at risk for osteoporosis (45.8–65 y, median BMI 24.5) who consumed placebo or soy isoflavones (80 or 120 mg/d) for 36 months and used longitudinal analysis to examine the contribution of isoflavone treatment, androidal fat mass, other biologic factors, and dietary quality to cardiometabolic outcomes. Except for homocysteine, each cardiometabolic outcome model was significant (overall P-values from ≤.0001 to .0028). Androidal fat mass was typically the strongest covariate in each model. Isoflavone treatment did not influence any of the outcomes. Thus, androidal fat mass, but not isoflavone treatment, is likely to alter the cardiometabolic profile in healthy postmenopausal women.

Introduction
Menopause is associated with an increase in intra-abdominal fat [1], which is considered a major risk factor for atherosclerotic cardiovascular disease (CVD) [2]. However, whether an increased risk of CVD in postmenopausal women is due to altered body composition, changes in reproductive hormones, or some other physiological process associated with menopause has not been clearly established. It is also uncertain to what extent androidal fat mass may influence CVD risk factors in healthy nonobese women. The risk of chronic disease is perceived to be considerably higher in obese compared to normal weight adults. However, Gautier et al. [3] recently reported that the effect of waist circumference, reflecting androidal fat mass, on the risk for diabetes was more profound among men and women with a normal body mass index (BMI) of less than 25 kg/m2 than among those with above normal BMI. Thus, it appears that efforts to reduce abdominal fat accumulation would be beneficial regardless of BMI category. This study was ancillary to the Soy Isoflavones for Reducing Bone Loss (SIRBL) study, a randomized, double-blind, placebo-controlled multicenter (Iowa State University (ISU) and University of California at Davis (UCD)) clinical trial funded by the National Institutes of Health (NIH) [4]. We examined contributors (androidal fat mass, duration of menopause, isoflavone treatment, family history of CVD, and dietary quality) to cardiometabolic risk in primarily normal-weight and overweight healthy postmenopausal women. Cardiometabolic outcomes included circulating low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), triacylglycerol, glucose, homeostatic model assessment (HOMA) of insulin resistance, uric acid, C-reactive protein (CRP), fibrinogen, and homocysteine (Hcy). We hypothesized that consumption of soy isoflavones in tablet form would attenuate the anticipated increase in central fat (androidal) mass during the 36 months of treatment, which in turn may favorably modify circulating concentrations of cardiometabolic risk factors. We also hypothesized that androidal fat mass, other biologic factors, and dietary quality would influence these cardiometabolic risk factors, but that androidal fat mass would predominate.

Overall Study Design.
The parent study examined the effect of two doses (80 versus 120 mg/day) of soy protein-derived isoflavone versus placebo tablets for 36 months on bone loss in healthy postmenopausal women (45 to 65 years of age) who were at risk for osteoporosis. The parent study included a power analysis on the primary outcome (lumbar spine bone mineral density) in its methodology and has been previously reported [4]. This ancillary project focused on the relationship between body composition and cardiometabolic risk factors. Many of these cardiometabolic risk factors were tested throughout the study (LDL-C, HDL-C, TAG, glucose, and uric acid), whereas some were only tested through the 12 month time point (HOMA, CRP, fibrinogen, and Hcy). The respective Institutional Review Boards (IRB) at ISU (ID no. 02-199) and at UCD (ID no. 200210884-2) approved our study protocol, consent form, and all participant-related materials. Approvals for the dual-energy X-ray absorptiometry (DXA) procedures were obtained from each institution's IRB and State Department of Public Health in Iowa and California. At prebaseline, each participant received a detailed explanation of the study verbally and in writing before signing an informed consent form.

Participant Recruitment, Screening, and Selection.
We recruited (2003 to 2005) women throughout the state of Iowa and in the greater Sacramento and Bay Area regions in northern California primarily through direct mailing lists, stories in local newspapers, and local/regional radio advertisements. Responders (N = 5,255) were initially screened via telephone to identify healthy women (without diseases or conditions, not taking hormones or medications) ≤65 years who had undergone natural menopause (cessation of menses 1 to 10 years), were not experiencing excessive vasomotor symptoms, were nonsmokers (not currently smoking and who had not smoked in the past 6 months), and had a BMI from 18.5 through 29.9 (except for 9 women from UCD who did not meet this inclusion criterion: 8 women had BMI values that ranged from 30 to 32.7 and one woman had a BMI of 17.8, but were enrolled because they were deemed healthy). The parent SIRBL project established the inclusion/exclusion criteria. We excluded vegans and high alcohol consumers (>7 servings/week), as well as those who were diagnosed with chronic disease, had a first-degree relative with breast cancer, or who chronically used medication (current: cholesterol-lowering and/or antihypertensive; past 3 months: antibiotics; past 6 months: estrogen/progestogen creams, calcitonin; past 12 months: oral hormones/estrogen or selective estrogen receptor modulators; ever: bisphosphonates). Women who met the initial screening criteria (N = 677) were invited to the clinic for further eligibility screening, including BMD assessment using DXA. The SIRBL project focused on disease prevention rather than treatment; thus, women with BMD lumbar spine (L1-L4) and/or proximal femur T-scores that were low (>1.5 SD below the young adult mean) or high (>1.0 SD above the mean) or with evidence of previous or existing spinal fractures were excluded. Once each woman qualified based on BMD, fasting blood was drawn for a clinical chemistry profile. We excluded women with evidence of diabetes mellitus (fasted glucose ≥6.93 mmol/L (126 mg/dL)), abnormal renal, liver (elevated enzymes), and/or thyroid function, or elevated lipids (LDL-cholesterol >4.10 mmol/L (160 mg/dL); triacylglycerol >2.25 mmol/L (200 mg/dL)).
Based upon our entry criteria, we randomized 255 women to treatment in the parent trial. We excluded 13 women at UCD from this analysis because they did not meet the entry criteria (11 had thickened endometrium, 1 had breast cancer, 1 could not provide a baseline blood sample). We excluded an additional 19 women who did not have body composition data at either 12, 24, or 36 months because they dropped out of the study, resulting in a sample size of 224 women based upon androidal fat as our primary covariate of interest in this ancillary project. Randomization to Treatment and Tablet Formulation. To meet the objectives of the parent project, participants at each location (ISU, UCD) were stratified according to baseline proximal femur BMD (high, medium, and low) [4] based upon NHANES III database population values [5] and randomly assigned to one of three treatment groups: placebo control, 80 mg isoflavones, or 120 mg isoflavones. Tablets were provided by The Archer Daniels Midland Co. (Decatur, IL); tablet composition has been described previously [4]. An independent researcher (Patricia Murphy) at ISU confirmed that the actual isoflavone doses (mean ± SD, mg/d) were similar to those formulated and tested by Archer Daniels Midland, respectively: control = 0 compared with 0.3 ± 0.4; 80 mg = 89.5 ± 5.0 compared with 84.3 ± 4.5; 120 mg = 124.0 ± 7.7 compared with 122.5 ± 3.4. Participants in each group were instructed to take three compressed tablets/d. To preserve the double-blind nature of the study, bottles did not indicate treatment assignment. Body Size and Composition Measures and Blood Sample Measures. For this study, we used anthropometric and body composition measurements that included weight, height, whole body lean and fat mass, and androidal fat mass, as well as fasting concentrations of lipids/lipoproteins, glucose, and uric acid. These outcomes were assessed at baseline, 12, 24, and 36 months. In addition, circulating insulin, HOMA (calculated as fasting glucose (mg/dL) × fasting insulin (µU/mL)/405), CRP, fibrinogen, Hcy, and red blood cell (RBC) folate concentrations were assessed at baseline and 12 months. Trained researchers obtained anthropometric measurements according to standard protocols. Standing height was taken twice (average value recorded) with a wall-mounted stadiometer (Model S100; Ayrton Corp., Prior Lake, MN) and weight was measured at ISU using a balance beam scale (ABCO Health-o-Meter; Bridgeview, IL) and at UCD using an electronic scale (Circuits and Systems Inc; E. Rockaway, NY). Women wore hospital scrubs or shorts and a t-shirt, and removed their shoes, belts, watches, and jewelry for the duration of assessment. To ensure standardized data collection, body composition measurements were obtained by certified cross-trained DXA operators using matching DXA instruments (Delphi W Hologic Inc; Bedford, MA) at each site that were calibrated daily. To further ensure quality control, one operator assessed overall composition from the whole body DXA scans for both sites. Regional adiposity analysis was performed by one analyzer (LR) who sectioned each whole body DXA scan into waist, hip, and thigh regions based on bone landmarks [6] using special software (Discovery Version 12.3:7). The waist region included the first lumbar through the fourth lumbar vertebrae. The hip region began below the fourth lumbar vertebrae and extended to the tip of the greater trochanter of the femur. Androidal fat mass (kg) for each participant was the sum of waist and hip fat mass. 
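As a small worked example of the HOMA formula quoted above, the sketch below expresses the calculation directly; the function name and the example insulin value are illustrative, not study data.

```python
def homa_ir(fasting_glucose_mg_dl: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostatic model assessment of insulin resistance, as defined in the
    Methods: fasting glucose (mg/dL) x fasting insulin (uU/mL) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# Example with the reported median baseline glucose (85 mg/dL) and an
# illustrative insulin value of 8 uU/mL (not a study result):
# homa_ir(85, 8)  ->  approximately 1.68
```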
Phlebotomists collected fasted (9 h) blood samples between 7:00 and 8:00 am. We separated serum (allowed to clot for 30 min prior to centrifugation) and plasma from whole blood and centrifuged samples for 15 minutes (4°C) at 1000 × g, storing aliquots at −80°C until analyses. Certified clinical laboratories (LabCorp; Kansas City, KS for ISU and UCD Medical Center; Sacramento, CA for UCD) performed a chemistry panel (including serum glucose, lipid profile, and uric acid) on each participant at each time point. We measured the remaining analytes (serum insulin, serum CRP, plasma fibrinogen, plasma Hcy, and RBC folate) for each participant in batch from ISU and UCD samples at baseline and 12 months in duplicate at ISU. We used sufficient in-house sera/plasma as quality-control samples (frozen at −80°C) to run with each kit to calculate the interassay coefficient of variation (CV); we used duplicate samples to calculate the intra-assay CV. The low-to-normal and normal-to-high controls for each kit were well within the acceptable ranges. Serum insulin (µU/mL) concentration was determined with a radioimmunoassay kit (Linco Research, St Charles, MO, USA) using a Cobra II series autogamma counting system (Packard Instrument Company; Meriden, CT, USA). The intra- and interassay CV for insulin were 3.0% and 4.0%, respectively. Serum CRP (mg/L) concentration was determined with a high-sensitivity sandwich enzyme-linked immunosorbent assay kit (ALPCO Diagnostics; Salem, NH) and plasma (heparinized) fibrinogen (mg/mL) concentration was determined with a sandwich enzyme-linked immunosorbent assay kit (AssayPro; St. Charles, MO) using a microtiter plate reader (ELx808; Bio-Tek Instruments, Inc., Winooski, VT). The intra-assay CVs for CRP and fibrinogen were 3.7% and 2.7%, respectively; the interassay CVs for CRP and fibrinogen were 6.0% and 2.3%, respectively. Total Hcy (µmol/L) concentration was determined using a high-performance liquid chromatography (HPLC) method adapted from Araki and Sako [7] and Ubbink et al. [8]. The total Hcy in plasma consists of free Hcy (i.e., reduced plus oxidized Hcy in the nonprotein fraction of plasma) and protein-bound Hcy [9]. N-Acetylcysteine (1 mM) was added as an internal standard to the plasma samples prior to derivatization. The fluorescence intensities were measured with excitation at 385 nm and emission at 515 nm, using a JASCO FP-1520 fluorescence detector. Further assay details have been previously published [9]. The intra- and interassay CV for plasma Hcy were 3.8% and 6.3%, respectively. Intracellular (RBC) folate (ng/mL) was measured using a radioactive immunoassay kit (MP Biomedicals; Irvine, CA). Because four hematocrit values were missing, RBC folate could not be calculated for these four samples. In addition, three baseline samples were incorrectly processed; thus, RBC folate values have been presented for 217 women. The intra- and interassay CV for RBC folate were 3.2% and 11.8%, respectively.

Interviewer-Administered Questionnaires.
During the enrollment phase, trained interviewers administered three questionnaires to participants: a health and medical history [10][11][12], a reproductive history [13], and a nutrition history [10,11]. We also gathered data on prescription and over-the-counter medications at each time point, as well as previous and/or current use of herbal therapies or dietary supplements (which they were asked to discontinue prior to baseline testing).
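The intra- and inter-assay coefficients of variation reported above follow the standard CV = SD/mean calculation; the sketch below shows one common way to compute them from quality-control runs and duplicate samples. The duplicate-based root-mean-square convention is an assumption, since the paper does not state which formula was used.

```python
import numpy as np

def cv_percent(values) -> float:
    """Coefficient of variation (%) = SD / mean * 100, e.g., for a
    quality-control sample measured across assay runs (inter-assay CV)."""
    values = np.asarray(values, dtype=float)
    return float(np.std(values, ddof=1) / np.mean(values) * 100.0)

def intra_assay_cv(duplicates) -> float:
    """Intra-assay CV (%) from paired duplicates (n pairs x 2), using the
    root-mean-square of the per-pair CVs (one common convention, assumed)."""
    dup = np.asarray(duplicates, dtype=float)
    pair_cv = np.std(dup, axis=1, ddof=1) / np.mean(dup, axis=1) * 100.0
    return float(np.sqrt(np.mean(pair_cv ** 2)))
```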
We assessed dietary intake at each time point using a semiquantitative food frequency questionnaire from Block Dietary Data Systems (Berkeley, CA). A Healthy Eating Index (HEI) score, with higher scores representing greater adherence to federal dietary guidelines, was calculated for each participant based on this questionnaire and included in the statistical analyses as a covariate.

Statistical Analysis.
We performed statistical analyses using version 2.10.1 of the R software, including version 3.1.96 of the nlme package, and considered results statistically significant (two-sided) at P ≤ .05. Our 3-year longitudinal analyses of LDL-C, HDL-C, triacylglycerol, and glucose included women with complete data at all time points, baseline through 36 months. Our 1-year longitudinal analyses of fibrinogen, insulin, HOMA, Hcy, and CRP included women with complete data at baseline and 12 months. We reported descriptive statistics for 224 women using median and interquartile range. We constructed longitudinal models to identify significant contributors to each cardiometabolic risk factor (LDL-C, HDL-C, triacylglycerol, uric acid, glucose, insulin, HOMA, CRP, fibrinogen, and Hcy). Each final longitudinal model included these obligatory variables: treatment (control versus combined treatment with 80 mg or 120 mg of isoflavones or, in other words, no treatment versus treatment), time point (baseline versus 12, 24, or 36 months), treatment by time point interaction, and site (ISU versus UCD), as well as potential covariates that included androidal fat mass (kg) adjusted for height, time since last menstrual period (TLMP) (yr) (calculated for each woman as the baseline test date minus the date of her last menstrual period), family history of CVD coded as a categorical variable (none versus positive or none versus unknown), and HEI score. Additionally, the insulin model included glucose as a covariate, and the Hcy model included RBC folate as a covariate. Independent variables in modeling the outcomes of interest included those variables that were biologically plausible. Preliminary models also included dietary fat intake (determined using the Block Food Frequency Questionnaire) and physical activity (determined using the Paffenbarger physical activity recall [14]), but those variables did not emerge as remotely significant contributors to any of the cardiometabolic risk factors and thus were not included in the final models. Separate height adjustments of fat and fat-free mass have been suggested for children by Wells and Cole [15]; accordingly, we found a significant impact of height on androidal fat mass among the participants in our study based on a log-log regression analysis (parameter estimate 0.8286, P = .0058). This suggested the use of height-adjusted androidal fat mass (androidal fat mass (g)/height (cm)) (exponent of 0.8286 for height changed to 1 for ease of interpretation). In other words, we adjusted androidal fat mass for height because taller women typically had greater waist circumferences due to their larger frame size and thus appeared to be at higher risk for CVD compared with shorter women, whereas this is not necessarily the case. We performed log transformation of variables with skewed distributions: triacylglycerol, HOMA, CRP, Hcy, and RBC folate. Restricted maximum likelihood (REML) estimation was used to obtain estimates of variances and correlations between repeated measures.
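The longitudinal models described here were fitted in R with the nlme package; as a rough stand-in, the sketch below expresses the same idea in Python with statsmodels' mixed-effects model, using a random intercept per woman, the obligatory covariates, and a log-transformed outcome. Column names and the random-effects structure are assumptions, not the study's exact specification.

```python
# Minimal stand-in sketch: the study used R/nlme with REML; here the same idea
# is expressed with statsmodels' MixedLM. Column names (subject_id, tag,
# androidal_fat_ht, tlmp, fam_hx, hei, ...) are hypothetical.
import numpy as np
import statsmodels.formula.api as smf

def fit_outcome_model(df):
    # Log-transform a skewed outcome such as triacylglycerol, as in the analysis.
    df = df.assign(log_tag=np.log(df["tag"]))
    model = smf.mixedlm(
        "log_tag ~ treatment * timepoint + site "
        "+ androidal_fat_ht + tlmp + fam_hx + hei",
        data=df,
        groups=df["subject_id"],   # repeated measures within each woman
    )
    return model.fit(reml=True)    # REML estimation of variance components

# result = fit_outcome_model(long_format_data)
# print(result.summary())
```

For the overall model fit described next, the full and obligatory-covariate models would be refit by maximum likelihood before being compared with a likelihood ratio test.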
Model selection was guided by a stepwise backwards selection based on model diagnostics, such as Akaike's information criterion and the Bayesian information criterion. An overall model fit was obtained based on a likelihood ratio test of the (maximum likelihood fitted) model at hand against the more parsimonious model of only obligatory covariates. Significance indicates rejection of the null hypothesis that the additional covariates contribute to the model only in a random fashion. More specifically, significance indicates that the full model with all of the variables included explains a greater proportion of the variability in the outcome than the parsimonious model.

Cardiometabolic Risk Factors Assessed during a 36-Month Period.
Baseline values for body composition and cardiometabolic outcomes are summarized in Table 1. Body composition measurements, including androidal fat mass, did not change significantly during the course of the study. The isoflavone treatment did not have an effect on any of the body composition outcomes. Treatment compliance was verified using urinary isoflavone concentrations. Compliance was excellent [4]. The results of the longitudinal analysis showed that each cardiometabolic outcome model was highly significant, with overall P-values ≤ .0001 for all but the LDL-C model (P = .002). Significant covariates for each analyte are shown in Table 2. Androidal fat mass and site (ISU versus UCD) were consistent predictors of all analytes (36 month data) assessed, including lipids/lipoproteins, glucose, and uric acid. In general, androidal fat mass was typically the strongest covariate (positive) in each model (except the model with HDL-C as the outcome, where it was significantly and negatively associated). Time point also emerged as a significant covariate in the HDL-C and glucose models, indicating that the median concentrations of these analytes increased with time; however, the analyte concentrations at subsequent time points were not significantly different compared to baseline. The median glucose concentration was 85 mg/dL at baseline and 88 mg/dL at 36 months. The median HDL-C concentration was 64 mg/dL at baseline and 65 mg/dL at 36 months. Family history of CVD had a significant association with glucose: women who were unaware of their family history ("do not know," n = 9), taking into account key factors in the model, had on average a 6.8 mg/dL higher glucose concentration compared with women who indicated that they had a positive family history. Neither isoflavone treatment, TLMP, nor HEI influenced any of the 36 month analytes.

Cardiometabolic Risk Factors Assessed during a 12-Month Period.
Similar to the 36 month outcomes, each 12 month outcome model was highly significant (overall model P-values ≤ .0001), except for Hcy (P = .23, data not shown). Androidal fat mass was positively associated with each outcome (Table 3) and was the strongest covariate (positive) in each model. The parameter estimate for site was negative, indicating that the women from UCD had lower values than the women at ISU. In addition, TLMP (P = .0020) and the TLMP-by-site interaction (P = .0021) contributed significantly, and a positive family history of CVD contributed marginally (P = .070), to fibrinogen concentration. As TLMP increased, the difference in fibrinogen concentration between the sites decreased. Glucose and time were significant covariates in the insulin model. As expected, a higher glucose concentration was associated with a higher insulin concentration.
The unadjusted mean insulin concentration was significantly greater (2.86 µU/mL; P ≤ .0001) at 12 months compared with baseline. Neither isoflavone treatment nor HEI emerged as a significant predictor in any of the 12 month models.

Discussion
The main objective of this study was to identify significant contributors to cardiometabolic risk in primarily normal and overweight (BMI < 30) healthy postmenopausal women. In agreement with our hypothesis, androidal fat mass was the strongest and most consistent predictor of all cardiometabolic outcomes (except for Hcy) examined in this study. Our participants, as a group, showed no significant changes in body composition measurements, including androidal fat mass, during the course of the study. It should be noted, however, that the women were instructed to maintain their weight throughout the course of the study by following their usual diet and physical activity patterns, and hence, we did not expect to document overall or androidal fat mass change from baseline to 36 months. Similarly, with the exception of insulin, none of the cardiometabolic risk factors changed significantly between baseline and the time of final assessment (either 12 mo or 36 mo). Nevertheless, strong associations between androidal fat and cardiometabolic risk factors suggested that even a small increase in androidal fat mass may have considerable health consequences. Indeed, Biggs et al. [16] reported that the incidence of type 2 diabetes was 70% higher (a hazard ratio of 1.7) in adults 65 years of age and older who gained at least 10 cm in waist circumference over several years compared with those who remained within 2 cm of their baseline waist circumference. Although the Biggs study did not specifically target postmenopausal women, it provided some insight into the magnitude of the impact of central adiposity on the incidence of type 2 diabetes. Certain factors such as duration of menopause, isoflavone treatment, and diet quality that are recognized in the literature as influential with respect to cardiovascular health did not emerge as significant predictors of the cardiometabolic outcomes examined in this study. Some of our findings are consistent with previous reports. For instance, DeNino et al. [17] found that the relationship between age and blood lipids in nonobese women was abolished after controlling for visceral fat. Similarly, in our study, TLMP did not emerge in any model (except for fibrinogen) as a significant covariate, because time point took precedence over TLMP. Results also suggested that androidal fat mass in combination with other factors predominated in these cardiometabolic risk models. On the other hand, our findings for isoflavone treatment and dietary quality conflict with some of the previously published reports. The lowering effect of soy protein rich in isoflavones on blood lipids/lipoproteins, albeit modest (≤6% reduction), has been documented [18,19]. The effect is more profound in hypercholesterolemic individuals and in males. Similarly, the effect of diet quality on CVD outcomes is well known: a healthier diet (high in fiber and low in fat and sodium) is associated with a lower risk of CVD [20,21].
We included both a HEI score as a measure of diet quality and isoflavone treatment in our analysis. None of the models retained either covariate. The lack of association between HEI and cardiometabolic outcomes in our study could be in part explained by a majority of women who had relatively healthy diets (median HEI of 67 out of 100). Very few studies have examined the association between HEI and cardiometabolic outcomes with mixed results. For instance, Kant and Graubard [22] reported that HEI emerged as a negative predictor of serum Hcy, CRP and plasma glucose (P < .05), whereas Fung and colleagues [23] did not find an association between HEI and CRP. In a recent study of 125 multiethnic overweight and obese women in early postpartum, the HEI scores were negatively associated with LDL-C and total cholesterol and positively related to HDL-C after adjustment for energy intake, body weight, and lactation status [24]. Based on the results of our study, HEI does not appear to be a predictor of cardiometabolic risk factors. On the other hand, the effects of dietary factors as well as isoflavone treatment on cardiometabolic outcomes may be mediated by androidal fat mass. We previously reported that soy isoflavone treatment for 12 months did not exert an effect on body composition, including androidal fat, in our sample of women [25]. With respect to diet, Fox et al. [26] determined that premenopausal women who participated in a 24 week diet and/or exercise program showed reductions in weight (∼7 kg) and total percentage body fat (although it remained greater than 35% for all groups), but no significant improvements in blood lipids, glucose, or insulin concentrations. Further analysis revealed a lack of change in the waist-to-hip ratio, which in turn indicated that body fat distribution was not influenced by the intervention. To summarize, dietary interventions that are not sufficiently potent to reduce androidal fat mass are not likely to produce changes in cardiometabolic outcomes. In addition to androidal fat mass, the other most common independent predictors of cardiometabolic outcomes in our study included time and site. Although concentrations of HDL-C and glucose did not change significantly during the study, both analytes showed an upward trend from baseline to 36 mo. The effect of time could be related to biological and/or behavioral factors that were not included in our models. In fact, TLMP was no longer significant when time was included, indicating that time was more important than TLMP. The site variable also emerged as a significant covariate in all models: compared with women at ISU, women at UCD had lower concentrations of triacylglycerol, HDL-C, glucose, insulin, HOMA, CRP, and fibrinogen and higher concentrations of LDL-C and uric acid. The site effect may be in part explained by baseline characteristics: ISU women were slightly (albeit not significantly) heavier and younger compared with UCD women. It is also quite possible that the site effect was due to some inherent differences between the two geographic locations. We did not document a relationship between androidal fat mass, as well as other factors, and Hcy likely because the women in our study were well below the clinical cut-off of 15, and Hcy values did not indicate great variability. Some of the strengths of our study were that we followed a relatively large number of women and monitored longitudinal changes (baseline to 12 or 36 mo) in cardiometabolic risk factors. 
Limitations were that these women were free-living (both a strength and limitation), lacked ethnic diversity, and underwent infrequent measurements (yearly). Women in this study were mainly nonobese based upon the BMI definition. However, BMI does not adequately reflect body composition. Thus, some of our women may have been identified as obese using other criteria, such as percentage body fat (35-40% = overweight, >40% = obese [27]). In conclusion, although cardiometabolic outcomes examined in this study were not affected by isoflavone treatment, each outcome (except for Hcy) had a strong, significant association with androidal fat mass. Thus, even small changes in androidal fat are likely to alter the cardiometabolic profile in healthy postmenopausal women. content: Alekel, Bhupathiraju, Hofmann, Matvienko, Reddy. Final approval of paper: Alekel, Bhupathiraju, Hofmann, Matvienko, Perry, Reddy, Ritland, Van Loan. The SIRBL study team would like to thank all of our participants, since without their dedication, our study could not have been completed. The authors would like to acknowledge our phlebotomists and students (graduate and undergraduate alike) who reported early and steadfastly for testing at our clinic sites. The authors thank the James R. Randall Research Center, Archer Daniels Midland Company (Decatur, IL) that supplied free-of-charge the ingredients, using certified good manufacturing procedures, for the treatment tablets (Novasoy®), as well as Atrium Biotechnologies Inc. that compressed the ingredients into tablets. The authors thank GlaxoSmithKline (Moon Township, PA) for donating the calcium and vitamin D supplements (Os-Cal). The
2017-07-17T02:53:26.759Z
2010-12-19T00:00:00.000
{ "year": 2010, "sha1": "2ba35e49f2962a6bc2dc9060777fe4288971941a", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/crp/2011/904878.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "998381e17c0d64473fbfc35274c7a8a1e1015cf9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6164865
pes2o/s2orc
v3-fos-license
Histological identification of muscular Sarcocystis: A report of two cases Sarcocystis is an apicomplexan protozoan belonging to the same phylum as Toxoplasma. The parasite encysts inside the striated muscles of its intermediate host. Humans are accidental hosts, infected by eating food or water contaminated with oocysts or sporocysts from an infected definitive host. The infection is increasing in Southeast Asia and may be overlooked in histological sections if one is not aware of the histomorphological features. The size and shape of the bradyzoites and the appearance of the cyst wall are the reliable features to distinguish this parasite from other parasites of the same phylum. The incidence of human infection is rising in Southeast Asia, and histopathology is an important method for the diagnosis of muscular infection. It is important to recognize the histomorphology of this parasite and its differentiation from similar parasites.

INTRODUCTION
Sarcocystis gets its name from 'sarcomere' as it was first reported as a thread-like cyst in the striated muscles of a house mouse and was initially named Miescher's tubules after its discoverer. [1,2] It has now been recognized as a protozoan of the phylum Apicomplexa, as these protozoa possess an apical complex structure involved in penetrating the host cells. Other apicomplexan protozoans include Toxoplasma, Babesia, Plasmodium, Cryptosporidium parvum, and Isospora belli. [3,4] The parasite is present widely in livestock and occurs in a large number of mammals and birds. [3] There are a large number of morphologically similar species, which occur in a range of intermediate and definitive hosts. [3] The incidence of human infection is rising in Southeast Asia, and histopathology is an important method for the diagnosis of muscular infection in an intermediate host. [5,6] Hence the importance of recognizing the histomorphology of the parasite and its differentiation from similar parasites. The paucity of reports from India may be due to a lack of recognition of the parasite on histological sections.

Case 1
A 20-year-old boy from Patna (Bihar, India) presented with a lump in the left arm for about 2-3 months. The lump was firm, mobile, and slightly tender but not painful. The ill-defined lump measured up to 4 cm in greatest dimension. The clinical impression was that of an old, healed rupture of a muscle or tendon. No radiological study was performed and a muscle biopsy was done.

Case 2
A 50-year-old man from Varanasi (Uttar Pradesh, India) underwent right hemimandibulectomy and modified neck dissection (type II) for ulcerated squamous cell carcinoma in the lower right alveolar sulcus. The patient had multiple small lymph nodes in the neck, the largest measuring 1 cm in diameter, but never complained of pain or stiffness in the neck. He was asymptomatic prior to the oral complaints.

The parasite was not identified on gross examination, which showed unremarkable pieces of skeletal muscle. Case 1 showed 4-5 encysted worms in one plane of section from the muscle, and these were the only significant abnormality identified.
In case 2, the parasite was seen on microscopy as an additional incidental finding in one of the random sections from the middle one-third of the sternocleidomastoid muscle. Sections from the skeletal muscle of both cases showed oval cysts measuring 1-2 mm in length. The cysts were lined by an outer wall, which on higher magnification showed characteristic striations and hair-like projections or radiating processes known as cytophaneres [Figure 1]. The cyst wall, however, was thin and smooth in case 2 [Figure 2]. These striations could be The cyst typically showed internal septations in case 2. The septations were not clearly identified; however, the formation of groups by the bradyzoites suggested thin irregular septations not identified at this magnification [Figures 3 and 4]. Cysts from both cases showed crowded hematoxylin-stained organisms like a "swarm of worms" [Figure 2]. Although these bradyzoites were identifiable on high magnification, examination under the oil immersion lens showed their 'banana shape', somewhat resembling gametes of Plasmodium falciparum, with which the organism shares its taxonomic phylum. Metrocysts, which appear larger than bradyzoites, were not seen in either case. [7] Typically, neither case showed any myocyte necrosis or significant inflammation.

DISCUSSION
The exact incidence of human infestation is not known, but the prevalence was found to be 21% in an autopsy study from Southeast Asia. [5,6] Although these infections have been considered to be incidental findings, some investigators have suggested that sarcocystosis may be emerging as a significant food-borne zoonosis in Southeast Asia. [6] The rarity of detection in histological specimens, despite such high prevalence in Southeast Asia, is probably due to the lack of clinical disorder (like in case 2), and consequently many infections go undetected. A similar association with squamous cell carcinoma in the head and neck region was observed by Larbcharoensub et al. [8] in a recent report.

Histopathological differentials include the commoner Toxoplasma and the more recently recognized Neospora species. Toxoplasma is also an apicomplexan encysted parasite found as an end-stage accidental infestation of humans from a feline definitive host. Its cysts are smaller (<1 mm) than those of Sarcocystis, which can even be large enough to be noticed with the naked eye as a thread-like organism. [3] The cyst wall of Toxoplasma is thin and does not show the striations seen in one of our cases. Although this is a useful feature when present (like in case 1), variability in staining amongst various species is expected. Histochemical (periodic acid Schiff and phosphotungstic acid hematoxylin) stains can be helpful in cases where hematoxylin and eosin stained sections do not reliably demonstrate the presence or absence of these striations.

The appearance of the cyst wall and the size and appearance of the bradyzoites were found to be the most useful features in differentiating the two organisms. [9] The bradyzoites of Toxoplasma appear as dot-like structures on light microscopy at ×400 magnification, while the bradyzoites of Sarcocystis are larger and appear as crescentic 'banana'-shaped structures at ×400 magnification, like in our cases.
The cyst wall is also thicker in cases of sarcocystis, and some species show striations (cytophaneres), while that of toxoplasma is thin and the details are not discernible on light microscopy. Neospora is a more recently described parasite and has been recognized in neural tissue. The encysted stage is rather indistinguishable from toxoplasma. [3] Humans carrying muscular infections of Sarcocystis may be asymptomatic, as in case 2, where the parasite was detected as an incidental finding in a mandibulectomy done for squamous cell carcinoma of the lower alveolus. [8] However, patients may present with subcutaneous swellings, as in case 1. Other clinical symptoms include musculoskeletal pain, fever, rash, cardiomyopathy and bronchospasm. [1] Histological sections from both our cases did not show any inflammatory response, as has often been observed in the literature. [1] This is due to the encysted nature of the parasite.
Figure 1: A cyst of Sarcocystis. This longitudinal section demonstrates the wall showing radial processes on the outer surfaces (cytophaneres). The internal mass is separated into many compartments. (Hematoxylin and Eosin stain, ×400)
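The differentiating features discussed above (cyst size, wall thickness and striations, bradyzoite shape at ×400 magnification) amount to a simple decision rule. The minimal Python sketch below encodes them purely as an illustration; the field names and the two-out-of-three scoring are assumptions made for this example, not a validated diagnostic algorithm.

```python
from dataclasses import dataclass

@dataclass
class CystFindings:
    """Light-microscopy features of an intramuscular cyst (illustrative fields)."""
    cyst_length_mm: float    # Sarcocystis cysts may reach 1-2 mm or more
    wall_striations: bool    # cytophaneres / radial striations on the cyst wall
    thick_wall: bool         # wall thickness discernible at light microscopy
    bradyzoite_shape: str    # "banana" (crescentic) or "dot"

def favour_sarcocystis(f: CystFindings) -> bool:
    """Return True when the features described in the text favour Sarcocystis
    over Toxoplasma: a large cyst, a thick and/or striated wall, and crescentic
    ('banana'-shaped) bradyzoites."""
    points = 0
    if f.cyst_length_mm >= 1.0:     # Toxoplasma cysts are typically < 1 mm
        points += 1
    if f.wall_striations or f.thick_wall:
        points += 1
    if f.bradyzoite_shape == "banana":
        points += 1
    return points >= 2

if __name__ == "__main__":
    case1 = CystFindings(cyst_length_mm=1.5, wall_striations=True,
                         thick_wall=True, bradyzoite_shape="banana")
    print("Case 1 favours Sarcocystis:", favour_sarcocystis(case1))
```

Run on findings resembling case 1 (a large striated cyst with banana-shaped bradyzoites), the rule favours Sarcocystis, mirroring the reasoning applied in this report.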
2018-04-03T05:01:02.327Z
2012-10-01T00:00:00.000
{ "year": 2012, "sha1": "a6d67f7509a1e1032844208eb953886acf7109cc", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0377-4929.107813", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "a6d67f7509a1e1032844208eb953886acf7109cc", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
231799247
pes2o/s2orc
v3-fos-license
Ultrasound/computerized tomography guided fine needle aspiration cytology of liver lesions Introduction: Malignancy in the liver, primary or metastatic, is usually inoperable at the time of diagnosis and as such, portends an ominous prognosis. A diagnostic modality such as FNAC, which offers accuracy with minimal complications and requires minimal intervention at low cost, warrants consideration early in the investigation sequence. Objectives: The study has been undertaken to evaluate the diagnostic efficacy of ultrasonography (USG)/ computerized tomography (CT) guided FNAC in the diagnosis of liver lesions, to correlate FNAC diagnosis with histopathology wherever possible, to correlate FNAC diagnosis with radiology diagnosis and to study the cytological patterns of liver lesions. Materials and Methods: A prospective study of 60 patients was conducted between November 2008 and October 2010. After obtaining the detailed clinical and radiological data, patients were subjected for FNAC under USG or CT guidance. Results: Cytodiagnosis of 60 cases were categorized into 6 (10%) Non-neoplastic lesions, 53 (88.33%) malignant neoplastic lesions and 1 (1.66%) as suspicious of carcinoma. The different neoplastic lesions were 21 (39.62%) hepatocellular carcinoma, 21 (39.62%) metastatic adenocarcinoma, 6 (11.32%) metastatic poorly differentiated carcinoma and 5cases (9.43%) of unclassified malignancy. Histopathological correlation was available in 11 malignant neoplastic lesions which confirmed the diagnosis. Nineteen cases of multiple lesions described by ultrasonography and suggested differential diagnosis of metastasis and HCC proved to be metastatic in 10cases (52.63%) and HCC in 9 cases (47.36%) by cytological examination. Overall diagnostic accuracy of the FNAC of liver to detect malignant lesions was 93.75%. Conclusion: USG/CT guided FNAC of liver permits the categorization of more frequent non-neoplastic lesions and neoplastic primary and secondary metastatic malignancy in a simple and rational manner which is helpful for the management of hepatic lesions. Ultrasound guided fine needle aspiration of liver has a promising role to play in the diagnosis and classification of liver disease than ultrasonography alone, as it requires greater degree of precision to reach diagnostic accuracy. Introduction The various types of space occupying lesions of the liver are metabolic, infectious and neoplastic (malignant or benign). They present radiologically either as a focal lesion or as a diffuse involvement. 1 Although a differential diagnosis can be made clinically, biochemically and radiologically, histopathological examination of the tissue from the lesion is often required for a definite diagnosis. 1 Most of these lesions are easily assessable by fine needle aspiration cytology (FNAC). Further, it is important to establish if the lesion is malignant, and in that case, whether primary or metastatic in nature. An FNAC aids in making a quick diagnosis, thus saving valuable time and enables early initiation of treatment. 2 FNAC is gaining popularity as a diagnostic technique for space occupying lesions of the liver, because it is quick, inexpensive and minimally invasive, when compared to core-needle or open biopsy. 1,3 However, blind aspiration has the inherent drawback of poor lesion localization and lower diagnostic accuracy. This has led to the usage of various radiologically guided FNAC, which is done either using ultrasonography (USG) or computerized tomography (CT). 
1,4 The present study has been conducted to evaluate the efficacy of ultrasound guided percutaneous FNAC in the diagnosis of liver lesions and to assess the feasibility of using this technique as a routine diagnostic procedure for liver lesions. Aims and Objectives The objectives of the study were to categorize the lesions of liver observed at FNAC -Inflammatory or non-inflammatory, malignant or benign, primary or secondary, to establish the various cytological patterns of the lesion of liver, to correlate the radiological findings with cytology, to correlate histopathologically in available cases and to correlate findings of radiology with combined cytology and imaging alone. Materials and Methods The present study was conducted on subjects with a radiologically confirmed hepatic mass, who were inpatients as well as those visiting the outpatient department of Victoria hospital and Bowring and Lady Curzon hospital from November 2008 to October 2010. After obtaining a detailed clinical and radiological data, the patient was subjected to FNAC under USG or CT guidance. Written informed consent was obtained from the patient before undertaking the procedure. The study was approved by the Institutional Ethics Committee. The area (based on the clinical examination and radiological finding) was sterilized with spirit. The length of the needle used was 15-20cm. A 22-23 cm G disposable needle was fixed on a 10 mL disposable syringe that was pre-fixed to the FNAC gun. Under USG guidance, the needle was introduced and its position was checked before aspiration. The tissue specimen was collected, expressed on to a glass slide and then, spread. Two dry smears and two smears fixed in alcohol 95% were prepared. Excess specimen was fixed in formalin and preserved for cellblock, which were later processed and sections taken for histopathology examination. Alcohol fixed smears were stained with Papanicolaou (PAP)/ hematoxylin and eosin (H&E) and dry slides were stained with May-Grünwald Giemsa (MGG) /Leishman's stain. Statistical data of age and sex, cytological diagnosis, histopathological correlation, radiological correlation and diagnostic accuracy was studied. Results A total of sixty cases were included in the study. On all these subjects FNAC of the liver was carried out under USG/CT guidance, and cytological analysis was done. Of these, core needle biopsy/cell block preparations were available for histopathological examination in 16 cases. A correlation between FNAC and histopathological study findings was done. The age of the patients ranged from 28 to 90 years with a mean age of 57.28 years. The majority were male patients in their fifth decade. Males accounted for 43 cases (72%) and females 17 cases (28%) with a male to female ratio of 2.5:1. Pain abdomen in the right hypochondrium was the most common presenting symptom. The other frequently reported symptoms were mass per abdomen, loss of appetite, loss of weight, vomiting, jaundice and fever. Of the 60 cases, in 53 (88.83%), the lesion was neoplastic, in 6 (10.16%) non-neoplastic and in one (1.66%), it was inconclusive, but suspicious of carcinoma. The diagnosis based on FNAC findings are described in (Table 1). Ultrasonography showed a solitary lesion in all subjects, with a maximum size of 9 x 8 cm. Aspiration yielded thick purulent material. The smear showed plenty of neutrophils accompanied by necrotic cells and debris. Few macrophages, reactive fibroblasts and degenerating hepatocytes were also present. 
In one subject who was diagnosed to have granulomatous hepatitis, there was moderate hepatomegaly. Ultrasonography showed diffuse parenchymal lesion. The smears were moderately cellular and showed epithelioid histiocytes in singles and small tight clusters along with foreign body and Langhan's-type of giant cells. A few benign hepatocytes and bile duct epithelial cells were present. Background showed hemorrhage. Neoplastic Lesions: Malignant lesions of the liver constituted 53 cases (89.33%) of the 60 aspirates. Primary, metastatic and unclassified malignancies constituted 21 (39.62%), 27 (50.94%) and 5 (9.43%) cases respectively. Of the malignant tumors, majority were metastatic (50.94%), followed by hepatocellular carcinoma (HCC; 39.62%). Of the 21 cases of HCC diagnosed on cytologic examination, there were 17 males (80.95%) and 4 females (19%) with a male to female ratio of 4.2:1. The age of the patients ranged from 31 to 90 years with a mean age of 59.71 years. The patients presented with pain abdomen in the right hypochondrium, mass per abdomen, loss of appetite and loss of weight. Ultrasonographically, 12 cases showed solitary lesions and 9 showed multifocal lesions. The largest lesion measured 14 x 14 cm and the smallest measured 5x 4 cm. Cytologically, HCC was graded into welldifferentiated HCC (WDHCC), moderately differentiated HCC (MDHCC) and poorly differentiated HCC (PDHCC). Out of the 21 cases, 10 cases were diagnosed as WDHCC; smear showed increased cellularity with cells resembling normal hepatocytes. The tumor cells were arranged in thick trabecular, acinar, transgression of blood vessels in cell clusters, bare atypical nuclei, large polygonal cells with abundant eosinophilic granular cytoplasm, increased nucleus to cytoplasm ratio, central round nucleus and intranuclear inclusions. Further, several naked nuclei were noted (Fig. 1a & 1b). Five cases of MDHCC also had many features of WDHCC. It was found that endothelial rimming or transgressing of cell clusters, eccentric nuclei, multinucleation, multiple nucleoli and macronucleoli were associated more with this type of HCC (Fig. 1c & 1d). Six cases were diagnosed as PDHCC showed cells in sheets, small groups and singles. Transgressing endothelium was seen. Anisocytosis, anisonucleosis, irregular nuclear chromatin, hyperchromasia, multiple nuclei, macronuclei and bare atypical nuclei were seen (Fig. 2a & 2b). Twenty one cases of metastatic adenocarcinoma were diagnosed in our study with 13 males and 8 females with a male to female ratio of 1.6:1. The age ranged from 42 to 80 years with a mean age of 55.42 years. The primary tumor was colorectal in 5, ovary in 1, breast in 2, pancreas in 1, gall bladder in 1, stomach in 2 and cervix in 2 patients. In 7 patients, the primary tumor was not established. Similar to HCC, the common symptoms were pain abdomen followed by mass per abdomen, loss of appetite, loss of weight and fever. USG showed multifocal lesions in 25 patients and solitary lesion in 2 patients. The largest lesion measured 8 x 6 cm and smallest measuring 2x 2 cm. The smear studied revealed high cellularity. Cells were columnar, cuboidal or round to oval and arranged in flat monolayered sheets, palisade forms, acinar pattern and in singles having vacuolated or granular and eosinophilic cytoplasm. The cells showed mild to moderate anisonucleosis with central or eccentrically placed nucleus and fine to coarsely dispersed chromatin pattern. 
Some cases showed multinucleation (2-3 nuclei) irregular hyperchromatic nuclei with prominent nucleoli. Altered N: C ratio was noted (Fig. 2c & 2d). Mitotic figures were often present. In many cases normal hepatocytes were present and inflammation, necrosis and fibrosis were prominent in some cases. There were 6 cases of metastatic poorly differentiated carcinoma (PDC) in our study. The FNA smears were moderate-to-highly cellular with the cells arranged in dicohesive sheets, clusters, palisades and in singles. The nuclei showed mild to marked pleomorphism with finely-to-coarsely granular chromatin. Normal and reactive hepatocytes were also noted. One case of liver aspirate was suspicious of malignancy as smear showed well differentiated hepatocytes and absence of necrosis; it was difficult to differentiate into hpatocellular adenoma or HCC. Cyto-Histopathological Correlation: In 16 cases, the cytological diagnosis was correlated with core needle biopsy/cell block histopathological diagnosis. The histopathological diagnosis was taken as standard for comparison ( Table 2). Of the 5 cases reported as non-neoplastic lesions on FNAC, histopathological examination confirmed the non-neoplastic nature in all cases. Of the 10 cases diagnosed as malignant on cytological examination, all the cases were reported as neoplastic (Fig. 3 a-d). In one case which was inconclusive on cytology, smear showed well differentiated heptocytes and absence of necrosis. It was difficult to differentiate into hepatocellular adenoma or HCC. On histopathology, the features were conclusive of HCC. Correlation of ultrasound findings with USG-guided FNAC Findings: Ultrasonographic findings of liver were correlated with cytological findings. Five cases of solitary lesions described by ultrasonography as abscess were proved as such in cytology in 100% cases. A single case of diffuse parenchymal lesion revealed cytological findings as granulomatous hepatitis. Nineteen cases of multiple lesions described by ultrasonography with a suggested diagnosis of metastasis or HCC proved to be metastatic in 10 cases (52.63%) and HCC in 9 cases (47.36%) by cytological examination. A total of 15 cases of multiple lesions which were suggested to be metastatic lesions in ultrasonography were proven as such in cytology. Ten cases of solitary lesions suggested as HCC by ultrasonography was proven to be the same. Of the nine cases of solitary lesion suggested as neoplastic on ultrasonography, 5 cases were of unclassified malignancy, 2 cases of metastasis and 2 cases of HCC on FNAC. Discussion Lundquist was the first to report the reliability of FNAC in the diagnosis of intrahepatic malignant tumors in a large study population of 2611 subjects. 5 USG-guided FNAC offers accuracy with minimal invasiveness and cost-effectiveness, without major complications. 6 In our study, all 60 cases were subjected to USG-guided FNAC. The age of the patients ranged from 28 to 90 years with a mean age of 57.28 years. The males accounted for 43 cases (71.66%) and females 17 cases (28.33 %) with a male to female ratio of 2.2:1. In a study by Dhameja et al., 1 the age of subjects ranged from 2 to 70 years, and the male to female ratio was 4:1. In another study by Swami et al., 7 the age of subjects ranged from eight months to 90 years and the male to female ratio was 2:1. In our study, there were a total of 6 cases (10%) of non-neoplastic lesions, 53 cases (89.33%) of neoplastic lesion and 1 case (1.66%) which was suspicious of carcinoma. 
Similar findings of a high incidence of malignant lesions were seen in the studies done by Rasania et al. 6 and Rosenblatt et al. 9 In the study by Swamy et al., 7 neoplastic lesions (68.06%) were more common than non-neoplastic lesions (30.56%). However, in the study by Dhameja et al., 1 94.7% of cases were neoplastic lesions while 3 (5.3%) were non-neoplastic. Of the neoplastic lesions, we found no benign lesions, similar to the findings of the study by Dhameja et al. 1 However, Rosenblatt et al. 9 report 47 of 59 FNACs to be malignant. Rosenblatt et al. 9 observed 5 HCCs (8.47%) and 47 (71.8%) metastatic carcinomas. In the present study there were 21 HCCs (35%), 27 metastatic tumors (50.94%), 5 (8.33%) cases of unclassified malignancy and one case (1.66%) suspicious of carcinoma. The majority of cases had metastasis, 27 (50.94%), followed by HCC, 21 (39.62%), similar to the study done by Rasania et al., 7 where there was a higher prevalence of metastatic lesions in the liver (70.4% of cases), while the study conducted by Kuo et al. 10 showed a higher prevalence of HCC (81.64%) among the malignant liver lesions. We classified the HCC cases into WDHCC, MDHCC and PDHCC similar to the description by Rasania et al. 6 as grades I, II, and III. In the present study, WDHCC and PDHCC (30.09% each) were more common, in comparison to the study by Rasania et al., 6 which showed a higher number of MDHCC or Grade II cases (56.2%). Previous studies have shown that about 75% of the FNAC cases are metastatic cancers. 2 In our study 50.94% (27 of 53 cases of malignancy) were metastatic cancers. In the study by Barbhuiya et al., 2 the most common primary tumors were GIT adenocarcinoma (44.2%) followed by gallbladder adenocarcinoma (15.9%). In our study, colon adenocarcinoma was the most common source of liver metastasis. Metastatic adenocarcinoma was the most common metastatic malignancy in the present study (Table 3). Similar observations have been made by Rasania et al. 6 and Kuo et al. 10 Although imaging techniques (USG and CT) have helped greatly with the early and correct diagnosis of liver abscess, the appearances are often non-specific. 11 There is some overlap between the US and CT features of liver abscesses, HCC and metastases. Two situations may occur. (1) Tumor masses, primary or secondary, undergo extensive necrosis, with the resultant radiologic image of the cavitary neoplasm mimicking abscesses; (2) abscesses are accompanied by proliferative reactive changes, making radiologic differentiation from a neoplastic process almost impossible. Here aspiration cytology or FNAC plays an essential complementary role. 11 In the present study, 5 cases of pyogenic liver abscesses were seen. The smears showed plenty of neutrophils and nuclear debris along with a few degenerating hepatocytes. Histopathology was available for all cases, which confirmed the cytological diagnosis. There was one case diagnosed as granulomatous hepatitis in the present study (1.69%), in which the smears showed epithelioid histiocytes in singles and small clusters along with multinucleated giant cells and a few benign hepatocytes. Caseous necrotic material or acid-fast bacilli were not seen in our case. Studies by Herszenyi et al. 13 and Shah et al. 14 have demonstrated the sensitivity of FNAC in diagnosing hepatic malignancy to range from 75.34 to 93%.
Other studies have reported a specificity ranging from 69% to 100%. 2,9,10,[15][16][17] In our study, we observed a 93.75% accuracy in the diagnosis, as there was one case which was inconclusive on FNAC and was proven to be HCC on histopathology. The high diagnostic accuracy of 93.75% for liver lesions in our study upholds the unquestionable value of guided FNAC as a mandatory diagnostic procedure in the assessment of hepatic lesions. Thus, FNAC is a valuable method that allows rapid diagnosis. However, its sensitivity depends on the site and depth of the lesion and the skill of the person performing the procedure, as well as the experience of the pathologist. We did not observe any complications in our study, similar to the observations by Ramdas and Chopra 16 and Barbhuiya et al. 2 However, in studies by Mingoli et al. 18 and Patel and Shapiro, 19 complications such as fatal bleeding in a case of chronic liver disease, needle-tract tumor seeding and biliary-venous fistula have been reported. Lundquist 5 reported only one significant complication, an intrahepatic hematoma, among the 2611 cases studied. Conclusion Ultrasound-guided fine needle aspiration cytology of the liver is a safe, simple, cost-effective and accurate method for the cytological diagnosis of hepatic lesions. In the present study, USG/CT-guided FNAC was useful in distinguishing non-neoplastic from neoplastic lesions, and further assessing the type of neoplastic lesion. This is particularly helpful because malignant tumors of the liver are very common, and an early diagnosis is quintessential for improving treatment outcomes. Therefore, USG/CT-guided FNAC is a promising technique for early diagnosis of hepatic lesions, with a very high diagnostic accuracy.
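As a sanity check on the arithmetic behind the accuracy figure discussed above, the short sketch below recomputes overall diagnostic accuracy from the cyto-histopathological correlation (16 correlated cases, of which one was inconclusive on cytology); the function and variable names are illustrative only.

```python
def diagnostic_accuracy(concordant: int, total_correlated: int) -> float:
    """Overall accuracy = concordant cytology diagnoses / cases with
    histopathological correlation, expressed as a percentage."""
    return 100.0 * concordant / total_correlated

if __name__ == "__main__":
    # 16 cases had core-needle biopsy / cell-block correlation; one FNAC was
    # inconclusive and later proven to be HCC on histopathology.
    print(f"Accuracy: {diagnostic_accuracy(15, 16):.2f}%")  # -> 93.75%
```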
2021-02-03T18:51:35.043Z
2020-12-15T00:00:00.000
{ "year": 2020, "sha1": "4b5f36186ec56e36d4e77bef43436ee4d0ee462c", "oa_license": "CCBYNCSA", "oa_url": "https://www.jdpo.org/journal-article-file/6638", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "95c15ca8c92c7811e7695141eeae961d5f660e43", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
204129044
pes2o/s2orc
v3-fos-license
Modeling the Architecture of Depolymerase-Containing Receptor Binding Proteins in Klebsiella Phages Klebsiella pneumoniae carries a thick polysaccharide capsule. This highly variable chemical structure plays an important role in its virulence. Many Klebsiella bacteriophages recognize this capsule with a receptor binding protein (RBP) that contains a depolymerase domain. This domain degrades the capsule to initiate phage infection. RBPs are highly specific and thus largely determine the host spectrum of the phage. A majority of known Klebsiella phages have only one or two RBPs, but phages with up to 11 RBPs with depolymerase activity and a broad host spectrum have been identified. A detailed bioinformatic analysis shows that similar RBP domains repeatedly occur in K. pneumoniae phages with structural RBP domains for attachment of an RBP to the phage tail (anchor domain) or for branching of RBPs (T4gp10-like domain). Structural domains determining the RBP architecture are located at the N-terminus, while the depolymerase is located in the center of protein. Occasionally, the RBP is complemented with an autocleavable chaperone domain at the distal end serving for folding and multimerization. The enzymatic domain is subjected to an intense horizontal transfer to rapidly shift the phage host spectrum without affecting the RBP architecture. These analyses allowed to model a set of conserved RBP architectures, indicating evolutionary linkages. INTRODUCTION Klebsiella pneumoniae is a Gram-negative bacillus. In spite of being part of the natural human and animal flora, K. pneumoniae is also the widespread cause of both nosocomial and community acquired infections. Since 2013 K. pneumoniae has been marked as a prominent member of the carbapenem-resistant Enterobacteriaceae (CRE), featured by a multidrug-resistant phenotype and labeled as a class of antibiotic-resistant bacteria for which novel ways of therapy are most urgent (Weiner et al., 2016;Calfee, 2017). As natural bacterial predators, bacteriophages have since long been proposed as promising alternatives to antibiotic therapy. The large majority of phages is highly specific with a host spectrum defined at the species/strain level. This high specificity necessitates the selection of a phage sur-mesure for a personalized treatment or the use of a phage cocktail that covers a broader host range. Major determinants of host specificity are the phage receptor binding proteins (RBPs) that mediate the initial contact with the receptor on the host cell envelope (Williams et al., 2008). This initial contact can be based on a direct binding of long tail fibers or shorter tailspikes to the cell surface receptor. Some RBPs possess a depolymerase activity to degrade bacterial exopolysaccharides comprising the capsule (CPS), lipopolysaccharides (LPS) or biofilm matrix (Majkowska-Skrobek et al., 2016Olszak et al., 2017). Interaction of RBPs with their cell wall receptors are essential to initiate the infection process (Andres et al., 2010;Broeker et al., 2018). The primary receptor targeted by RBPs of many Klebsiella specific phages is the thick polysaccharide capsule, which is a hallmark feature of K. pneumoniae. The capsule is a crucial virulence factor as it forms a physical barrier to some antibiotics and host immune mechanisms, enabling bacteria to avoid phagocytosis or complement-mediated killing (Cortés et al., 2002;Lee et al., 2017;Majkowska-Skrobek et al., 2018). 
Differences in sugar composition, the specific ratio of various sugar components as well as variation in the locus organization are the base to distinguish at least 79 capsular serotypes called K antigens and 134 capsular loci (KL) among Klebsiella species (Pan et al., 2015;Wyres et al., 2016;Wick et al., 2018). This capsular diversity correlates to a correspondingly high variation of Klebsiella phage RBPs that contain a specific polysaccharide-depolymerizing domain (Schmid et al., 2015;Latka et al., 2017). Such domains cleave the O-glycosidic bond of capsular polysaccharides following either a hydrolase or a lyase mechanism. Hydrolases (e.g., sialidases, rhamnosidases, levanases, dextranases, and xylanases) involve a water molecule for cleavage, whereas lyases [e.g., hyaluronate lyases (hyaluronidases), pectin/pectate lyases, alginate lyases, K5 lyases] cleave by β-elimination with introduction of new double bond (Davies and Henrissat, 1995;Sutherland, 1995;Pires et al., 2016). In spite of a high diversity in enzyme specificity and primary amino acid sequence, many known depolymerases contain an elongated, highly interwoven β-helical domain that forms the specific catalytic pocket. In addition, this β-helical domain contributes to a high protein stability in harsh environments (Yan et al., 2014;Majkowska-Skrobek et al., 2016). An overview of (experimentally confirmed) RBPs with depolymerase activity has been recently reported . Receptor binding protein with depolymerase activity have a modular structure with the enzymatic domain located in the central part ( Figure 1C). The C-terminus of the RBP may comprise a chaperone that assists in a proper folding and trimerization followed by autoproteolytic removal or an additional domain involved in host cell recognition (Weigele et al., 2003;Cornelissen et al., 2011;Schwarzer et al., 2012;Seul et al., 2014;Yan et al., 2014). Autocleavage of the C-terminal chaperone was also reported as a common feature among endosialidases and other tail spikes and tail fibers, necessary to increase the unfolding barrier and to trap the mature trimer in a more kinetically stable conformation (Schwarzer et al., 2007). The N-terminal dome-like domain attaches the RBP to the phage particle by a flexible connector. A modular architecture of RBPs allows for rapid evolution via horizontal gene transfer leading to host range modification. Whereas structural domains responsible for attachment to the tail apparatus are repeatedly present in many phylogenetically related phages, the domains for host cell receptor recognition/degradation are subjected to intense exchanges across phylogenetic borders. In addition, the latter RBP domains undergo further constant modification through vertical transfer and accumulation of mutations (Stummeyer et al., 2006;Barbirz et al., 2008;Leiman and Molineux, 2008;Schwarzer et al., 2012;Latka et al., 2017). The tail fibers of E. coli phage T7 and its relative K1F are type examples of a horizontal transfer of the C-terminal RBP domain. These tail fibers share a conserved N-terminal domain of ∼140 resides that anchors the tail fiber to the phage particle (Figure 1). However, T7 has a C-terminal domain that recognizes and binds lipopolysaccharide, whereas K1F produces an endosialidase specific for recognition and cleavage of E. coli K1 capsular polysaccharide (Steven et al., 1988;Stummeyer et al., 2005). Phages with a single RBP such as T7 and K1F are most frequently described in the literature. 
However, several phages belonging to Podoviridae have also acquired two different RBPs corresponding to a dual receptor specificity. E.g., K. pneumoniae podoviruses K5-2, K5-4, and KP32 possess two RBPs with a depolymerase domain with different enzymatic specificity (Hsieh et al., 2017;Majkowska-Skrobek et al., 2018). In the last decade, an increasing body of knowledge about the genetic and structural organization of RBPs of such bispecific phages has been acquired, particularly for different T7-like phages such as K1-5 and SP6 (Stummeyer et al., 2006;Leiman et al., 2007;Gebhart et al., 2017;Tu et al., 2017). These phages use a small trimeric adapter protein of approximately 300 amino acids, sharing a high N-terminal sequence identity to the T7 and K1F tail fibers (Figure 1). In addition, phage K1-5 encodes a K5 lyase (gp46) and an endosialidase (gp47), which are specific for the E. coli K5 and K1 capsule, respectively. CryoEM studies and bioinformatics suggest that the K5 lyase binds through a heptapeptide (MAKLTKP) to a specific site in the middle of the K1-5 adapter protein, whereas the second tailspike (endosialidase) binds to a different specific site in its C-terminal part through an undecapeptide (MIQRLGSSLVK) (Leiman et al., 2007). The heptapeptide, undecapeptide, and adapter sequences are conserved among other T7-like phages that infect different bacterial species and that carry two different RBPs on the phage particle (e.g., SP6), demonstrating a conserved mechanism for the attachment of two RBPs (Figure 1). Notably, domains recognizing the same host receptor can have highly similar amino acid sequences but can be incorporated into a different RBP architecture. For example, the K1F and K1-5 endosialidase domains specific to the K1 capsule show 72% identity with a coverage of 86%, but in phage K1F the endosialidase domain is present in a single RBP with an anchor domain, whereas in phage K1-5 the endosialidase is connected to the phage particle via an intermediate adapter protein. A homolog of the endosialidase domain of podovirus K1F is also present in the multivalent E. coli myovirus phi92 (EndoN92; 53% identity with a coverage of 83%), demonstrating exchange of the domain across members of the Podoviridae and Myoviridae families with highly different tail structures (Schwarzer et al., 2012, 2015).
FIGURE 1 | Anchor and anchor-branched receptor binding protein (RBP) complexes confirmed by structural experiments. (A) The modular genetic organization of RBPs in single (T7 and K1F) and double RBP systems (K1-5 and G7C phages). (B) Schematic modeling of four different RBP systems in the virion structure. The T7 tail fiber (gene 17, T7p52) and the K1F tail fiber (gene 17, CKV1F_gp36) have only an N-terminal anchor domain; K1-5 uses an adapter protein (gp37 with a T4gp10-like domain) interacting with the K5 lyase (gp46) and K1 endosialidase (gp47) via a conserved hepta- and undecapeptide, respectively; phage G7C produces an anchor-branched complex with one anchored RBP (gp66) having a T4gp10-like domain and the second RBP connected via a conserved peptide to the T4gp10-like domain. (C) Modular structure of the model tail spike of Salmonella phage P22 (PDB ID 2XC1), illustrating a typical modular structure of RBPs. An N-terminal dome-like structural domain, a central β-helical domain for host recognition and enzymatic activity and a C-terminal domain responsible for protein trimerization and/or receptor recognition are shown (Berman et al., 2000;Seul et al., 2014;Rose et al., 2018).
More recently, a different organization of two types of RBPs in a single phage particle has been reported based on structural, genetic and biochemical studies of the RBPs of the E. coli N4-like podovirus G7C (Prokhorov et al., 2017).
G7C carries two RBPs - a longer G7Cgp66 and a shorter G7Cgp63.1 protein. The specificity of the longer G7Cgp66 protein is unknown, but the shorter G7Cgp63.1 RBP was shown to deacetylate the O-antigen of E. coli 4S while leaving the backbone of the sugar intact. G7Cgp63.1 does not interact with the phage particle directly. Instead, it binds to G7Cgp66, which is attached to the phage particle with its N-terminal anchor domain (Figure 1B). The gp63.1-binding region of G7Cgp66 (residues 138-294) is homologous to subdomains D2 and D3 of phage T4 gp10. In phage T4, these subdomains of gp10 serve as an attachment site for two proteins - gp11, which interacts with the long tail fiber RBP or the short tail fiber RBP, depending on the state of the phage particle, and gp12, the short tail fiber RBP (Taylor et al., 2016). This protein complex represents a bona fide branched structure involved in the transmission of the signal of reversible host binding, culminating in irreversible binding, sheath contraction and DNA ejection. T4gp10-like domains are prevalent in RBPs of unrelated phages across the Podoviridae and Myoviridae, which may reflect their ancient evolutionary role in the transduction from reversible to irreversible binding during phage adsorption (Prokhorov et al., 2017). Interestingly, the T4gp10-like region of G7Cgp66 covers both subdomains D2 and D3 of T4gp10, to which T4gp11 and T4gp12 are attached. However, G7Cgp66 and G7Cgp63.1 form a 1:1 complex, suggesting that G7Cgp63.1 occupies only one of the two RBP binding sites on G7Cgp66. Notably, orthologs of G7Cgp66 in some G7C-like viruses do not contain a putative enzymatic domain but nevertheless retain the N-terminal particle-binding domain and the T4gp10-like domains. As such, their attachment apparatus becomes similar to the adapter system of phage K1-5. The N-terminal part of G7Cgp63.1 that interacts with the T4gp10-like domain of G7Cgp66 is also found at the N-terminus of other tail spikes that have a branched structure, such as CBA120 phage tail spike 1 (Chen et al., 2014) and other putative tail spikes of ViI-like phages (Adriaenssens et al., 2012). CBA120 encodes four tail spikes (TSP1-4), of which two (TSP2 and TSP4) are equipped with T4gp10-like domains D2 and D3. These domains provide side or off-axis attachment sites for TSP1 and TSP3. The conserved N-terminal part of TSP4 attaches the whole branched structure composed of four TSPs to the baseplate of the virion (Plattner et al., 2019). Klebsiella jumbo viruses may also have a multitude of RBPs, resulting accordingly in a broader host spectrum. The highest variation of depolymerases has been described for the jumbo phage K64-1, which is able to infect K. pneumoniae of 10 different capsular serotypes and for which 11 different polysaccharide depolymerases have been identified (Pan et al., 2017). Also, the jumbo vB_KleM-RaK2 phage encodes a multitude of putative depolymerases (Simoliūnas et al., 2013). Electron microscopy images of such jumbo phages typically show an elaborated tail fiber apparatus with a high structural complexity, for which structural insights are currently lacking.
In this study we present an extensive bioinformatic analysis of the structural and genetic organization of depolymerase-containing RBPs in Klebsiella phages. Next-generation sequencing technologies have recently led to a large number of sequenced phage genomes in public databases, including Klebsiella viruses (n = 97). In a large proportion of these phages (59/97; 61%) we could predict an RBP with depolymerase activity. The observed large diversity of depolymerase domains accommodates the high diversity of capsular serotypes among Klebsiella strains. Based on an integrated analysis, we propose diverse RBP architectures in Klebsiella phages. MATERIALS AND METHODS At first, Klebsiella phages were collected from the GenBank database (retrieved on 15.08.2018). A total of 59 phages were finally analyzed (Supplementary Table S1). From these phages, proteins annotated as tail fibers or tail spikes were analyzed with BlastP 1 (Altschul et al., 1990), Phyre2 2 (Kelley et al., 2015), SWISS-MODEL 3 (Bordoli et al., 2009;Bordoli and Schwede, 2012), HMMER 4 (Finn et al., 2011) and HHPred 5 (Zimmermann et al., 2018) to identify phages that encode RBPs with putative depolymerase activity (Supplementary Table S2). If neither a tail fiber nor a tail spike gene was found in the genome, we analyzed all genes located in the vicinity of annotated structural genes. BlastP (protein-protein Blast) was performed against the non-redundant protein sequences (nr) database using standard parameters (expect threshold: 10, word size: 6, matrix: BLOSUM62, gap cost: existence 11, extension 1, conditional compositional score matrix adjustment). HMMER was used in the quick search mode against Reference Proteomes, UniProtKB, SwissProt, and Pfam with significance E-values of 0.01 (sequence) and 0.03 (hit). For Phyre2 the normal modeling mode was used. HHPred homology detection and structure prediction was run using the PDB_mmCIF70 database and the following parameters [MSA generation method: HHblits uniclust30_2018_08; maximal no. ...]. Criteria for the prediction of putative depolymerase activity were (Supplementary Table S2): (1) the protein must be longer than 200 residues; (2) the protein must be annotated as tail fiber/tail spike/hypothetical protein in the NCBI database; (3) the protein must show homology to domains annotated as lyase [hyaluronate lyases (hyaluronidases), pectin/pectate lyases, alginate lyases, K5 lyases] or hydrolase (sialidases, rhamnosidases, levanases, dextranases, and xylanases) with a confidence of at least 40% in Phyre2, or the enzymatic domain should also be recognized by at least SWISS-MODEL, HMMER, or BlastP; (4) the length of homology with one of these enzymatic domains should span at least 100 residues; (5) a typical β-helical structure should be predicted by Phyre2. These RBP depolymerases are indicated without additional labeling in the tables. Proteins possessing experimentally confirmed depolymerizing activity were marked in the tables with (a). When an RBP only partially fulfilled the above-mentioned criteria, it was indicated with the label (b). These putative depolymerases that could only be predicted with a lower probability fulfilled criteria 1 and 2, but the confidence of the Phyre2 prediction was below 40% or only SWISS-MODEL, HMMER or BLASTP gave a positive prediction; in addition, the homologous domain only spans between 50 and 100 amino acids and no β-helical structure could be predicted with Phyre2 (for details see Supplementary Table S2).
TABLE 1 | First and second RBPs (proteins 2 and 3 in Figure 2) of KP32viruses; the different RBP systems of KP32viruses are visualized in Figure 2. For each RBP the accession number, length and BLASTp alignment characteristics (coverage, E-value, % identity, identity range) against the corresponding first or second RBP of phage KP32 are given, counted from the N-terminus (amino acid 1). Labels: (a) depolymerase activity experimentally verified (Hsieh et al., 2017; Majkowska-Skrobek et al., 2018); (b) RBP with a lower probability of depolymerizing activity; (c) RBP without enzymatic activity; (d) the N-terminal part of this protein is lacking under the corresponding accession code, yet the full protein is encoded by nucleotide positions 38449-40114 and 1-1388 of the full genome (NC_028688.1).
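The five prediction criteria listed above can be read as a screening filter. The following Python sketch expresses them as such; the record fields (e.g. phyre2_confidence, homology_span) and the example values are assumptions introduced for illustration, since the study applied these cut-offs across the BlastP, Phyre2, SWISS-MODEL, HMMER and HHPred outputs rather than through a script.

```python
from dataclasses import dataclass

@dataclass
class RBPCandidate:
    """Summary of the predictions collected for one candidate RBP (illustrative fields)."""
    name: str
    length: int                  # protein length in residues
    annotation: str              # e.g. 'tail fiber', 'tail spike protein', 'hypothetical protein'
    phyre2_confidence: float     # best Phyre2 confidence (%) against a lyase/hydrolase domain
    other_tool_hit: bool         # SWISS-MODEL, HMMER or BlastP also recognise the domain
    homology_span: int           # residues covered by the enzymatic-domain homology
    beta_helix_predicted: bool   # Phyre2 predicts the typical beta-helical fold

ALLOWED_ANNOTATIONS = ("tail fiber", "tail spike", "hypothetical protein")

def classify(c: RBPCandidate) -> str:
    """Apply the five screening criteria described in the Materials and Methods."""
    # Criteria 1-2: minimal length and an RBP-compatible annotation.
    if c.length <= 200 or not any(a in c.annotation.lower() for a in ALLOWED_ANNOTATIONS):
        return "rejected"
    # Criterion 3: lyase/hydrolase homology with >=40% Phyre2 confidence, or the domain
    # is also recognised by SWISS-MODEL, HMMER or BlastP.
    enzymatic_support = c.phyre2_confidence >= 40.0 or c.other_tool_hit
    # Criteria 4-5: homology spanning >=100 residues plus a predicted beta-helical fold.
    if enzymatic_support and c.homology_span >= 100 and c.beta_helix_predicted:
        return "depolymerase"
    # Partial support (label 'b' in the tables): the enzymatic homology spans only
    # 50-100 residues and no beta-helical structure is predicted.
    if enzymatic_support and 50 <= c.homology_span < 100 and not c.beta_helix_predicted:
        return "putative depolymerase (lower probability)"
    return "rejected"

if __name__ == "__main__":
    example = RBPCandidate("hypothetical_RBP_1", 880, "tail spike protein", 97.0, True, 320, True)
    print(example.name, "->", classify(example))
```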
All selected Klebsiella phages were then grouped, based on gene homology and a conserved gene synteny, into KP32viruses, KP34viruses and KP36viruses, and into groups containing only Klebsiella-specific phages similar to phage JD001 (belonging to Jedunavirus), similar to phage Menlow (belonging to Ackermannviridae), or similar to phage K64-1 (belonging to Alcyoneusvirus). Within each group, further subdivisions were proposed for the purpose of this study, based on the organization of the RBP gene cluster (number of RBPs, length of the different genes, presence of anchor or branching domains). When there was one RBP, a domain in the N-terminus of an RBP was annotated as 'anchor' when there was at least an identity of 39% (BLASTP) over at least 166 residues starting from the N-terminus of the corresponding protein among phages belonging to the same group. These parameters were set empirically based on the shortest identity region found among all RBPs, specifically in the first RBP of phage IL33, belonging to KP32viruses group B (166 amino acids), and the percentage identity of the first RBP of phage Kp1. When more than one RBP was present, the anchor domain was annotated in the RBP in which a T4gp10-like domain was also detected. In the other RBP(s) the N-terminal conserved sequence was called a 'conserved peptide', which was also generally shorter than the anchor domains. To define consensus sequences of the anchor domains and conserved peptides, multiple sequence or pairwise alignments were used, since these structures are highly conserved among phages from the same group. To identify domains involved in the branching of RBPs, the sequences were analyzed by HHPred protein structure prediction 5 (Zimmermann et al., 2018) in search of domains homologous to T4gp10 domains 2 and 3 as experimentally confirmed attachment sites (Prokhorov et al., 2017). WebLogos of the anchor domains and conserved peptides were created with the online tool 6 (Crooks et al., 2004). RESULTS Taxonomically closely related phages are characterized by a synteny of conserved structural genes interrupted by divergent RBP genes, which are subject to intensive horizontal transfer. We therefore inspected the region of structural genes across different Klebsiella phages within specific phage genera to identify potential RBPs based on a broken synteny. Subsequently, we analyzed the presence of putative enzymatic domains within the identified RBPs. Based on homology, protein size and structure, we looked for conserved domains (anchor domain, T4gp10-like domain) that may explain the RBP architecture of the particular phage. To further refine this architecture, we analyzed the sequences for the presence of conserved peptides that may mediate attachment to putative T4gp10-like domains. We integrated all these data to model the RBP apparatus of an extensive and diverse set of Klebsiella phages with (predicted) depolymerase activity. KP32viruses KP32viruses belong to the Podoviridae and have tail fibers attached to a short, non-contractile tail. A similar synteny of highly conserved structural genes is observed across twenty-one KP32viruses (Supplementary Table S1A). Yet, one or two non-conserved genes of different lengths interrupt this synteny after the gene encoding the internal virion protein D. They were identified as putative RBPs and in a few cases also experimentally verified (Hsieh et al., 2017;Majkowska-Skrobek et al., 2018;Solovieva et al., 2018) (Table 1). We found four different RBP organizations (groups A, B, C, and D; Figure 2). The N-termini of the first RBPs are shared with high sequence identity (46-72%) across all KP32viruses. Specifically, residues 1-154 of the first RBPs are highly similar to the N-terminal domain of the phage T7 tail fiber (pfam03906). In group A of KP32viruses, this conserved N-terminal domain (Supplementary Figure S1A) also contains a region that is similar to a fragment of a T4gp10 branching domain, offering a potential attachment point for a secondary tail fiber. The other domain(s) of these 744-903 aa long first RBPs do not share identity with the corresponding domain of the group A model phage KP32. All central domains are predicted to possess enzymatic activity (hydrolase, lyase) but with different specificity. In addition, they all are predicted to possess a characteristic β-helical structure (Supplementary Table S2). In phage KP32, there is an additional C-terminal domain with predicted chaperone activity, which is absent in all other RBPs of the group A KP32viruses. The second RBP that interrupts the gene synteny in group A KP32viruses was recently demonstrated to have depolymerase activity against capsular serotype K21, whereas the first RBP has
Based on homology, protein size and structure, we looked for conserved domains (anchor domain, T4gp10-like domain) that may explain the RBP architecture of the particular phage. To further refine this architecture, we analyzed the sequence for the presence of conserved peptides that may mediate attachment to putative T4gp10-like domains. We integrated all these data to model the RBP apparatus of an extensive and diverse set of Klebsiella phages with (predicted) depolymerase activity. KP32viruses KP32viruses belong to Podoviridae and have tail fibers attached to a short, non-contractile tail. A similar synteny of highly conserved structural genes is observed across twenty-one KP32viruses (Supplementary Table S1A). Yet, one or two nonconserved genes of different lengths interrupt this synteny after the gene encoding the internal virion protein D. They were identified as putative RBPs and in a few cases also experimentally verified (Hsieh et al., 2017;Majkowska-Skrobek et al., 2018;Solovieva et al., 2018) (Table 1). We found four different RBP organizations (groups A, B, C, and D; Figure 2). The N-termini of the first RBPs are shared with high sequence identity (46-72%) across all KP32viruses. Specifically, residues 1-154 of the first RBPs are highly similar to the N-terminal domain of the phage T7 tail fiber (pfam03906). In group A of KP32viruses, this conserved N-terminal domain (Supplementary Figure S1A) also contains a region that is similar to a fragment of a T4gp10 branching domain, offering a potential attachment point for a secondary tail fiber. The other domain(s) of these 744-903 aa long first RBPs do not share identity with the corresponding domain of the group A model phage KP32. All central domains are predicted to possess enzymatic activity (hydrolase, lyase) but with different specificity. In addition, they all are predicted to possess a characteristic β-helical structure (Supplementary Table S2). In phage KP32, there is an additional C-terminal domain with predicted chaperone activity, which is absent in all other RBPs of the group A KP32viruses. The second RBP that interrupts the gene synteny in group A KP32viruses is recently demonstrated to have depolymerase activity against capsular serotype K21, whereas the first RBP has 6 weblogo.berkeley.edu depolymerase activity against capsular serotype K3 (Majkowska-Skrobek et al., 2018). These specificities correspond to the host spectrum of phage KP32. Other phages of group A KP32viruses also possess this second putative RBP. The second RBP has no conserved N-terminal anchor domain but has a peptide sequence that is conserved across group A KP32viruses with a consensus sequence over the first 29 amino acids (Supplementary Figure S1B). Similarly to the phage G7C RBP system this conserved peptide may be responsible for attachment to the T4gp10-like domain present in the first RBP. Also for the second RBPs, there is a high diversity in the central sequence with a few exceptions. E.g., in phage K5 and KP32, a highly similar sequence is observed, which hints that the second RBP of phage K5 also targets capsular serotype K21. No chaperone is predicted in any second RBP. Integrating these elements, we model the structural organization of group A KP32viruses as depicted in Figure 2B with a conserved anchor-branched attachment mode but with swapped enzymatic domains for specific capsule/host recognition. Group B KP32viruses (Table 1) have a simpler RBP organization with a single anchor-based RBP. 
Six out of seven analyzed phages have an RBP with a putative enzymatic domain, while the seventh phage (IME321) apparently lacks enzymatic activity and might rather encode a tail fiber. The N-terminal conserved anchor domain is shorter (166 amino acids) compared to the corresponding domain in group A KP32viruses (307 amino acids). The RBP also lacks a T4gp10-like domain, which is consistent with the absence of a second RBP in group B KP32viruses (Figure 2). Phage KpV767 (Table 1) represents another variant of KP32viruses (coined group C). The phage has a first anchorbased RBP, including a fragment of a T4gp10-like domain, but the second RBP is largely truncated to only 69 amino acids, including the conserved N-terminal 29 amino acids for attachment to the T4gp10-like domain (Supplementary Figure S1B). KpV767 appears to result from a retrograde evolution, having lost the potential to infect hosts belonging to two different serotypes. Finally, phage 2044-307w (group D) is as an opposite example of truncation. The first RBP lacks an enzymatic or receptor binding domain but contains an N-terminal anchor including a fragment of a T4gp10-like domain, while the second tail fiber is a full-featured RBP that contains a conserved N-terminal peptide and a depolymerase domain (Supplementary Figure S1B). KP34viruses Seventeen phages from the genus of KP34viruses were analyzed ( Table 2). Potential proteins involved in host cell recognition could be clearly identified as two genes interrupting the synteny of highly conserved structural genes and genes required for phage particle maturation. Interestingly, both genes are not clustered as in KP32viruses, but are separated by five to eight intervening genes encoding DNA maturases, hypothetical proteins and endolysins, depending on the specific phage. Three different groups (A, B, and C) can be categorized based on differences in length of both genes. Ten group A phages have a short first protein of approximately 300 amino acids annotated as tail fiber. This protein does not encode a putative enzymatic Phage First RBP (protein 2, Figure 3) Second RBP (protein 10, Figure 3) Accession The different RBP systems of KP34viruses are visualized in Figure 3. a RBP for which the depolymerase activity has been experimentally verified (Lin et al., 2014;Solovieva et al., 2018). b RBP with a lower probability on depolymerizing activity. c RBP without enzymatic activity. d Protein is not annotated in the genome, but the nucleotide positions of the open reading frame are indicated instead. BLASTp was used as computational alignment algorithm and pairwise alignments were performed against the corresponding first RBP from phage KP34. The accession number of each RBP is given, along with its length and alignment characteristics (cover-coverage, E-value,% identity, identity range-number of identical amino acids/length) of the region over which identical amino acids are found by Blastp, starting from the N-terminus (amino acid 1 The RBP system of phages belonging to the JD001 group is visualized in Figure 4. c RBP without enzymatic activity. BLASTp was used as computational alignment algorithm and pairwise alignments were performed against the corresponding first and second RBP from phage JD001, respectively. 
In addition, the protein contains a fragment of a T4gp10-like domain located at its C-terminus (aa 186-242), which may serve as the attachment point for the second RBP. This protein is highly conserved among all phages of group A KP34viruses (at least 74% identity) (Supplementary Figure S1C). The second RBP sequence encodes a putative enzymatic domain, with most such domains forming a β-helical structure. The N-terminal heptapeptide of these proteins contains universally conserved hydrophobic residues (MALxxLV) (Supplementary Figure S1D). These observations suggest that the organization of the RBP apparatus of group A KP34viruses is similar to the system of phage 2044-307w (group D KP32viruses), albeit with a much shorter conserved peptide (Figure 3). Similar short conserved peptides (heptapeptide and undecapeptide) for interaction with the anchor protein have been observed for E. coli phages K1E and K1-5 and Salmonella phage SP6 (Leiman et al., 2007). Group B KP34viruses contain large first RBPs with a size between 530 and 948 aa. Four out of five RBPs encode an enzymatic domain in the C-terminal or central part of the protein. The corresponding gene in the fifth virus (phage KpV74) contains no predicted enzymatic domain. Group B KP34viruses also encode a second RBP with a predicted enzymatic activity and the same conserved heptapeptide motif as in the second RBP of group A KP34viruses (MALxxLV). The organization of the RBPs in group B KP34viruses is thus similar to that of group A KP32viruses. We found two incongruities in this genus, specifically viruses KP-Rio/2015 and myPSH1235. They both share the gene synteny of KP34viruses, but no RBPs were annotated in their genomes. Further genome analysis revealed two open reading frames that presumably fulfill the role of RBPs. We found that phage myPSH1235 follows the RBP organization of group A KP34viruses, while phage KP-Rio/2015 encodes a large first RBP with a predicted enzymatic activity (and a fragment of a T4gp10-like domain) and a second protein that is only 61 aa long, which likely represents a truncated, non-functional RBP. Therefore, phage KP-Rio/2015 forms a different group C with an RBP organization analogous to KP32viruses group C. RBPs From Selected Klebsiella Myoviruses Myoviruses have a contractile tail with a baseplate at the head-distal end of the tail. The tail fibers are directly connected to this baseplate. In addition, there is often a central spike (sometimes annotated as 'fiber') protruding from the baseplate. Nine Klebsiella phages analyzed in this study belong to three different myovirus groups (JD001 group, Menlow group, and K64-1 group), with the latter two groups having a potentially broad host spectrum since they encode between five and nine (Menlow group; phage RaK2) (Hsu et al., 2013;Simoliūnas et al., 2013) or even 11 different depolymerases (phage K64-1) (Pan et al., 2017) (Supplementary Tables S1C-E), necessitating elaborated structural organizations for RBP attachment.
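The conserved heptapeptide described above for the second RBP of KP34viruses (MALxxLV, with x standing for any residue) is simple enough to scan for directly. The sketch below does this with a regular expression; it is only an illustration of the motif, whereas the study derived the consensus from multiple sequence alignments and WebLogo.

```python
import re

# MALxxLV: the conserved hydrophobic N-terminal heptapeptide of the second RBP
# of KP34viruses, as described in the text ('x' stands for any residue).
KP34_HEPTAPEPTIDE = re.compile(r"^MAL..LV")

def starts_with_kp34_peptide(protein_sequence: str) -> bool:
    """True when a protein sequence begins with the MALxxLV motif."""
    return bool(KP34_HEPTAPEPTIDE.match(protein_sequence.upper()))

if __name__ == "__main__":
    # Hypothetical N-terminal sequences, for illustration only.
    for seq in ("MALQKLVTDGSNT", "MSTNALQKLVTDG"):
        print(seq[:7], starts_with_kp34_peptide(seq))
```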
We should note that the JD001, Menlow, and K64-1 phages are not taxonomic groups but were grouped in this study for their genetic similarities in the RBP genes. In addition, viruses belonging to the Menlow group have recently been reclassified from Myoviridae to Ackermannviridae (Adriaenssens et al., 2018). Ackermannviridae are characterized by a conserved genome organization and have the typical morphology of myoviruses (long contractile tail) but with a different distal end of the tail, which ends with "stars" or "prongs" that have been identified as tailspikes (Day et al., 2018). Viruses Belonging to the JD001 Group The putative RBP genes of the viruses of the JD001 group (Table 3) were identified in a region of hypothetical proteins, preceding the DNA polymerase gene. Both genes are located at separate sites with two (JD001, KpV52) or three (KpV79) intervening genes. They all encode a single putative depolymerase, annotated as gluconolactonase, putative tail fiber family protein or tail fiber protein/pectate lyase superfamily protein, respectively. This RBP with depolymerase activity is most likely attached to the anchor protein via a conserved N-terminal domain of about 70 aa, which is distinct from the conserved peptides/domains found in both KP32- and KP34viruses. The anchor protein has no T4gp10-like domain, indicating a different mechanism of interaction (Figures 4A,B). Viruses Belonging to the Menlow Group The viruses of the Menlow group encode, amid a conserved synteny of structural genes, four non-conserved putative RBPs and one conserved RBP, all with putative depolymerase activity (Figure 5A). Phages KpS110 and 0507-KN2-1 encode an additional sixth RBP with a predicted depolymerase domain (Table 4). The first two non-conserved RBPs (proteins 2 and 3 in Figure 5) have N-terminal domains of 412 and 195 aa, respectively, which are conserved among the four members of the Menlow group. The following two non-conserved RBPs have a shorter domain/peptide of 38 and 67 aa, respectively, conserved among all members of the Menlow group (Supplementary Figures S1E-H). To explore how this high number of putative RBPs might be structurally organized, we searched for homology to T4gp10-like domains 2/3 and for N-terminally conserved domains/peptides, as they suggest branching points. Two domains homologous to T4gp10 were located in the N-terminal part of RBP 2 (the RBP with the anchor domain) and RBP 3, whereas RBPs 3, 5 and 7 (Figure 5A) contain conserved peptides at their N-terminus. A fifth RBP (protein 8) is present and highly identical in all members of the Menlow group, while a sixth RBP with putative depolymerase activity is only present in phage KpS110 and phage 0507-KN2-1. Integrating the presence/absence of these structural elements (Figure 5B), a possible model implies that the first RBP (protein 2, Figure 5A) is directly attached to the tail via a conserved N-terminal anchor and that its T4gp10-like domain probably provides an attachment site for at least two RBPs (3 and 5 or 7 or 8, Figure 5B). Subsequently, the second RBP (protein 3) provides attachment sites via its T4gp10-like domains for two more RBPs (proteins 5 or 7 or 8). Together they constitute a unit of branched tail fibers. The highly conserved fifth RBP (protein 8) may be the central tail fiber that protrudes from below the plane of the baseplate (Nobrega et al., 2018). More structural and genetic studies will be needed for an improved understanding of the elaborated RBP system in viruses from the Menlow group.
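The branching logic proposed for the Menlow group, in which an anchored RBP offers attachment sites through its T4gp10-like domain and peptide-bearing RBPs (some of which branch further) hang off those sites, can be sketched as a small assembly routine. The code below is only a schematic of that reasoning; the two-sites-per-T4gp10-domain assumption and the example protein names follow the model discussed above, not a structural determination.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RBP:
    name: str
    has_anchor: bool = False             # N-terminal virion-anchoring domain
    has_t4gp10_like: bool = False        # branching domain offering attachment sites
    has_conserved_peptide: bool = False  # short N-terminal peptide binding a T4gp10-like domain
    attached: List["RBP"] = field(default_factory=list)

def build_branched_model(rbps: List[RBP], sites_per_t4gp10: int = 2) -> List[RBP]:
    """Toy reconstruction of an anchor-(multi)branched RBP architecture: anchored
    RBPs form the roots; peptide-bearing RBPs are attached to free T4gp10-like
    sites in order. This is a schematic of the model proposed in the text, not a
    structural prediction."""
    roots = [r for r in rbps if r.has_anchor]
    pendants = [r for r in rbps if not r.has_anchor and r.has_conserved_peptide]
    hubs = [r for r in roots if r.has_t4gp10_like]          # proteins with free sites
    free_sites = {r.name: sites_per_t4gp10 for r in hubs}
    while pendants and hubs:
        hub, rbp = hubs[0], pendants.pop(0)
        hub.attached.append(rbp)
        free_sites[hub.name] -= 1
        if free_sites[hub.name] == 0:
            hubs.pop(0)
        if rbp.has_t4gp10_like:                              # a newly attached RBP can branch further
            hubs.append(rbp)
            free_sites[rbp.name] = sites_per_t4gp10
    return roots

if __name__ == "__main__":
    # Hypothetical Menlow-like complement: one anchored hub, one secondary hub,
    # three peptide-bearing depolymerases.
    model = build_branched_model([
        RBP("RBP2", has_anchor=True, has_t4gp10_like=True),
        RBP("RBP3", has_conserved_peptide=True, has_t4gp10_like=True),
        RBP("RBP5", has_conserved_peptide=True),
        RBP("RBP7", has_conserved_peptide=True),
        RBP("RBP8", has_conserved_peptide=True),
    ])

    def show(r: RBP, indent: int = 0) -> None:
        print(" " * indent + r.name)
        for child in r.attached:
            show(child, indent + 2)

    for root in model:
        show(root)
```

With these inputs the routine attaches two RBPs to the anchored protein and two more to the secondary hub, which mirrors the branched unit described for the Menlow group; the central tail fiber role suggested for protein 8 is not modeled here.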
Viruses Belonging to the K64-1 Group Klebsiella phages belonging to K64-1 group ( K61-1 and RaK2; Supplementary Table S1E) have likely evolved the most elaborate RBP apparatus ( Table 5). K64-1 encodes 11 proteins recognized as putative depolymerases, while in the genome of RaK2 10 putative depolymerases are predicted. The middle and C-terminal regions of five RBPs are different between the corresponding genes of K61-1 and RaK2, reflecting the diversity of capsular serotypes that can be recognized by putative depolymerases of these two phages, whereas the middle and C-terminal parts of other RBPs show more than 75% identity between both phages, suggesting an overlap in the host spectrum (Pan et al., 2017). We found in this study that these proteins also contain a slew of structural elements found in other complex tail fiber machineries such as one N-terminal anchor domain, four short conserved peptides at the N-terminus and five T4gp10-like domains ( Supplementary Figures S1I-M), indicating that phages of the K64-1 group also re-use standardized units to build up a highly complex RBP apparatus (Figure 6). KP36viruses All 12 identified Klebsiella siphoviruses belong to the KP36viruses. They are also featured by a synteny of genes encoding structural proteins such as the tail length tape-measure protein, minor tail proteins and a putative tail assembly protein. This synteny is disrupted by one or two genes, depending on the phage. Three groups can be categorized with the majority of phages belonging to group A, while phage PKP126 (group B) and phage 1513 (group C) represent exceptions from the general structure of group A ( Figure 7A and Table 6). Members of group A KP36viruses (including the reference phage KP36) have a single predicted RBP with putative depolymerase activity. It has been demonstrated that the RBP of KP36 is enzymatically active against capsular serotype K63 (Majkowska-Skrobek et al., 2016). The modular structure of this RBP is similar to that of the RBP of group B KP32viruses, having an N-terminal anchor domain, a highly variable central domain with enzymatic activity, and a C-terminal chaperone. KP36viruses belonging to group B and group C also have an RBP with a similar N-terminal anchor domain (Supplementary Figure S1N). Phage PKP126 RBP (group B) has a predicted enzymatic activity in the central domain in contrast to the truncated RBP of phage 1513 (group C). The chaperone domain is missing in the RBP of both groups B and C. No T4gp10-like domain was found in the N-terminal region of KP36gp50 (RBP). Instead, a small domain (residues 4-63) homologous to domain B (92-155 aa) of the distal tail protein (Dit or T5pb9) of siphovirus T5 has been detected. Dit is located in the T5 tail tip at the junction between the tail tube and the ultimate conical structure and is composed of two domains. Domain A forms a hexameric structure and connects to the end of the tail tube, whereas domain B constitutes the attachment site for three L-shaped tail fibers (Flayhan et al., 2014). These L-shaped tail fibers initially bind reversibly to polymannose containing O-antigens (Heller and Braun, 1982). Remarkably, domain A of T5pb9 has not been found in KP36gp50 but is instead present in the KP36 minor tail protein (residues 22-77 corresponding to amino acids 27-85 of T5pb9) that is encoded four genes upstream of KP36gp50. 
This horizontal transfer event indicates that in KP36viruses the conserved minor tail protein only comprises domain A, which is located at the junction of the tail tube and the conical tip of the tail. Domain A of the minor tail protein is proposed to interact with the RBP via its N-terminal domain B. This RBP may thus represent the side tail fibers, similar to the L-shaped tail fibers in phage T5. In other words, the distal tail protein has been split into two separate elements in KP36viruses. Phages PKP126 and 1513 (groups B and C, respectively) have an additional RBP with putative depolymerase activity. Its exact role is difficult to predict, and typical elements hinting at a specific structural organization, such as a conserved peptide or anchor domain, are missing.

[Displaced legend text for Figure 5: "... summarized in Table 4. (A) The modular composition of the RBP genes is shown relative to the broken gene synteny of Menlow. Annotations are given according to GenBank or according to their modeled function as annotated in this study (between brackets): (1) putative tail protein; (2) tail spike protein (anchor with depolymerase); (3) tail spike protein (depolymerase with conserved peptide); (4) hypothetical protein (not present in all Menlow group phages); (5) tail spike protein (depolymerase with conserved peptide); (6) hypothetical protein (not present in all Menlow group phages); (7) tail spike protein (depolymerase with conserved peptide); (8) hypothetical protein (depolymerase); (9) neck protein. (B) Schematic model of the RBP system in Menlow with an anchor-multibranched attachment mode."]

We hypothesize that those enzymes are not incorporated in the phage particle, but rather are produced as soluble proteins. Upon cell lysis the neighboring cells are sensitized for infection through enzymatic removal of the capsule by the soluble, diffusible depolymerase. This mechanism would be especially beneficial for phages lacking depolymerase activity in their first RBP (e.g., group C phage 1513). An additional preceding RBP (Figure 7A; protein 2) is highly conserved across all analyzed KP36viruses, except in phage 1513. The role of this RBP is unclear. One possibility is that it is a second side RBP as observed in some T5viruses (DT57C and DT571) (Golomidova et al., 2016; Nobrega et al., 2018). An alternative possibility is that this protein represents the central tail fiber. Given its ambiguous role and location, this RBP was not included in the model depicted in Figure 7B.

DISCUSSION
In this work we have performed an extensive in silico analysis of the RBPs of Klebsiella phage genomes, with a focus on RBPs with depolymerase activity. The tripartite relationship between depolymerase specificity, capsular serotype and phage host spectrum has now been extensively demonstrated for Klebsiella phages (Hsu et al., 2013; Lin et al., 2014; Majkowska-Skrobek et al., 2016; Hsieh et al., 2017; Pan et al., 2017; Solovieva et al., 2018). Podovirus KP32 possesses two experimentally confirmed depolymerases, which are enzymatically active against capsular serotypes K3 and K21, respectively. Correspondingly, all strains infected by phage KP32 have either a K3 or K21 serotype (Majkowska-Skrobek et al., 2018). Podovirus KpV71 infects strains with serotype K1, which perfectly matches the specificity of its experimentally verified depolymerase. However, podovirus KpV74, which also has a single RBP, infects strains with serotypes K2 and K13. Based on the structural knowledge of RBPs of mainly E.
coli phages such as T7, K1F, K1-5, G7C and T5, we have identified structurally conserved building blocks to model the RBP apparatus of Klebsiella phages. The modularity of RBPs, in combination with intensive horizontal transfer of genes or gene domains (Casjens and Molineux, 2012), allows for a maximum re-use of conserved, evolutionarily optimized elements.

[Displaced footnote text for Table 4: Second RBP with conserved peptide (protein 3, Figure 5). The RBP system of phages belonging to the Menlow group is visualized in Figure 5. a RBP for which the depolymerase activity has been experimentally verified (Hsu et al., 2013). b RBP with a lower probability of depolymerizing activity. c RBP without enzymatic activity. BLASTp was used as the computational alignment algorithm and pairwise alignments were performed against the respective RBP from phage Menlow. The accession number of each RBP is given, along with its length and alignment characteristics (cover = coverage, E-value, % identity, identity range = number of identical amino acids/length) of the region over which identical amino acids are found by BLASTp, starting from the N-terminus (amino acid 1).]

Simultaneously, the possibility to rapidly shift the host spectrum based on an exchange of the depolymerase domain is retained. Indeed, specific RBP domains, sometimes paired with their cognate chaperone, are present in each RBP system. This is well illustrated by the high similarity of the experimentally verified depolymerase domains of KP36gp50 and KP34gp57. Both proteins target capsular serotype K63, but have either an N-terminal anchor or a conserved peptide, respectively. The high adaptability of Klebsiella phage RBPs is essential since K. pneumoniae is characterized by a high capsular diversity.

[Displaced footnote and legend text for Table 5 and Figure 6: The gene synteny of phages belonging to the K64-1 group is visualized in Figure 6. a RBP for which the depolymerase activity has been experimentally verified (Pan et al., 2017). b RBP without enzymatic activity. BLASTp was used as the computational alignment algorithm and pairwise alignments were performed against the respective RBP from phage K64-1. The accession number of each RBP is given, along with its length and alignment characteristics (cover = coverage, E-value, % identity, identity range = number of identical amino acids/length) of the region over which identical amino acids are found by BLASTp, starting from the N-terminus (amino acid 1). Table 5: The modular composition of RBP genes is shown relative to the broken gene synteny. Annotations are given according to GenBank or according to their modeled function in this study (between brackets): (1) putative tail fiber protein (depolymerase); (2) tail spike protein (depolymerase with conserved peptide); (3) tail spike protein (depolymerase); (4) putative tail fiber protein (depolymerase with conserved peptide); (5) putative tail fiber protein (anchor with depolymerase); (6) putative tail fiber protein (depolymerase); (7) putative structural protein (depolymerase); (8) putative tail fiber protein (depolymerase with conserved peptide); (9) putative tail fiber protein (depolymerase with conserved peptide); (10) putative structural protein (depolymerase); (11) putative tail fiber protein (depolymerase).]

Consequently, Klebsiella phages often have a very narrow spectrum limited to strains from one or two capsular serotypes. Colonization of new niches occupied by K. pneumoniae isolates with a different capsular serotype thus necessitates a flexible system for rapid adaptation.
In addition, the same flexibility is needed to respond to phenotypic serotype switches of K. pneumoniae strains (Pan et al., 2015; Wyres et al., 2015). In this study, we propose that RBPs of Klebsiella phages are organized according to several distinct systems (Figure 8). The simplest mechanism is similar to the anchor-based system described for phages T7 and K1F.

[Displaced footnote text for Table 6: The RBP system of KP36viruses is visualized in Figure 7. a RBP for which the depolymerase activity has been experimentally verified (Majkowska-Skrobek et al., 2016). b RBP with a lower probability of depolymerizing activity. c RBP without enzymatic activity. BLASTp was used as the computational alignment algorithm and pairwise alignments were performed against the respective RBP from phage KP36. The accession number of each RBP is given, along with its length and alignment characteristics (cover = coverage, E-value, % identity, identity range = number of identical amino acids/length) of the region over which identical amino acids are found by BLASTp, starting from the N-terminus (amino acid 1).]

FIGURE 7 | Receptor binding protein systems of the KP36viruses. Phages and their RBPs that are proposed to follow this system, including their grouping into groups A, B and C, are summarized in Table 6. (A) The modular composition of the RBP genes is shown relative to the broken gene synteny of the reference phage KP36. Annotations are given according to GenBank or according to their modeled function in this study (between brackets): (1) minor tail protein; (2) tail fiber protein; (3) putative tail fiber protein (3A,B: anchor with depolymerase; 3C: anchor); (4) hypothetical protein (4B,C: depolymerase); (5) putative single-stranded DNA binding protein. Proteins 1, 2, and 5 are present in all KP36viruses. (B) Schematic model of the RBP system in phage particles of KP36viruses group A with a split T5 distal tail protein. Domain A is encoded by the short minor tail protein forming a ring at the end of the phage tail tube and offers an attachment point for domain B, which is incorporated in the anchoring part of the RBP that represents the putative side tail fiber.

In phages from KP32viruses group B and KP36viruses, the single RBP is directly connected with the phage particle via its conserved N-terminal anchor domain. Other phages (KP32viruses group A; KP34viruses group B) that produce two RBPs encode the structural elements for an anchor-branched mechanism as reported for phage G7C. Here, the first RBP contains a conserved N-terminal anchor serving for attachment to the virion, followed by a specific fragment of a T4gp10-like domain providing the docking site for a second RBP. Notably, the fragment encoding the T4gp10-like docking site in those Klebsiella phages is shorter compared to the corresponding domain in T4 and may therefore correspond to a single attachment site. The second RBP is presumably attached via a conserved peptide (KP32viruses, KP34viruses, JD001 group, Menlow group, K64-1 group). This conserved peptide is different for each group of phages, varies in length and can be as short as seven amino acids. Such attachment via a short peptide is in line with the RBP complex of K1E/K1-5/SP6-like phages, where both RBPs carry either a 7- or 11-residue conserved peptide at their respective N-terminus. In the case of E.
coli phage G7C, the shorter G7Cgp63.1 RBP carries a positively charged surface that binds to the T4gp10-like domain of G7Cgp66; yet the conserved peptides in Klebsiella phages lack this positive charge, suggesting that different interacting forces take place between the first and second RBP. Similar to the RBPs of phage K1-5, the two experimentally verified depolymerases of phage KP32 target two different capsular serotypes. In both cases the double RBP system thus expands the host spectrum. The presence of either an anchor- or an anchor-branched system is not directly linked to the taxonomic organization. In addition, there is no sequence homology between functionally similar structural building blocks across those phage groups.

[Displaced figure legend fragment: "... Table 1), KP34viruses groups A, B, C (Figure 3 and Table 2), phages belonging to the Menlow group (Figure 5 and Table 4) and phages belonging to the K64-1 group (Figure 6 and Table 5) have been studied in this work."]

E.g., the first 140 amino acids of the N-terminal anchor domains of RBPs encoded by Klebsiella phages belonging to Podoviridae show similarity with the well-characterized N-terminal domain of the T7 tail fiber, and their first 300 amino acids are conserved across the different podoviruses analyzed in this study. The T7 tail fiber attaches with its N-terminal anchor domain to the region where the adaptor (gp11) interacts with the nozzle (gp12) of the short tail complex (Cuervo et al., 2013). The corresponding proteins of phage KP32 share 62 and 61% sequence identity with the adaptor and nozzle protein of phage T7, respectively, while in the case of KP34 the identity is lower (29% identity with a coverage of 67% for the adaptor protein and 23% identity with a coverage of 98% for the nozzle protein). There is no amino acid similarity between the conserved N-terminal anchor domains in RBPs from different taxonomic groups of Klebsiella phages, indicating that the interacting partner in the tail structure has also evolved accordingly. In the case of KP36viruses (Siphoviridae), a remarkable horizontal transfer event has taken place between the distal tail protein and the tail fiber of KP36viruses when compared to the siphovirus T5 model. Domain B of the distal tail protein has been transferred to the N-terminus of the tail fiber protein in KP36viruses. Whereas in phage T5 protein-protein interaction occurs between the N-terminus of the RBP and domain B of the distal tail protein, novel interactions between domain A of the minor tail protein and domain B embedded in the tail fiber must have evolved to compensate for the loss of interaction by a direct covalent bond as in phage T5. Interestingly, several phages with a single enzymatic RBP do not follow the anchor system as described for phage T7, but use the anchor-branched system of G7C with either the first (KP32viruses group D; KP34viruses group A; JD001 group) or second RBP (KP32viruses group C; KP34viruses group C) being truncated. In the case of KP34viruses it is even the predominant RBP system. The occurrence of these intermediate RBP systems suggests evolutionary linkages between the different RBP architectures. Starting from the simplest organization with a single RBP (T7, K1F, and KP32viruses group B), the acquisition of a fragment of a T4gp10-like domain allowed for the attachment of a second RBP (KP32viruses group A and KP34viruses group B). The first RBP from E.
coli phage G7C has acquired a full T4gp10-like domain (similarity to both subdomain D2 and D3 of T4gp10), offering a potential second attachment site for a different RBP. This second site is not occupied in phage G7C, whereas the E. coli model phages K1-5, SP6, and K1E have effectively two RBPs attached to the same intermediate protein that also comprises both subdomain D2 and D3. In K1-5, SP6 and K1E, this intermediate protein with the full T4gp10-like domain has lost its C-terminal receptor-binding domain, resulting in an 'adapter' system -a short protein with two sites for binding two different RBPs and no domain beyond these two domains (Figure 8). It should be noted that a simple adapter protein that provides attachment sites for two RBPs as described for E. coli phage K1-5 and Salmonella phage SP6 is not observed in the case of Klebsiella podoviruses. Obviously, an opposite evolutionary trajectory of RBP systems (from adapter to anchor) cannot be excluded as well. The success of the modular build-up of the RBP apparatus and the extensive number of horizontal transfer events have obscured possible insight in the direction of this evolution. The assumption that evolution generally takes place from simple to more complex systems, hints at the first direction (from anchor to adapter). KP32viruses group C and D, and KP34viruses groups A and C may have lost a second intact RBP by retrograde evolution when thriving in a new environment that is dominated by a single serotype Klebsiella strain. Having a truncated second RBP may provide a fitness advantage in such a situation. The truncated RBP may remain as a temporal docking site to acquire a new RBP for host range expansion by horizontal transfer when moving to a niche with different Klebsiella serotypes. Phages belonging to the Menlow group and K64-1 group carry multiple RBPs that obviously recycle established structural elements such as a conserved N-terminal domain, short conserved peptides and a T4gp10like domain (or fragments thereof). Yet, more experimental (structural, genetic, biochemical) studies are required to make a plausible prediction on the structural organization of these elaborated RBP systems. In summary, we have modeled the organization of diverse RBP systems in Klebsiella phages. The modular composition and re-use of established structural domains for anchoring and branching provide the phages the full potential to rapidly shift capsular serotype specificity or to expand the spectrum. We expect that the increasing amount of (meta)genome sequencing data will reveal further evolutionary relationships between some of the groups we describe in this analysis, but the main groups will remain in place. The data available to us today clearly show that the architecture of RBP systems is dominated by horizontal transfer events of modules that can be as small as short peptides to as large as multiple domains. Although our analysis was based on experimentally confirmed interactions of E. coli phage RBPs (Leiman et al., 2007;Prokhorov et al., 2017), further experimental validation of the presented models is needed and has already been initiated by our team. To analyze the interactions between the T4gp10-like domains and conserved peptides, protein-protein interactions can be studied by various techniques such as isothermal calorimetry (ITC), twohybrid systems, surface plasmon resonance (SPR), cryoEM, or an enzyme-linked immunosorbent assay (ELISA). 
This work also adds improved functional annotations to genes to which no specific function had previously been assigned, but which encode putative tail fibers/spikes with depolymerizing activity. The high number of newly predicted depolymerases in this study can be verified by their recombinant production followed by activity tests against strains with particular capsular serotypes.

DATA AVAILABILITY STATEMENT
The datasets analyzed for this study can be found in GenBank. All genome and protein accession numbers are listed in the tables and Supplementary Material.
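The per-RBP alignment characteristics reported in Tables 3-6 (coverage, E-value, % identity and identity range) are the kind of values that can be recomputed from tabular BLASTp output. The sketch below assumes BLAST+ was run with a custom tabular format that includes the query length (for example `-outfmt "6 qseqid sseqid pident length qlen qstart qend evalue"`); the file name is a placeholder, and this is an illustration rather than the exact workflow used in the study.

```python
import csv

FIELDS = ["qseqid", "sseqid", "pident", "length", "qlen", "qstart", "qend", "evalue"]

def parse_blastp_table(path: str):
    """Yield coverage, % identity and identity range for each BLASTp hit."""
    with open(path) as handle:
        for row in csv.DictReader(handle, fieldnames=FIELDS, delimiter="\t"):
            qstart, qend, qlen = int(row["qstart"]), int(row["qend"]), int(row["qlen"])
            yield {
                "query": row["qseqid"],
                "subject": row["sseqid"],
                "percent_identity": float(row["pident"]),
                "coverage_percent": round(100.0 * (qend - qstart + 1) / qlen, 1),
                # region of the query covered by the alignment, counted from
                # the N-terminus (amino acid 1)
                "identity_range": (qstart, qend),
                "evalue": float(row["evalue"]),
            }

# Example usage with a placeholder file name:
# for hit in parse_blastp_table("rbps_vs_reference_phage.tsv"):
#     print(hit)
```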
2019-11-15T14:10:31.501Z
2019-11-15T00:00:00.000
{ "year": 2019, "sha1": "7b65a238bd404f1ff0c2b34c4cc281e59d301443", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.02649/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eafb3eff1a77c474e4ab15e878434cd781b57072", "s2fieldsofstudy": [ "Biology", "Materials Science" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
236662286
pes2o/s2orc
v3-fos-license
Mathematical Model of VOCs Emission in Three-layer Building Materials A simple mathematical model is proposed to account for emissions of Volatile Organic Compounds (VOCs) from three-layer building materials. The model considers both the diffusion within three layer building materials and the mass transfer resistance through the air boundary layer. A general solution method based on Laplace transform is presented. Compared to other models capable of accounting for emissions of VOCs from multi layer building materials, the present model is fully analytical instead of being numerical. The present model was validated by the experimental data from the specially designed test. The results indicated that there was a good agreement between the model predictions and the experimental data. It can also be seen from calculation that model ignoring the boundary layer resistance cannot fully reflect the real situation. Introduction Volatile Organic Compounds (VOCs) emitted by building materials such as floor coverings and wood products are considered to be one of the main threats to human health. Therefore, it is necessary to have a deep understanding of VOCs emission characteristics and their propagation mechanism in building materials and indoor. In order to accurately describe the emission characteristics of VOCs, several mathematical models have been proposed. Little et al. (1994) pioneered the emission model of VOCs in single-layer dry materials. However, since this model ignores mass transfer resistance in the air, the concentration of VOCs in the air at the initial stage of emission is overestimated [1]. Shin et al. (2003) used Little's model to simulate the divergence process in the carpet and obtained the model parameters through corresponding experimental data [2]. Yang et al. (2001) developed a numerical model to simulate the divergence process of VOCs from a single layer of dry material, and conducted an experimental study [3]. Zhao et al. (2002) developed an analytical model that can study instantaneous pollution sources based on the ideas of Little et al. (1994) [4]. Huang and Haghighat (2002) developed a numerical model that introduced a gas-phase mass transfer coefficient to describe the mass transfer resistance [5], which can be solved by the finite difference technique. Xu and Zhang (2003) proposed an improved mass transfer model, including the analytical solution of one-dimensional diffusion equation in the model proposed by Huang and Hagighat (2002) [6]. However, the model proposed by Xu and Zhang (2003) is not a complete analytical solution. Because it is related to the concentration of VOCs in the air, which is an unknown function of time. Therefore, the concentration in the material and the mass balance equation in the air must be solved simultaneously by finite difference technology. That is to say, this model still belongs to the category of numerical solution, and it is not convenient to compare this model with Little's fully analytical solution model. Deng and Kim (2004) obtained the analytical solution of Huang and Haghighat (2002) model through Laplace transform [7]. The above models can only be used to predict the divergence process in a single-layer building material with uniform physical properties. However, many composite building materials in reality are multi-layer materials composed of several different substances. Each layer may have a different VOCs diffusion coefficient, and their material-air distribution coefficient may also be different. 
The inner layer may be the "source" or "sink" of VOCs for the outer layer, which may affect the concentration of VOCs in the air. Therefore, it is necessary to develop a model that can describe the emission of VOCs from multi-layer materials. Kumar and Little (2003) proposed a VOCs emission model for two-layer materials [8]. The concentration distribution of VOCs in the double-layer material and the concentration of VOCs in the bulk air are obtained, but the mass transfer resistance in the air boundary layer is ignored. Zhang and Niu (2004) studied rooms using multi-layer materials. Their model takes into account the mass transfer resistance in the air boundary layer, and they use the finite difference method to solve the governing equation [9]. However, the finite difference solution is time-consuming, which is inconvenient in applications. This paper presents a simple mathematical model which can easily predict the emission of VOCs from three-layer building materials. The model considers the mass transfer resistance inside the material and in the air boundary layer. The general solution for the VOCs concentration in air is obtained by Laplace transform.

Model derivation
Consider an interior building material consisting of three layers. To simplify the problem, the following assumptions are made:
• Each layer of building material has uniform physical properties and the same initial VOCs concentration.
• During the emission process, the vapor pressure of VOCs maintains thermodynamic equilibrium at the interface between the material and the air.
• Although mass transfer may be caused by the coupled effects of temperature gradient, pressure gradient, external force and concentration gradient, only the concentration gradient of VOCs is used as the driving force for mass transfer in this analysis.
• No chemical reaction generates or consumes VOCs inside the building materials.
• VOCs diffusion in each layer is a one-dimensional problem.

Based on the above assumptions, the governing equations of mass transfer and equilibrium of VOCs in the three-layer material are given. The transient governing equation of VOCs diffusion in each layer of material is

$$\frac{\partial C_m}{\partial \tau} = D_m \frac{\partial^2 C_m}{\partial x^2}, \quad m = 1, 2, 3 \qquad (1)$$

where C_m is the concentration of VOCs in the m-th layer of material, D_m is the diffusion coefficient in the m-th layer of material, τ is time, and x is the diffusion direction of VOCs in the material; m = 1 indicates the bottom layer of the material and m = 3 represents the top layer of material adjacent to the air. The initial condition for each layer is

$$C_m(x, 0) = C_{m,0} \qquad (2)$$

where C_{m,0} is the initial concentration of VOCs in the m-th layer of material. Since the diffusion between adjacent layers of materials is constrained by the conservation of mass, the mass diffusion flux at the interface of two adjacent layers is continuous:

$$q_m\big|_{x=x_{m+1}} = q_{m+1}\big|_{x=x_{m+1}}, \quad q_m = -D_m \frac{\partial C_m}{\partial x} \qquad (3)$$

where q is the mass diffusion flux between the two adjacent layers. Because the material-air partition coefficient K_{m,a} of each layer is different, the concentration of VOCs is discontinuous at the interface; according to Henry's law,

$$\frac{C_m}{K_{m,a}}\bigg|_{x=x_{m+1}} = \frac{C_{m+1}}{K_{m+1,a}}\bigg|_{x=x_{m+1}} \qquad (4)$$

Equation (4) shows that the VOCs concentration on the two sides of an interface between adjacent layers is discontinuous. Introducing the gas-phase equilibrium concentration C_e = C_m/K_{m,a} keeps the concentration of VOCs continuous at the interfaces between different layers.
Equations (1)-(4) can be rewritten in terms of the gas-phase equilibrium concentration C_e; together with the boundary conditions below, these form equations (5)-(12). Assuming that the three-layer building material is placed on the stainless steel floor of the room, there is no mass flux at its lower boundary. The material-air interface is a boundary condition of the third kind for mass transfer, constrained by the following relation:

$$q\big|_{x=x_4} = h\left(C_e\big|_{x=x_4} - C_a\right) \qquad (11)$$

where h is the gas-phase mass transfer coefficient and C_a is the concentration of VOCs in the air. An equation for C_a can be obtained from the conservation of mass of VOCs in the room. Since the inlet VOCs concentration is zero, the mass conservation equation can be written as

$$\frac{dC_a}{d\tau} = L\,h\left(C_e\big|_{x=x_4} - C_a\right) - N C_a \qquad (12)$$

where L is the load ratio (exposed material area per room volume) and N is the room air exchange rate. Generally speaking, the finite difference method can be used to solve equations (5)-(12), but the application is inconvenient because the solution process is time-consuming. In this study, the Laplace transform is used to solve the problem. Following the method of Carslaw and Jaeger (1986) [10] and accounting for the initial concentration field in the material, the Laplace transform of the m-th layer yields a transfer-matrix relation (equation (13)) linking the transformed VOCs concentration and diffusion flux at the two faces of each layer, with the transformed concentration and flux entering the (m+1)-th layer on the right-hand side; δ_m = x_{m+1} - x_m is the thickness of the m-th layer of material. Applying equation (13) to the third layer and performing the matrix operations, the matrix-form relations for the third layer of material adjacent to the air can be obtained. The following equations can then be obtained by combining these relations with equation (10). The solution of equations (28) and (29) can be obtained using the inversion theorem, where r_k is the k-th positive root of the associated characteristic equation. Through equations (30)-(34), the VOCs concentration in the air and the VOCs emission at the material-air interface can be easily obtained by computer programming. Because these two series converge very quickly, the first 200 terms are usually sufficient for accurate results. The model involves four key parameters: the gas-phase mass transfer coefficient h, the material-air partition coefficient K_{m,a}, the mass diffusion coefficient in the material D_m, and the initial VOCs concentration in the material C_{m,0}. The last three can be determined through experiments. The gas-phase mass transfer coefficient can be obtained from empirical correlations.

Model verification
Since experimental results for VOCs emission from three-layer building materials have not been found in the published literature, only experimental results for single-layer building materials can be used to verify the model proposed in this paper. Yang et al. (2001) conducted VOCs emission experiments in a test chamber with a volume of 0.5 m × 0.4 m × 0.25 m. They studied the emission of VOCs from two different particleboards (PB1 and PB2) [3]. The geometric dimensions of PB1 and PB2 are 0.212 m × 0.212 m × 0.0159 m. The concentration of VOCs in the air was measured at different times in the experiment. Since the particleboard in the experiment is a single layer, in order to verify the model proposed in this paper it can be assumed that the particleboard is composed of three layers of materials with the same physical properties. The initial concentration and mass diffusion coefficient of VOCs in each layer are the same, and the material-air partition coefficients of the different layers are also the same, namely C_{1,0} = C_{2,0} = C_{3,0}, D_1 = D_2 = D_3, K_{1,a} = K_{2,a} = K_{3,a}.
Meanwhile, it is assumed that the thicknesses of the three layers are δ_1 = 0.012 m, δ_2 = 0.002 m, and δ_3 = 0.0019 m, respectively. Based on the above assumptions, the model proposed in this paper can be used to predict the VOCs emission in the particleboard experiments. The predicted concentrations agree well with the measured data. As experimental results for VOCs emission from three-layer building materials have not been reported, such data cannot yet be compared with the model proposed in this paper; experiments with three layers of materials are proposed to further verify the model. The calculated VOCs concentrations at the material-air interface and in the bulk air differ, which again shows that a model ignoring the mass transfer resistance of the boundary layer cannot fully reflect the real situation.

Conclusion
In this paper, a simple mathematical model is proposed to predict the VOCs emission process in three-layer building materials. The model takes into account the mass diffusion coefficient in the material and the mass transfer resistance of the air boundary layer, and a general solution based on the Laplace transform is obtained. The concentration of VOCs in the air can be expressed by two rapidly converging infinite series. Therefore, this model can easily predict the concentration in air of VOCs emitted from three-layer building materials. The accuracy of the model predictions is verified against experimental results from the literature, and the predicted values are consistent with the experimental data. The calculations also show that a model ignoring the boundary layer resistance cannot fully reflect the real situation.
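As a rough cross-check of the analytical series solution described above, the governing equations (1)-(12) can also be integrated numerically. The sketch below is an explicit finite-volume discretization written for illustration only: all parameter values are invented placeholders rather than fitted values from the experiments, and the explicit scheme is exactly the kind of slow alternative the Laplace-transform solution is meant to avoid.

```python
import numpy as np

# Placeholder parameters (illustrative only, not values from this paper)
D     = np.array([1.0e-10, 5.0e-11, 2.0e-10])  # diffusion coefficients D_m, m^2/s
K     = np.array([3000.0, 5000.0, 2500.0])     # partition coefficients K_{m,a}
C0    = np.array([1.0e5, 1.0e5, 1.0e5])        # initial concentrations C_{m,0}, ug/m^3
delta = np.array([0.012, 0.002, 0.0019])       # layer thicknesses, m (bottom to top)
h     = 1.0e-3          # gas-phase mass transfer coefficient, m/s
L     = 0.4             # load ratio (material area / room volume), 1/m
N     = 1.0 / 3600.0    # room air exchange rate, 1/s

ncell = 30                                     # grid cells per layer
layer = np.repeat(np.arange(3), ncell)
dx    = np.repeat(delta / ncell, ncell)
Dm, Km = D[layer], K[layer]
Ce    = C0[layer] / Km                         # gas-phase equilibrium concentration C_e
Ca    = 0.0                                    # VOC concentration in room air

# Conductance between neighbouring cells for the flux q = -D_m*K_m * dCe/dx;
# working with C_e keeps the profile continuous across layer interfaces (eq. 4).
g = 2.0 / (dx[:-1] / (Dm[:-1] * Km[:-1]) + dx[1:] / (Dm[1:] * Km[1:]))

dt = 0.25 * float(np.min(dx**2 / Dm))          # crude explicit stability limit
t, t_end = 0.0, 24 * 3600.0
while t < t_end:
    q_int = g * (Ce[:-1] - Ce[1:])             # fluxes between cells (positive upward)
    q_top = h * (Ce[-1] - Ca)                  # boundary-layer flux, eq. (11)
    dCe = np.empty_like(Ce)
    dCe[0]    = -q_int[0]                      # zero flux at the bottom boundary
    dCe[1:-1] = q_int[:-1] - q_int[1:]
    dCe[-1]   = q_int[-1] - q_top
    Ce += dt * dCe / (Km * dx)                 # update material cells
    Ca += dt * (L * q_top - N * Ca)            # room mass balance, eq. (12)
    t  += dt

print(f"Air-phase VOC concentration after 24 h: {Ca:.3g} ug/m^3")
```

With a finite h the interface equilibrium concentration stays above the room-air concentration, which is the behaviour used above to argue that models neglecting the boundary-layer resistance cannot fully reflect the real situation.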
2021-08-03T00:06:01.500Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "fa876ce30443b3c8edf0445be0310db283b4c59d", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/33/e3sconf_aesee2021_03047.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cc359c28950d438276d617f6b32326f2bff3a571", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Mathematics" ], "extfieldsofstudy": [ "Environmental Science" ] }
1648414
pes2o/s2orc
v3-fos-license
Communities of microbial eukaryotes in the mammalian gut within the context of environmental eukaryotic diversity Eukaryotic microbes (protists) residing in the vertebrate gut influence host health and disease, but their diversity and distribution in healthy hosts is poorly understood. Protists found in the gut are typically considered parasites, but many are commensal and some are beneficial. Further, the hygiene hypothesis predicts that association with our co-evolved microbial symbionts may be important to overall health. It is therefore imperative that we understand the normal diversity of our eukaryotic gut microbiota to test for such effects and avoid eliminating commensal organisms. We assembled a dataset of healthy individuals from two populations, one with traditional, agrarian lifestyles and a second with modern, westernized lifestyles, and characterized the human eukaryotic microbiota via high-throughput sequencing. To place the human gut microbiota within a broader context our dataset also includes gut samples from diverse mammals and samples from other aquatic and terrestrial environments. We curated the SILVA ribosomal database to reflect current knowledge of eukaryotic taxonomy and employ it as a phylogenetic framework to compare eukaryotic diversity across environment. We show that adults from the non-western population harbor a diverse community of protists, and diversity in the human gut is comparable to that in other mammals. However, the eukaryotic microbiota of the western population appears depauperate. The distribution of symbionts found in mammals reflects both host phylogeny and diet. Eukaryotic microbiota in the gut are less diverse and more patchily distributed than bacteria. More broadly, we show that eukaryotic communities in the gut are less diverse than in aquatic and terrestrial habitats, and few taxa are shared across habitat types, and diversity patterns of eukaryotes are correlated with those observed for bacteria. These results outline the distribution and diversity of microbial eukaryotic communities in the mammalian gut and across environments. INTRODUCTION A rich understanding of the distribution of microbial diversity across environments has emerged from high-throughput sequencing studies in the past decade. These studies have described many spatial and temporal patterns of variability within environments and have defined the major divisions in microbial community composition (Nemergut et al., 2013). Salinity represents the primary division among environmental samples for bacterial and archaeal communities (Lozupone and Knight, 2007;Auguet et al., 2010;Wang et al., 2011), while the vertebrate gut has the most distinct bacterial communities (Ley et al., 2008b). Studies characterizing microbial diversity deeply across hundreds to thousands of samples are now common for bacteria (e.g., the Human Microbiome Project, the Earth Microbiome Project, MetaHIT), but are just beginning for microbial eukaryotes (Tara Oceans, ICOMM, BioMarks). As a result, progress characterizing the distribution of protist diversity lags behind our knowledge of bacteria, but morphological surveys (Larsen and Patterson, 1990;Patterson, 1996;Foissner, 2006;Weisse, 2008) combined with recent molecular data (Amaral-Zettler et al., 2009;Caron, 2009;Baldwin et al., 2013;Bates et al., 2013) provide a foundation of knowledge on the biogeography of protists across environments. 
Our understanding of the diversity and function of hostassociated microbial communities has grown exponentially in recent years, fueled by high-throughput sequencing and motivated by the realization that microbes have a profound influence on their host (McFall-Ngai et al., 2013;Sommer and Backhed, 2013). There are many commonalities in the bacterial taxa that comprise the microbiota across mammals, with the phyla Bacteroidetes and Firmicutes being predominant components (Ley et al., 2008b;Muegge et al., 2011). Overall, the mammalian gut harbors lower bacterial diversity and fewer phyla-level taxa than other environments (Ley et al., 2006). Across mammals, microbiota composition varies according to host phylogeny and diet (Ley et al., 2008b;Russell et al., 2014), and the composition of the human microbiota resembles that of our primate relatives (Ley et al., 2008b). Within humans gut microbiota is influenced by diet, health status, and age Lozupone et al., 2012). In addition, adoption of a western lifestyle, characterized by diets rich in processed food, antibiotic usage, and hygienic habits, has a particularly strong influence on the microbiota (De Filippo et al., 2010;Yatsunenko et al., 2012;Ursell et al., 2013). Diversity of the human bacterial microbiota has clearly declined in Western populations compared to populations with traditional agrarian lifestyles (De Filippo et al., 2010;Cho and Blaser, 2012;Lozupone et al., 2012;Yatsunenko et al., 2012). Progress characterizing the eukaryotic component of the mammalian microbiome lags behind bacteria because highthroughput sequencing based investigations into the diversity of the mammalian microbiota have focused almost exclusively on bacteria Andersen et al., 2013). The mammalian intestinal tract is home to many eukaryotes, including animals (e.g., helminths) and protists (e.g., amoebae and flagellates), and these taxa have been investigated for decades from a parasitological point of view with microscopy and targeted molecular approaches (Bogitsh et al., 2005). Studies of the eukaryotic component of the mammalian microbiota from a community perspective are beginning to come online, though many questions remain to be investigated (Andersen et al., 2013). Although sample sizes are generally small to date, these studies have shown that anaerobic fungi are dominant in mice (Scupham et al., 2006). Western human fecal communities include Blastocystis (Scanlan and Marchesi, 2008) and fungi (Dollive et al., 2012), while a survey of a single African individual revealed higher microbial eukaryote diversity (Hamad et al., 2012). The diversity of the eukaryotic microbiota in the human gut has not yet been systematically investigated from a community perspective in nonwestern populations. These populations provide an important perspective for understanding the eukaryotic microbiota that humans have co-evolved with over millions of years. Eukaryotic microbes in the gut are generally considered parasites, and have long been recognized to contribute to host morbidity and mortality (Bogitsh et al., 2005). However, many are commensal (Bogitsh et al., 2005), or play beneficial roles as probiotics (McFarland and Bernasconi, 1993) or cellulose degraders (Kittelmann and Janssen, 2011). 
Further, increasing evidence suggests that eliminating the diverse microbial community that co-evolved with mammals over millions of years is detrimental to host health (Cho and Blaser, 2012;Lozupone et al., 2012), in support of the Old Friends Hypothesis (or hygiene hypothesis) (Rook, 2012). Eukaryotic microbes were part of our ancestral gut community and intestinal helminths were nearly universal (Goncalves et al., 2003). In humans, the transition to modern lifestyles is associated with dramatically lower diversity and prevalence of intestinal helminths, and with a rise in the prevalence of autoimmune disease (Rook, 2012). Yet, we know little about their role in healthy people. Recent analyses of common protists in the gut suggests that they may be part of the healthy microbiota in humans (Petersen et al., 2013). Here, we use high-throughput sequencing to characterize eukaryotic communities found in the vertebrate gut from a diverse collection of mammalian fecal samples, including humans from the US and from remote communities in Malawi. To provide a broader context for understanding of the diversity of microbial eukaryotes in the gut, we also characterized a collection of samples from a wide range of other environments, including human skin, marine water, freshwater, soil, and air. The bacterial communities in these samples were also characterized to enable comparison of eukaryotic and bacterial biodiversity. In order to gain deeper insight into the distribution of eukaryotic diversity, we curated the SILVA reference database (Pruesse et al., 2007) so that both the taxonomy assigned to reference sequences and the phylogenetic tree constructed from these reference sequences reflects current knowledge. Eukaryotic environmental sequences are placed within this explicit phylogenetic context and assess the distribution of eukaryotic clades across environments. SAMPLE SET We selected 185 samples that span a wide range of environments in order to assess broad patterns in eukaryotic communities (Table S1). The dataset analyzed here was chosen to include individuals from geographically diverse populations with contrasting lifestyles to enable testing the hypothesis that the transition to modern, highly hygienic lifestyles are correlated with low levels of diversity of eukaryotic microbes. We included samples from 23 individuals that reside in agrarian communities in Malawi that follow traditional lifestyles and 16 samples from 13 individuals residing in the US (Boulder, CO and Philadelphia, PA) and follow modern lifestyles (Table 1). Three individuals from Boulder were sampled at two time points 2 months apart (Costello et al., 2009). The US populations live in urban or suburban areas, consumed typical western diets, and did not report any health problems at the time of sampling (Costello et al., 2009;Yatsunenko et al., 2012). Individuals from populations in Malawi ate diets rich in maize, legumes, and other plants (Table S1 from Yatsunenko et al., 2012) and were healthy and well-nourished at the time of sampling (Yatsunenko et al., 2012;Smith et al., 2013). These samples have been described in detail previously and bacterial diversity was previously reported (Costello et al., 2009;Yatsunenko et al., 2012;Smith et al., 2013). In addition, we included 22 samples from other mammals, also previously described and characterized for bacteria (Ley et al., 2008a;Muegge et al., 2011), to gain insight into the diversity of eukaryotic human microbiota relative to other mammals. 
Collection of the human fecal samples for these previously published studies was done according to protocols approved by Human Research Committees at the institutions involved which allow samples to be used for further research. De-identified DNA was sent to the University of Colorado for amplification. Collection of skin and oral samples was approved by the University of Colorado Human Research Committee (protocol 0109.23), which allows the samples to be used for further research. Finally, we included samples from wide variety of environments, many of which have been previously characterized for bacterial or fungal communities (Table S1). These include air sampled over terrestrial environments (Bowers et al., 2011a(Bowers et al., , 2012, soil Ramirez et al., 2010;Eilers et al., 2012), freshwater (Shade et al., 2012), marine water, lichens (Bates et al., 2011), leaf litter (McGuire et al., 2012, and human oral and skin samples (Costello et al., 2009;Verhulst et al., 2011). The sequence data and MiMARKs (Yilmaz et al., 2011) compliant metadata is available for this study at the QIIME database http://www.microbio.me/qiime/: study #1519 for eukaryotes and #1517 for bacteria and at EBI (accession numbers ERP006039 and ERP005135). MICROBIAL COMMUNITY CHARACTERIZATION Sequences were PCR amplified with primers 515f and 1119r (Bates et al., 2012). The forward primer 515f (5 GTGCCAGCMGCCGCGGTAA 3 ) is 3-domain universal and 1119r (5 GGTGCCCTTCCGTCA 3 ) is targeted toward eukaryotes. Primer specificity to eukaryotes and predicted amplification efficiency of eukaryotic lineages was assessed with the taxa coverage module in PrimerProspector . This program assesses the complementarity between the primer sequence and a reference database, in this case SILVA 111, and assigns a score based on the number of mismatches or gaps between the primer sequence and the reference, and mismatches as the 3 end of the primer are more heavily penalized (http:// pprospector.sourceforge.net/tutorial.html). Taxa coverage was assessed at three thresholds corresponding to three levels of specificity (Table S2). A threshold of 0.5 is predicted to generate efficient amplification and allows up to one mismatch at the 5 end of the primer. The threshold of 1 allows one mismatch at the 3 end of the primer or two mismatches in other primer regions, and threshold 2 allows 2-5 mismatches at the 3 or 5 ends of the primer respectively and amplification is expected to be poor or non-existent. This primer pair has high predicted specificity to eukaryotes, matching 86-90% of eukaryotic sequences but less than 0.5% of bacterial and archaeal sequences at a threshold of 0.5 and 1, respectively (Table S2). Many of the taxa expected to be in the mammalian gut based on parasitological studies are predicted to amplify well, including Dientamoeba, Entamoeba, Blastocystis, Balantidium, parabasalids, and nematodes (Table S2). However, there are two mismatches between the Giardia 18S sequence and the reverse primer suggesting a low efficiency (Table S2). DNA was extracted with the MoBio PowerSoil kit following EMP standard protocols. PCR amplification was done in triplicate with an annealing temperature of 50C for 40 cycles. These permissive conditions were used to amplify the broadest range of eukaryotic taxa. Quantitation and pooling were done according to EMP standard protocols. The final pool was sent to Roche Core Facility. The libraries were amplified, sequenced and processed at the Roche Core Facility. 
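The primer coverage assessment described above weights mismatches near the 3′ end of the primer more heavily than mismatches elsewhere. The sketch below illustrates that weighting idea only; it is not the PrimerProspector implementation, the penalty weights and threshold are invented, and the template site shown is a made-up example rather than a SILVA sequence.

```python
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "M": "AC", "R": "AG",
         "W": "AT", "S": "CG", "Y": "CT", "K": "GT", "N": "ACGT"}

def primer_penalty(primer: str, site: str, three_prime_len: int = 5,
                   three_prime_weight: float = 1.0, other_weight: float = 0.4) -> float:
    """Sum mismatch penalties between a primer and a same-length template site,
    penalizing mismatches in the last `three_prime_len` bases (the 3' end) more."""
    penalty = 0.0
    for i, (p, t) in enumerate(zip(primer.upper(), site.upper())):
        if t not in IUPAC.get(p, p):                 # degenerate primer bases allowed
            at_three_prime = i >= len(primer) - three_prime_len
            penalty += three_prime_weight if at_three_prime else other_weight
    return penalty

fwd_515f = "GTGCCAGCMGCCGCGGTAA"
template = "GTGCCAGCAGCCGCGGTAA"   # hypothetical 18S site, not a real SILVA record

score = primer_penalty(fwd_515f, template)
print(f"penalty = {score}")        # 0.0 here; a low penalty predicts efficient amplification
```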
Amplification was done according to the emPCR Amplification Method Manual-Lib-A LV GS FLX Titanium Series with the following edits for long amplicons. Using the Titanium Lib-A emPCR kit, the emulsions were made with A beads and A amp primers only and the following reagents: 1050 µL MBGW, 1500 µL emPCR additive, 860 µL 5× amplification mix, 300 µL Primer (A), 200 µL Enzyme mix, and 5 µL PPiase. The cycling conditions were 4 min at 94C followed by 50 cycles of 30 s at 94C and 10 min at 60C, ending with a hold at 10C. The library was then run as a standard XL+ run. This FLX+ run was sequenced with the standard flow order (400 cycles of TACG nucleotide flows), following the instructions in the Sequencing Method Manual-GS FLX+ Series-XL+ kit, as can be found on the www.my454.com website. DATA PROCESSING AND QUALITY FILTERING Data processing was done at the Roche Core Facility according to the GS FLX System Software Manual modified to optimize performance for metagenomic amplicon sequences. In order to generate high quality data for amplicons metagenomic applications, the default pipeline was tuned to meet the data quality requirements of the QIIME pipeline. The data was processed using 26amp_sl1000 pipeline which has the following tuning steps modified: (1) vfScanLimit was increased from the default of 700 to 1000, (2) the valley filter setting vfTrimBackScaleFactor was increased from the default value by a factor of 0.5, and (3) the quality filter setting QscoreTrimFactor was modified from the default value to a more stringent value. The Amplicon pipeline template was used to generate the modified pipeline XML file with the rCAFIE algorithm turned on. Usearch version 6.1 was used to screen sequence for chimeras (Edgar, 2010). Sequences were additionally filtered for quality using split_libraries within QIIME version 1.5.0 (Caporaso et al., 2010b). Quality filtering excluded sequences with an average quality score of 25 or lower, reads longer than 1200 bp or shorter than 200 bp and reads with more than 5 ambiguous bases. We found that sequence quality dropped off significantly toward the end of the read, so we employed a strategy truncating sequences when quality scores that fell below 25 in a sliding window of 50 bp. These truncated reads were retained as long as they passed other quality filters and these averaged 444 bp in length. In order to quantify concordance in the diversity patterns of bacterial and eukaryotic communities we sequenced the bacterial communities as well as the eukaryotic communities. Bacteria were sequenced with the 515f/806r primers on the Illumina GAIIx platform at Washington University. Bacterial data was processed using standard protocols within the QIIME database (www.microbio.me/qiime). Archaea are also amplified with this primer set, but were excluded from the analysis in order to focus on the eukaryote to bacteria comparison and because there were too few Archaea OTUs for meaningful comparison. Low abundance OTUs, those containing less than 0.05% of the total reads in the dataset, were filtered out as recommended for Illumina sequence data (Bokulich et al., 2013). The samples were filtered to only include those 113 samples that had at least 150 sequences per samples in the eukaryotic data, and of these, samples with fewer than 3000 sequences were excluded from the analysis. The full dataset was used for taxon-based analyses and all samples were rarefied to 3000 sequences per sample for diversity analyses. 
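The read-level quality controls described above (length between 200 and 1200 bp, mean quality of at least 25, no more than 5 ambiguous bases, and truncation where a 50-bp sliding window of quality scores drops below 25) reduce to a few simple checks. The sketch below is a simplified stand-in for that filtering logic, not the QIIME split_libraries code, and the example read is fabricated.

```python
def sliding_window_truncate(seq, quals, window=50, min_q=25):
    """Truncate a read at the first window whose mean quality falls below min_q."""
    if not quals:
        return seq, quals
    for start in range(0, max(1, len(quals) - window + 1)):
        win = quals[start:start + window]
        if sum(win) / len(win) < min_q:
            return seq[:start], quals[:start]
    return seq, quals

def passes_filters(seq, quals, min_len=200, max_len=1200, min_mean_q=25, max_ambig=5):
    """Length, mean-quality and ambiguous-base filters applied to each read."""
    if not (min_len <= len(seq) <= max_len):
        return False
    if sum(quals) / len(quals) < min_mean_q:
        return False
    return seq.upper().count("N") <= max_ambig

# Fabricated example: quality collapses toward the end of the read
seq   = "ACGT" * 150                     # 600 bp
quals = [38] * 500 + [12] * 100
seq, quals = sliding_window_truncate(seq, quals)
print(len(seq), "bp retained;", "kept" if passes_filters(seq, quals) else "discarded")
```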
OTU PICKING AND TAXONOMY ASSIGNMENT Eukaryotic sequence reads from the 454 FLX+ run were clustered into OTUs with a 97% similarity threshold, which was chosen to minimize the impact of sequencing error in inflating OTU numbers (Stoeck et al., 2010;Bates et al., 2013). Reads were clustered into OTUs according to the open reference protocol (http://qiime.org/tutorials/open_reference_illumina_processing. html) using UCLUST (Edgar, 2010) within QIIME. This involves first clustering reads against the curated SILVA 108 eukaryotic database clustered at 97%, and these OTUs inherited the reference taxonomy. Sequences that failed to assign to the reference dataset were then clustered at 97% de novo with UCLUST. Taxonomy was assigned to these de novo sequences in one of two ways in order to maximize the taxonomic information and reliability. First, taxonomy was assigned using BLAST against the SILVA 108 97% reference database with an e-value cutoff of e-100. In cases where the e-value was less than e-100 taxonomy was assigned using the RDP classifier trained with the SILVA 108 97% reference set at genus level. Taxonomy assignments were also confirmed in using the PR2 reference database (Guillou et al., 2013). The resulting OTUs were filtered to exclude bacteria, archaea, vertebrates (thus removing host DNA), and plants (to exclude dietary sources) as well as non-SSU rDNA sequences. Finally, singleton sequences were excluded from the analysis to reduce the likelihood of including PCR and sequencing artifacts. After filtering, we excluded samples from further analysis that had fewer than 150 eukaryotic sequences/sample. This left 3883 OTUs from 113 samples (out of 185 total samples), corresponding to 84,576 sequences. Downstream diversity analyses used data rarefied to 150 sequences per sample, and taxonomy plots used the full dataset. In order to take full advantage of this dataset we assessed the taxonomic composition of human gut samples falling below the 150 sequences per sample threshold. In this case, a taxon (OTU) was considered present if the OTU was represented by least 5 sequences in the sample in question. Although 150 sequences per sample is a low number by high-throughput sequencing standards, this sequencing depth adequately captures the diversity present ( Figure S1). Direct comparison of numbers of bacterial and eukaryotic taxa is not possible because two different sequencing platforms were used here and the number of sequences per sample is much lower for eukaryotes. However, we can compare the relative differences in alpha diversity between sample types for eukaryotes and bacteria respectively, and sequencing depth for both domains adequately sample diversity. Rarefaction curves of Faith's Phylogenetic Diversity metric level off by 150 sequences per sample, particularly for host-associated samples (Figure S1). Similarly, we have adequate sampling of bacterial diversity and rarefaction curves are leveling off by 3000 sequences per sample for host-associated samples ( Figure S1). A phylogenetic tree reflecting the current understanding of eukaryotic relationships was constructed using the curated SILVA alignment as a template and the SILVA 108 tree as a constraint on the backbone relationships (see SILVA curation below). The representative set of sequences from this study was first aligned to the SILVA 108 97% representative set with PyNAST (Caporaso et al., 2010a). 
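The two-tier taxonomy assignment for de novo OTUs described above (inherit the taxonomy of a strong BLAST hit against the SILVA 97% reference set, otherwise fall back to the RDP classifier) is essentially a single decision rule, combined with removal of non-target and singleton OTUs. The sketch below assumes hypothetical blast_top_hit and rdp_classify helper functions and an illustrative list of excluded lineages; it is a schematic of the logic (interpreting the e-value cutoff as: hits at least as significant as 1e-100 keep the BLAST taxonomy), not the QIIME commands used in the study.

```python
EVALUE_CUTOFF = 1e-100

def assign_taxonomy(otu_seq, blast_top_hit, rdp_classify):
    """Assign taxonomy to a de novo OTU representative sequence.

    blast_top_hit(seq) -> (taxonomy_string, evalue) against the SILVA 97% set;
    rdp_classify(seq)  -> genus-level taxonomy from an RDP-style classifier.
    Both helpers are hypothetical stand-ins for the actual tools.
    """
    taxonomy, evalue = blast_top_hit(otu_seq)
    if evalue is not None and evalue <= EVALUE_CUTOFF:
        return taxonomy, "blast"
    return rdp_classify(otu_seq), "rdp"

# Illustrative non-target lineages removed before downstream analyses
EXCLUDED = ("Bacteria", "Archaea", "Vertebrata", "Embryophyta")

def keep_otu(taxonomy: str, total_count: int) -> bool:
    """Drop prokaryotic, host (vertebrate), dietary-plant and singleton OTUs."""
    if total_count < 2:                               # singleton filter
        return False
    return not any(bad in taxonomy for bad in EXCLUDED)
```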
Representative sequences for each of the 3883 OTUs that aligned to the SILVA reference alignment were used to build a phylogenetic tree for diversity analysis and to assess patterns of phylogenetic groups by environment. The resulting alignment was dynamically filtered to remove the 10% most entropic positions and positions with greater than 95% gaps. This alignment was then used to build a phylogenetic tree with the topology constrained to the SILVA 108 97% tree (see below) in RAxML (Stamatakis, 2006). This tree was used for visualization in TopiaryExplorer (Pirrung et al., 2011), which allows branches to be colored according to sample metadata or taxonomy. The p-test from Martin (2002) and UniFrac test (Lozupone and Knight, 2005) were performed on the tree to assess whether the distribution of sequences from particular environments across the tree were significantly different than random, implemented in the beta_significance script within QIIME. In order to visually compare the diversity in the vertebrate gut to other environments, we filtered the tree to include equal sample numbers and equal (rarefied) sequences per sample. This was done by first filtering the OTU table to include the 32 fecal samples with more than 150 sequences per sample and a subsampled set of 32 environmental samples spanning the range of environments, and then rarefied to 150 sequences per sample for both eukaryotic 18S and bacterial 16S. This normalized OTU table was used to filter tips from the 16S and 18S trees. Diversity analyses were carried out in QIIME using data rarefied to 150 sequences per sample for eukaryotes and 3000 sequences per sample for bacteria. The differences in rarefaction level are a result of the different sequencing platforms used for these datasets. Phylogenetically informed analyses of alpha and beta diversity [phylogenetic distance and unweighted UniFrac (Lozupone and Knight, 2005), respectively] utilized the tree described above. Non-phylogenetic beta diversity metrics performed poorly because very few OTUs were found across multiple sample types ( Table 2). Unweighted UniFrac distance matrices were used in Analysis of variance tests (ANOSIM) to assess statistical differences across environments within QIIME. To assess the impact of unbalanced numbers of samples across habitat types, we randomly subsampled the dataset to include equal numbers of samples from each environment and then recalculated diversity metrics and performed ANOSIM tests. This procedure was repeated 1000 times. We visualized the differences in betadiversity across sample types with non-metric multidimensional scaling (NMDS) plots, which were constructed in the software Primer E (Clarke and Gorley, 2006). We took advantage of the long sequence reads from the 454 FLX+ to further investigate the phylogenetic position of Entamoeba and Blastocystis, the two most common taxa detected in the gut. We aligned Entamoeba and Blastocystis representative sequences to the reference taxa from the PR2 database, and then constructed maximum likelihood phylogenies with RAxML. These trees were constrained to the reference phylogeny for these clades, which was derived from the literature (Stensvold et al., 2011;Alfellani et al., 2013). The placement of Entamoeba and Blastocystis sequences was used to confirm the taxonomic identities of these OTUs (Table 1). CURATION OF THE SILVA EUKARYOTIC DATABASE The SILVA 108 ribosomal database (Pruesse et al., 2007) was downloaded from SILVA (http://www.arb-SILVA.de/). 
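The repeated balanced-subsampling procedure used for the ANOSIM tests (equal numbers of samples per habitat, repeated 1000 times) can be sketched with numpy alone. The anosim_r function below is a bare-bones rank-based R statistic rather than the QIIME or Primer E implementations, and it assumes the unweighted UniFrac distances and habitat labels are available as a square numpy array dm and a list of strings habitats.

```python
import numpy as np
from scipy.stats import rankdata

def anosim_r(dm, groups):
    """ANOSIM R statistic from a square distance matrix and group labels."""
    n = dm.shape[0]
    iu = np.triu_indices(n, k=1)
    ranks = rankdata(dm[iu])
    same = np.array([groups[i] == groups[j] for i, j in zip(*iu)])
    r_within, r_between = ranks[same].mean(), ranks[~same].mean()
    return (r_between - r_within) / (len(ranks) / 2.0)

def balanced_anosim(dm, habitats, n_iter=1000, seed=42):
    """Repeatedly subsample equal numbers of samples per habitat, then compute R."""
    rng = np.random.default_rng(seed)
    habitats = np.asarray(habitats)
    per_group = min(int(np.sum(habitats == h)) for h in np.unique(habitats))
    r_values = []
    for _ in range(n_iter):
        idx = np.concatenate([
            rng.choice(np.where(habitats == h)[0], per_group, replace=False)
            for h in np.unique(habitats)
        ])
        r_values.append(anosim_r(dm[np.ix_(idx, idx)], habitats[idx]))
    return np.mean(r_values), np.std(r_values)

# Example usage (dm and habitats assumed to be loaded already):
# mean_r, sd_r = balanced_anosim(dm, habitats)
```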
Sequences were initially filtered to remove unclassified environmental sequences. The remaining ∼55,000 sequences were dereplicated by clustering at 97% with UCLUST, resulting in ∼11,000 sequences. A representative set was then chosen for these OTUs based on the longest sequence. The filtered out environmental sequences were then clustered against the representative set of 97% OTUs using UCLUST ref within QIIME. Those sequences that did not match the reference dataset were then clustered at 97% de novo and the longest representative sequence chosen for each cluster. This resulted in a final SILVA eukaryotic 97% representative set with 14,236 sequences. The 97% reference dataset was aligned with PyNAST (Caporaso et al., 2010a) in QIIME with a threshold of 70% similarity and a template alignment from Katz et al. (2011) [TreeBase study 11336, matrix M8584; (Katz et al., 2011)]. The resulting alignment was dynamically filtered to remove the 20% most entropic positions and positions with more than 90% gaps. A phylogenetic tree was constructed with RAxML version 7.3.0 (Stamatakis et al., 2008), using the tree topology from the multigene study of Parfrey et al. (2010) with updates based on subsequent papers (e.g., Adl et al., 2012) as a constraint. The database taxonomy was curated to reflect current views of eukaryotic taxonomy and maximize the taxonomic information available for environmental sequences. Major clade information was added based on Parfrey et al. (2010) and Adl et al. (2012). To maximize the informativeness of the SILVA data set, high-level taxonomy was assigned to uncultured environmental sequences by placing these uncultured reads into the tree of SILVA representative sequences with the RAxML EPA algorithm (Berger et al., 2011) and assessing their position in a phylogenetic tree. Sequences that were nested within clades were assigned taxonomy based on that clade at a high level (e.g., Ciliate or Fungi). Sequences that were mislabeled (i.e., sequence labeled as fungi that fell within the plants) were identified in the tree, confirmed by BLAST and then removed from the representative set. The curated SILVA 108 database is available at http://qiime.org/home_static/dataFiles.html. EUKARYOTIC DIVERSITY IN THE HUMAN GUT Eukaryotic microbes are common components of the human gut microbiota in healthy individuals. Blastocystis, Entamoeba, trichomonads, and yeast were frequently detected in human gut samples (Figure 1). Closer inspection of the taxa reveals that most are likely commensal rather than pathogens. For example, Entamoeba was detected in both populations. While the genus Entamoeba includes E. histolytica, the causative agent of the deadly amoebic dysentery (Bogitsh et al., 2005), the vast majority of Entamoeba sequences detected here fall within the commensal species Entamoeba coli, E. dispar, and E. hartmanni (Table 1). Entamoeba histolytica was detected in low abundance in two individuals that also harbored E. dispar. Blastocystis was abundant in many samples (Figure 1), and represented by subtypes ST1, ST2, and ST3 (Table 1). Historically, Blastocystis has been considered a pathogen and it is associated with Irritable Bowel Syndrome (Yakoob et al., 2010;Poirier et al., 2012). However, the clinical importance of Blastocystis, its pathogenicity, and variation in pathogenicity among subtypes, is widely debated (Tan et al., 2010;Coyle et al., 2012;Scanlan and Stensvold, 2013). 
Some evidence suggests that Blastocystis is a normal component of the microbiota in many individuals, perhaps even a beneficial component, as it has been detected at high prevalence in healthy people (Scanlan and Marchesi, 2008; Petersen et al., 2013; Andersen et al., submitted) and its presence is negatively correlated with intestinal disease (Petersen et al., 2013), but see Cekin et al. (2012). High prevalence of Blastocystis has been reported in other epidemiological studies of African countries, up to 100% in a Senegalese cohort, half of which had no gastrointestinal symptoms (El Safadi et al., 2014). Many other taxa that populate parasitology textbooks were also detected at lower levels, including Chilomastix, nematodes, and other parabasalids. We did not detect common gut symbionts such as Dientamoeba (Parabasalia), Cryptosporidium (Apicomplexa), or Giardia (Diplomonadida). The primers used here are a poor match for Giardia (Table S2) and may have failed to amplify Giardia DNA. The primers are predicted to work well with Cryptosporidium, but our DNA extraction method (bead beating rather than freeze-thaw cycles) may have been insufficient to break open the robust spores of Cryptosporidium (and similar problems may further hinder our ability to detect Giardia). Dientamoeba is also predicted to amplify with our primers (Table S2). While its prevalence is generally quite high in Europe, Dientamoeba prevalence is variable worldwide and generally low (less than 5%) in Africa (Barratt et al., 2011). However, specific diagnostic assays would be necessary to rule out the presence of these taxa with any confidence. We assessed eukaryotic diversity across two geographically distant populations whose inhabitants follow either traditional, agrarian lifestyles (Malawi) or modern, urban lifestyles (US). However, our ability to compare eukaryotic diversity across populations is hampered by low counts of eukaryotic sequences in US individuals and young children. Taxa presence above was calculated based on OTUs represented by at least five sequences in a given sample. In order to compare diversity across populations and across sample types more broadly, we filtered out samples with fewer than 150 eukaryotic sequences. While all but three human fecal samples had greater than 150 sequences per sample in total, 27 samples fell below this threshold after removing sequences from bacteria, host, and dietary plants. These non-target taxa account for 94-100% of the sequences from all but one US sample and most children aged two and younger (Table 1). One sample from a three-year-old US child had a large portion of sequences derived from Entamoeba coli. The primer set used here targets eukaryotic 18S, has a low affinity for vertebrate 18S sequences, and successfully amplified the eukaryotic community in most samples, including environmental samples and mammalian feces (Table S1). We suspect that the high proportion of non-target sequences amplified in samples from the US and from small children reflects a lower eukaryotic biomass and/or diversity in these samples. This hypothesis requires further investigation, but is in line with other results. Previous studies report lower bacterial diversity in western populations and in young children (reviewed in Lozupone et al., 2012). Further, lower prevalence of gut symbionts is associated with the adoption of western lifestyles (Rook, 2012), and prevalence and diversity are lower in temperate regions compared to the tropics (Bogitsh et al., 2005; Harhay et al., 2010).
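The sample-filtering step described above (removing bacterial, host, and dietary-plant reads and then keeping samples with at least 150 remaining eukaryotic sequences) can be sketched roughly as follows. The OTU-table layout and the taxonomy labels used to flag non-target reads are assumptions for illustration.

```python
# Hedged sketch of the >=150-eukaryotic-sequences sample filter described above.
import pandas as pd

def filter_low_eukaryote_samples(otu_table: pd.DataFrame, taxonomy: pd.Series,
                                 min_seqs: int = 150) -> pd.DataFrame:
    """otu_table: OTUs (rows) x samples (columns) counts; taxonomy: OTU id -> label string."""
    # Labels treated as non-target here (bacteria, vertebrate host, dietary plants) are assumptions.
    non_target = taxonomy.str.contains("Bacteria|Vertebrata|Streptophyta",
                                       case=False, regex=True)
    euk_table = otu_table.loc[~non_target.reindex(otu_table.index, fill_value=False)]
    keep = euk_table.sum(axis=0) >= min_seqs   # samples with enough eukaryotic reads
    return euk_table.loc[:, keep]
```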
EUKARYOTIC MICROBIOTA IN THE MAMMALIAN GUT Mammals as a whole harbor a diverse community of eukaryotic microbes in their gut, and compositional differences follow host phylogeny and diet. The human gut microbiota is similar to that of other mammals, particularly other primates. Diet drives differences in bacterial community composition across mammalian species (Ley et al., 2008b; Muegge et al., 2011). We also see compositional differences according to diet in the eukaryotic communities. Herbivores make up most of our mammalian samples that successfully amplified, and they are differentiated between hindgut and foregut fermenters. The presence and absence of entire lineages varies according to dietary group; for example, only hindgut-fermenting herbivores harbor litostome ciliates and anaerobic fungi (e.g., Neocallimastix; Figure 1). Lineages that are present in multiple host species, such as Blastocystis and Entamoeba, show species-level divergence that tracks host phylogeny. Artiodactyls harbor Entamoeba bovis, while primates have Entamoeba coli and E. hartmanni (Table S1). Host specificity is also observed in the distribution of Blastocystis subtypes (Table S1). We detected Blastocystis ST1, ST2, and ST3 in humans (Table 1) and also in the primates (baboon and orangutan) (Table S1). Kangaroos, foregut-fermenting herbivores, had large numbers of Blastocystis ST8 (Figure 1; Table S1). Diversity patterns for eukaryotic microbes within the mammalian gut differ in two ways from those of bacteria. First, eukaryotic microbes show a patchy distribution across samples, such that the most abundant lineages in some samples are completely absent from others (Figure 1). In contrast, bacterial community composition at comparably high taxonomic levels is broadly consistent across individuals and across populations; e.g., Bacteroidetes and Firmicutes are generally the dominant phyla (Figure 1; Ley et al., 2008b; Consortium, 2012; Yatsunenko et al., 2012). Second, within a phylum-level lineage there is less diversity at the strain and species level for eukaryotes, even after controlling for differences in sequencing depth (Figure 2). This suggests that presence or absence of deep lineages may be more informative than variation at lower taxonomic levels for eukaryotes. DIVERSITY OF GUT MICROBIOTA COMPARED TO OTHER ENVIRONMENTS The microbial eukaryotic communities detected in the mammalian gut are quite distinct from environmental communities both at the OTU level, as seen in the low numbers of shared OTUs (Table 2), and at higher taxonomic levels (Figures 2, 3). Just 3% of non-fungal OTUs from the gut are shared with skin, terrestrial, and aquatic environments (Table 2). The composition of eukaryotic communities in the mammalian gut is significantly different from the composition found in environmental samples (ANOSIM p = 0.001, R = 0.76), and this is true for bacteria as well (ANOSIM p = 0.001, R = 0.94). Overall, beta-diversity patterns observed for eukaryotes are significantly similar to bacterial beta-diversity as assessed by Mantel tests comparing the unweighted UniFrac distance matrices (p = 0.001, R = 0.658; N = 113). The distinctiveness of gut communities can also be seen when the branches of the 18S and 16S trees are colored according to the environment where the sequences were detected (Figure 2). Sequences from the gut are significantly clustered in both 16S and 18S (Figure 2) as assessed by the phylogenetic test [p-test p < 0.001; (Martin, 2002)] and the UniFrac significance test (p < 0.001).
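A minimal sketch of the Mantel comparison mentioned above, assuming the eukaryotic and bacterial unweighted UniFrac matrices are available as scikit-bio DistanceMatrix objects restricted to the same samples; this is an illustration, not the authors' exact pipeline.

```python
# Hedged sketch: correlate 18S and 16S unweighted UniFrac distance matrices with a Mantel test.
from skbio.stats.distance import mantel

def compare_beta_diversity(dm_18s, dm_16s, permutations=999):
    # strict=False intersects the two matrices on shared sample ids before testing
    r, p, n = mantel(dm_18s, dm_16s, method="pearson",
                     permutations=permutations, strict=False)
    return {"r": r, "p": p, "n_samples": n}
```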
In accordance with previous observations, fewer lineages of eukaryotes reside in the mammalian gut than in other habitats, and those lineages that have successfully colonized the vertebrate gut have diversified as they have co-evolved with their hosts over millions of years. Similar patterns have also been observed for bacteria (Ley et al., 2006). Here, we see significantly lower levels of alpha diversity in gut communities compared to other environments for eukaryotes (t-test comparing Faith's phylogenetic distance in the gut vs. environmental samples: p < 0.001) and for bacteria (p < 0.001). The major eukaryotic clades discussed here follow Parfrey et al. (2010) and Adl et al. (2012) and are roughly equal to the phylum or superphylum level of bacteria. EUKARYOTIC COMMUNITIES ASSOCIATED WITH HUMAN SKIN RESEMBLE TERRESTRIAL SAMPLES Eukaryotic communities associated with human skin are composed mostly of fungi and have low diversity overall, in line with expectations from other studies (Paulino et al., 2006; Findley et al., 2013). Skin samples group with terrestrial samples in NMDS plots of unweighted UniFrac (Figure 4). Overlap in the fungi detected in skin and terrestrial samples accounts for much of this similarity; 70% of the OTUs on skin are fungi, and of these more than 80% (113 OTUs) are shared with soil or other terrestrial samples. The low taxonomic resolution of fungi with the 18S marker may inflate the number of shared OTUs to some extent (Schoch et al., 2012). Non-fungal OTUs detected on skin correspond to mites and a handful of low-abundance OTUs that are commonly found in soil, such as cercozoan flagellates. The overlap between skin and soil communities may reflect frequent contact between skin and soil, or with airborne microbes, which can have high abundances of soil-associated taxa (Bowers et al., 2011b). In support of this hypothesis, skin bacterial communities also frequently group with environmental samples (Figure 4). These results are suggestive, but are drawn from skin and soil samples taken in different locations within different studies (see Methods). Testing the hypothesis that skin communities resemble terrestrial environments because contact enables frequent dispersal requires samples from human skin and the surrounding environment, including dust and soil, collected at the same time. COMPARISON OF EUKARYOTIC COMMUNITIES IN OTHER HABITATS Our dataset includes samples from a range of environments and enables us to compare eukaryotic communities across environmental habitats. Microbial eukaryotic communities are highly differentiated across marine, freshwater, and terrestrial habitats as assessed by ANOSIM (Figure 4; ANOSIM R = 0.78, p = 0.001). The sample set analyzed here includes more soil and other terrestrial samples, such as lichens and leaf litter, than water samples (Table S1), but the differences across habitat types persist when the data is subsampled to equal sample numbers across habitat types (see Methods). For each of the 1000 sub-sampled trials, the divide between freshwater, marine, and terrestrial environments was highly significant and explains much of the variation (ANOSIM ranges: p = 0.001 to 0.005 and R = 0.60 to 0.65). These habitats were also significantly clustered in the 18S tree (p-test p = 0.001 for each pair of environments). Beta-diversity differences across environments are underlain by a strong differentiation in the high-level clades present across environments (Figure 3).
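The alpha-diversity comparison reported above (Faith's phylogenetic diversity in gut versus environmental samples, compared with a t-test) can be sketched as follows. The inputs are assumptions, and the use of Welch's t-test is a choice made here rather than a detail stated in the text.

```python
# Hedged sketch: Faith's PD per sample, then a gut-vs-environment t-test.
import numpy as np
from scipy.stats import ttest_ind
from skbio.diversity import alpha_diversity

def gut_vs_environment_pd(counts, sample_ids, otu_ids, tree, is_gut):
    # One Faith's phylogenetic diversity value per sample
    pd_values = alpha_diversity("faith_pd", counts, ids=sample_ids,
                                otu_ids=otu_ids, tree=tree)
    mask = np.asarray(is_gut, dtype=bool)
    gut, env = pd_values[mask], pd_values[~mask]
    t, p = ttest_ind(gut, env, equal_var=False)   # Welch's t-test (a choice made here)
    return t, p
```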
Some clades are restricted to one type of sample; for example, Amoebozoa (Entamoeba) and parabasalids are characteristic of fecal samples and cryptophytes comprise a large portion of the freshwater community, while the recently identified Picozoa clade (formerly "picobiliphytes"; Seenivasan et al., 2013) is restricted to marine environments. Yet, across all environments, diversity is dominated by just a few clades. Animals, fungi, alveolates, Cercozoa, and stramenopiles make up 79% of all sequences (Figure 3). At the OTU level very few taxa are shared across habitats (Table 2). Communities from environmental samples show a distinct separation between terrestrial and water samples, and between marine and freshwater samples, in beta-diversity plots (Figure 4). In accordance with previous studies that report salinity as the most important factor structuring bacterial and archaeal community composition (Lozupone and Knight, 2007; Auguet et al., 2010; Wang et al., 2011), we also see a major divide in bacterial community composition between freshwater and marine habitats (Figure 4). Eukaryotic taxa also cross the saline/nonsaline boundary infrequently (e.g., Shalchian-Tabrizi et al., 2008; Logares et al., 2009; Brate et al., 2010).
FIGURE 4 | NMDS plots of unweighted UniFrac reveal separation across major environmental categories. Plots (A) Eukaryotes and (B) Bacteria show the distinction between fecal samples (red and orange) and those from other environments, including skin (pink). Air samples were collected over terrestrial habitats.
In our data, compositional differences between freshwater and marine eukaryotic communities are highly significant (ANOSIM p < 0.001, R = 0.58), though our dataset includes a limited number of samples. Interestingly, the differences between aquatic and terrestrial environments are also significant and explain more variation in community structure (ANOSIM R = 0.71 for terrestrial vs. freshwater and R = 0.85 for marine vs. terrestrial comparisons). Further studies that include large numbers of samples from all three habitat types, preferably from consistent geographic locations, will be necessary to determine the deepest divisions in eukaryotic community composition across environments. CONCLUSIONS Our results demonstrate clearly that microbial eukaryotes are a normal component of the mammalian microbiota, and that the communities they form, although not as diverse as bacterial communities in the gut, are nonetheless diverse and correlate with key features of their hosts. Interestingly, humans with nonwestern diets and lifestyles are comparable to other mammals in the microbial eukaryote diversity they harbor. In contrast, humans living Western lifestyles have very low diversity of gut microbial eukaryotes. Whether these differences are due to diet, hygiene, level of contact with animals, host genetics, or other lifestyle factors that differ among the populations surveyed remains a topic for further work: of particular interest is whether the loss of the microbial eukaryote diversity with which we as mammals have co-evolved is a trigger for the autoimmune diseases that are far more prevalent in Western populations. One intriguing difference between eukaryotic and bacterial communities is that eukaryotic communities in the vertebrate gut are heterogeneous across samples, whereas the dominant bacterial lineages are consistently recovered across individuals and across species.
The patchy distribution of eukaryotes across individuals, combined with the host-species specificity of resident eukaryotic microbes, suggests that it will be difficult to clearly identify the healthy, or "normal," core eukaryotic microbiota of the human gut, just as it is difficult to identify a core gut bacterial community shared across humans (Li et al., 2013). Consequently, future studies of microbial eukaryote communities should focus more on identifying variation that is associated with different phenotypic states, including disease states. Finally, comparison of the mammalian gut to other environments shows that fewer deep lineages are associated with the gut than with free-living communities, and alpha diversity is lower. This pattern resembles the pattern found in bacteria in the same environments. However, eukaryotes show less diversification within lineages at shallow taxonomic levels than observed for bacteria, suggesting that although the big picture of high-level diversification is the same across these taxa, the fine-grained patterns may differ. With the improved tools for eukaryotic surveys presented here, we are now poised to characterize microbial eukaryotes across environments on a large scale in projects such as the Earth Microbiome Project, providing a much richer understanding of the relationships between pathogens, commensals, and beneficial members of our microbial eukaryote community.
2016-06-17T05:18:14.202Z
2014-06-19T00:00:00.000
{ "year": 2014, "sha1": "7a5000493340a8107a630920b78f738b64fb573e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2014.00298/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a5000493340a8107a630920b78f738b64fb573e", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
259552398
pes2o/s2orc
v3-fos-license
A Historical Overview of Artificial Intelligence in China Artificial intelligence (AI) refers to the interdisciplinary field of study that involves the development of computer systems and machines capable of performing tasks that typically require human intelligence, such as learning and problem-solving. The significant scientific and technological progress throughout human history can be attributed to the intellect, aspirations, and endeavors of humanity. The development of artificial intelligence (AI) and intelligent machines by humans can be traced back to a period as early as 3,000 years ago. It is not difficult to find historical materials about AI experiments in ancient China. According to Lie Zi: Tang Wen, a Taoist classic, a craftsman named Shi Yan created a robot out of leather, resin, and cinnabar that could sing and dance and communicate with facial expressions just like a real person and dedicated it to King Mu of the Western Zhou Dynasty (1046 BC-771 BC). This is the first record of a humanoid apparatus in ancient China (1). Since its establishment as an academic field in the 1950s, AI has undergone over 60 years of theory, technology, and application development. In comparison to its global standing, academic research on AI in China began relatively late and has endured a tumultuous voyage marked by repression and criticism as well as accolades and success (2). The Silent Stage (1950s-1970s) During this period, China held a critical and negative stance towards AI, likely influenced by the scientific and technological advancements of the former Soviet Union. During its early stages, AI was commonly perceived as a form of "pseudoscience" and "revisionism" (3), leading to a lack of research in this field, with the exception of Xuesen Qian's contributions. Norbert Wiener's renowned publication, Cybernetics, or Control and Communication in the Animal and the Machine, was released in 1948. The fifth chapter of the publication, "Computing Machine and the Nervous System," drew a comparison between computers and the human nervous system, which is widely regarded as the forerunner of research in the field of AI (4). This significant piece of literature was introduced to China by Xuesen Qian in 1956. In 1958, Qian released the Chinese edition of his work entitled "Engineering Cybernetics". In the preface, he expressed his belief that the notion of individuals being equipped with computers and machine intelligence to enhance their abilities and achieve superiority was becoming a reality (5). The Chinese Association of Automation (CAA) was founded in Beijing in June 1961, marking the inception of the first academic organization in China that was dedicated to artificial intelligence, with Qian serving as the chairman of its administrative committee (6). During this period, Qian, a highly skilled scientist with extensive expertise and a strategic outlook, launched a promising initiative for the advancement of artificial intelligence in China. The Initial Stage (Late 1970s-Early 21st Century) In China, a campaign of ideological liberation began in the late 1970s, and the country's scientific community revived. At the inauguration ceremony of the National Science Conference in 1978, Xiaoping Deng delivered a significant address on the topic of "science and technology as productive forces" and announced the national strategy of "prioritizing the modernization of science and technology."
The conference urged a large number of scientific and technological experts to free their minds and devote themselves to the advancement of the Chinese scientific cause (7). Subsequently, China's AI research could proceed in a legitimate manner, and certain fundamental work was carried out progressively. China started its AI-focused research initiatives as a result of the relentless advocacy of Qian and other experts and the constant backing of the government. The national scientific research program included initiatives like intelligent simulation (1978), intelligent computer systems (1986), intelligent robots (1986), intelligent information processing (1986), intelligent control (1993), intelligent automation (1993), etc. To study cutting-edge science and technology, including fields like AI and pattern recognition, many students were sent to industrialized nations (9). A number of significant AI research laboratories were set up (10). Such initiatives considerably boosted AI research in China and helped China catch up to industrialized nations in terms of AI technology. The Department of Computer Science and Technology at Tsinghua University adopted the name Department of Automatic Control in 1978 and included the research fields of AI and intelligent control (11). The first Chinese monograph on AI with independent intellectual property rights was released by Tsinghua University Press in 1987 with the title Artificial Intelligence and Its Applications (12). In 1988 and 1990, respectively, China published its first monographs on robotics (13) and intelligent control (14), launching a wide range of AI studies in the academic world. To train additional scientists for AI research, several domestic schools and universities offered a variety of AI courses in the years that followed. Associations like the Chinese Association for Artificial Intelligence (CAAI), the China Computer Federation (CCF) Artificial Intelligence and Pattern Recognition Council, and the Chinese Association of Automation Pattern Recognition and Machine Intelligence Council, as well as journals like the Journal of Artificial Intelligence, Pattern Recognition and Artificial Intelligence, and CAAI Transactions on Intelligence, were established to foster academic exchanges in the field of AI. The initial phase of AI research was dominated by topics such as theorem proving, natural Chinese language comprehen-sion, biological control, pattern recognition, robots, and expert systems, and a number of rudimentary results were obtained. For example, in 1978, Wenjun Wu proposed a new method for discovering and proving geometric theorems using machines, which he dubbed "geometric theorem machine proofing," for which he received the Major Science and Technology Achievement Award. Wu's theory on automated mathematics was published in Automated Theorem Proving: After 25 Years in 1984 and has been extensively disseminated internationally as Wu's Method (16). At its annual conference in 1978 (17), the CAA presented research findings on optical character recognition, handwritten numeral recognition, biological cybernetics, and fuzzy sets. In 1984, the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) (18) published "Planning Collision-Free Paths for Robotic Arms Along Obstacles" by Chien, Zhang, and Zhang concerning the motion paths of robotic arms. In 1990, Bo Zhang was awarded the ICL European Artificial Intelligence Prize, signifying international recognition of China's achievements in AI fundamental research (19). 
In August 2006, the Chinese chess software "Chess King," designed by Northeastern University, won the first Chinese Chess Computer Gaming Championships, the first human-machine Chinese Chess competition sponsored by CAAI; the supercomputer "Sky Shuttle" used in this game defeated the Chinese chess master by a score of 11:9, representing a significant leap in China's AI technology (20). The Stage of Rapid Growth (Early 21st Century to the Present) In the context of the rapid development of information technologies in the 21st century, such as cloud computing, big data, the Internet, and the Internet of Things, computing platforms such as ubiquitous perception data and graphics processors have led to the dramatic advancement of AI technologies, especially deep neural networks, bridging the technological gap between AI theories and their application. AI technologies such as image classification, speech recognition, automated knowledge Q&A, human-machine chess, and automatic driving have achieved significant application advancements, ushering in a period of exponential growth in AI (21). In this phase, policy support and the influx of capital have accelerated China's AI development at an unprecedented rate. Significant technological advancements have been made in computer vision, natural language recognition, speech recognition, and other fields, and a number of globally renowned companies, including iFLYTEK, Face++, and Unisound, have arisen. iFLYTEK won first place in three speech recognition projects at the CHiME-4 competition, with an error rate as low as 2.24 percent, establishing China as the global leader in speech recognition and robot vision (22). In the interim, the Chinese AI research and development communities have endeavored to stay abreast of global research trends and have made significant advances in knowledge discovery, data mining, multi-agent systems, pattern recognition, intelligent robots, natural language processing, and automatic deduction. The gap between China and developed nations in fundamental research on AI is also closing (23). The Chinese government first identified the advancement of AI technology as one of the most crucial initiatives in 2015 while deploying national strategic R&D projects for intelligent manufacturing (24). It then published a number of papers in the years that followed, including "Internet plus Artificial Intelligence Three Year Action Plan" (25), "New-Generation Artificial Intelligence Development Plan" (26), and "New-Generation Artificial Intelligence Regulatory Principles: Developing Responsible Artificial Intelligence" (27). These government papers underlined that in the future, AI would offer prospects for social creation by serving as a new center of global competitiveness and a new engine of economic growth. They emphasized the significance of establishing AI as a national strategy and recommended guiding principles, broad deployment, key initiatives, and strategic objectives for academic study, application promotion, and commercial AI application (28). According to various survey reports, including China's New-Generation Artificial Intelligence Development Report 2020 (29), the Report on Artificial Intelligence Development 2020 (30), China's New-Generation Artificial Intelligence Technology Industry Report 2021 (31), and the Blue Book of World Artificial Intelligence Rule of Law 2021 (32), there has been a significant increase in databases, algorithm innovations, and computing power in China over the past decade. 
This increase has been attributed to the widespread adoption of digital production, consumption, and social operations, which has led to breakthroughs in both basic research and practical applications of AI. The emergence of open-source deep learning frameworks, tool sets, application software, and communities in 2019 is an illustrative case. Collaboration in AI innovation has accelerated between industry and academia, as well as among small, medium, and large enterprises. This has resulted in significant contributions to global AI development, positioning China as the second-largest contributor to the global AI open-source community, following the United States. As per the aforementioned reports, China has made noteworthy strides in the field of AI, with the following accomplishments being highlighted: Massive Amounts of Scientific Research Output with a Considerable Number of High-Quality Research Results According to China's New-Generation Artificial Intelligence Development Report 2020, Chinese researchers published 28,700 papers on AI in 2019, a 12.4% increase over the previous year; Tsinghua University, Peking University, and the Chinese Academy of Sciences ranked sixth, eighth, and tenth, respectively, among all the world's research institutions in terms of the total publications; among the top 100 highly cited articles on AI in the prior five years, 21 were from China (29). Innovative AI research in China has produced significant advances primarily in machine learning, neural network interpretability techniques, and heterogeneous fusion brain-inspired computing. The speech and image recognition technologies developed in China are among the most advanced in the world. In areas including adaptive machine learning, machine perception, comprehensive reasoning, hybrid intelligence, and swarm intelligence, China's AI research shows considerable promise. The world is paying close attention to Chinese information processing, intelligent surveillance, biometric recognition, industrial robots, service robots, and automatic driving, which are all approaching the phase of practical applications in China (33). Enhanced Innovative Capacities among Enterprises and Strong Links between Industry and Academia China is currently strengthening the leadership role of corporations in the technological innovation of AI. Anchor AI businesses in the nation have become crucial sources of investment in AI research and development, contributing increasingly to fundamental research and cutting-edge technological advances in AI (34). For instance, Alibaba DAMO Academy has developed a research and development program with an investment of over 100 billion yuan covering quantum computing, machine learning, basic algorithms, network security, visual computing, natural language processing, human-computer natural interaction, chip technology, sensor technology, embedded systems, and more. In the Alibaba economy, AI technology has permeated over 2,000 scenarios, such as online commerce, customer service, logistics, and automatic driving (35). A collaborative framework is being established to facilitate the advancement of AI through joint efforts between industry and academia. In partnership with Nanjing University, JD.com has founded a school of AI that concentrates on advanced research domains such as reinforcement learning and large-scale optimization (36).
Tencent has established an AI laboratory that focuses on research in the fields of computer vision, speech recognition, natural language processing, and machine learning. The CCF-Tencent Rhinoceros Bird Funding Program was launched through collaboration between Tencent and CCF. The program's objective is to advance AI research and enhance talent development by fostering a seamless partnership between industry and academia (37). Governments, universities, research institutions, and enterprises collaborate to create a thriving community for AI talent cultivation. Through coordination between industry, education, and research, a complementary AI education ecology has been developed in order to construct disciplines that are highly suited to social demands. Huawei and Nanjing University collaborated to establish the LAMDA Artificial Intelligence Joint Laboratory. Baidu and Beihang University partnered to create the first automated driving graduate program in China (38). Improved AI Industrial Structure and Prompt Application of New AI Technologies The AI industry in China has experienced a consistent improvement in its structure in recent times, which has resulted in the influx of substantial funding in domains such as intelligent vision and intelligent transportation. The domains of intelligent education, intelligent healthcare, and intelligent robots are experiencing swift expansion, with major high-tech corporations devising ambitious, enduring strategies for vertical sectors like intelligent transportation, intelligent healthcare, and intelligent commerce (39). China has made developing novel AI application scenarios a key strategy for accelerating AI industrial application and technological iteration. The Ministry of Science and Technology launched seven new-generation AI innovation pilot zones in 2019 and added ten new national platforms for developing the next generation of AI (40). There are many prospects for the invention and timely commercialization of AI technology thanks to illustrative broad application scenarios like the Beijing Winter Olympics, Beijing Daxing International Airport, and Hangzhou Brain, as well as specific scenarios in other industries (29). Conclusion Since its establishment as an academic study in 1956, AI has not always made progress on a global scale. It has both witnessed quick advancements and stagnation. With its development came debate and even criticism. There are still questions regarding whether machines can match human intelligence even today. Undoubtedly, a new wave of technical revolutions and economic changes is being driven by AI, which is fueled by earlier scientific and technological advances. By consistently producing new goods, services, and business models, it is fostering economic growth and altering the economy. AI will significantly accelerate overall economic and social development by altering human behavior in terms of living, working, and social interaction. It will also play a significant role in determining a country's competitiveness. China has the ability to produce enormous volumes of data and a wide market due to its large population and relatively full industrial structure, making it one of the world's key hubs for AI development. In order to increase its social productivity, national competitiveness, and all-encompassing national capabilities, China must seize the strategic chance to build new-generation AI.■
2023-07-11T16:33:56.257Z
2023-06-30T00:00:00.000
{ "year": 2023, "sha1": "15b9b46c4a103ba2dc431eb7712c58ea232c3577", "oa_license": "CCBYNC", "oa_url": "https://bonoi.org/index.php/si/article/download/1077/703", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "49c186af44c9dc89df41322cb03da760f608500f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
266456336
pes2o/s2orc
v3-fos-license
miRNA-133 and lncRNA-H19 expressions and their relation to serum levels of PKM2 and TGF-β in patients with systemic sclerosis Background and aims Systemic sclerosis (SSc) is a common autoimmune disorder involving the skin, blood vessels, and internal organs with an elusive pathophysiology. SSc is believed to be a genetically predisposed, T-cell-mediated autoimmune disease. miRNAs and lncRNAs are thought to be involved in the etiology of several immunological diseases, including SSc. This work aimed to assess the expression of miRNA-133 and lncRNA-H19 and the serum levels of PKM2 and TGF-β in SSc in comparison to controls, and their relationship to the clinical course and severity of the disease. Patients and methods Fifty patients with SSc and 40 healthy age- and sex-matched controls were included in this study. miRNA-133 and H19 expression levels were detected using quantitative RT-PCR, while serum levels of PKM2 and TGF-β were measured using ELISA techniques. Patients' clinical data and treatments received were extracted and correlated with the investigated biomarkers. Results Our results showed that miRNA-133 was significantly downregulated in SSc patients in comparison to controls (Mean ± SD of SSc = 0.61 ± 0.22, Mean ± SD of HC = 0.97 ± 0.007, p = 0.003). However, there was significant upregulation of the serum expressions of all other tested biomarkers in SSc patients in comparison to controls: H19 (Mean ± SD of SSc = 10.37 ± 3.13, Mean ± SD of HC = 1.01 ± 0.01, p = 0.0001), PKM2 (Mean ± SD of SSc = 28.0 ± 4.84, Mean ± SD of HC = 16.19 ± 1.32, p = 0.005) and TGF-β (Mean ± SD of SSc = 150.8 ± 6.36, Mean ± SD of HC = 23.83 ± 0.93, p = 0.0001). We also detected several correlations between serum levels of the investigated biomarkers in patients with SSc. Conclusion Along with TGF-β, our results show that miRNA-133, H19, and PKM2 seem to be potential contributors to SSc pathogenesis and could be promising biomarkers in the diagnosis of SSc patients. The lncRNA-H19 correlations with TGF-β, miRNA-133, and PKM2 suggest a possible influential effect of this RNA molecule on the pathogenesis of SSc. Introduction Systemic sclerosis (SSc), also known as scleroderma, is an autoimmune disease that affects the skin, blood vessels, and internal organs, ending in vasculopathy and fibrosis. Along with the characteristic skin involvement, other internal organs that might be compromised include the lungs, digestive system, heart, and kidneys [1][2][3]. The precise etiology of SSc is still unknown [4]; however, an autoimmune profile involving both the cellular and humoral immune systems, with the development of autoantibodies, vascular alterations, and fibrosis, is characteristic of SSc [5,6]. In SSc, the initial vascular insult is endothelial cell damage caused by nonspecific serum toxic substances or T-cell-derived proteolytic enzymes, endothelial cell-directed autoantibodies, vasculotropic viruses, inflammatory cytokines, oxidative stress, and environmental stress. Endothelial cell dysfunction results in increased expression of endothelial adhesion molecules, altered production of vasoactive mediators, platelet activation, and activation of fibrinolytic pathways. Activated platelets release thromboxane A2, platelet-derived growth factor (PDGF), and TGF-β, which promote vasoconstriction and contribute to fibroblast activation and myofibroblast trans-differentiation, promoting fibrosis and deposition of extracellular matrix (ECM) components such as collagen and proteoglycans [7,8].
TGF-β stimulates fibroblast terminal differentiation into myofibroblasts, which release more ECM components and TGF-β, resulting in tissue contraction [9].In mesenchymal cells, TGF-β is a potent inducer of glycolysis with the synthesis of pyruvate from glucose.A range of cellular enzymes is involved in this process, the most significant of which is pyruvate kinase (PK) [10,11]. At least 30 % of protein-coding genes are synchronized by micro-RNAs (miRNAs) [15].miRNAs have been shown to target PTB1 with members like miRNA-133 that are muscle-specific [16].Downregulation of miRNA-133 is responsible for the overexpression of TGF-β, which led to an increase in TGF-β signaling in cardiac fibroblasts [17].These findings imply that miRNA-133 regulation may be involved in the pathophysiology of SSc. Long non-coding RNAs (lncRNAs), RNA transcripts that do not code for proteins, are longer than 200 nucleotides.LncRNAs have been linked to a variety of disorders, including cancer, Alzheimer's disease, cardiovascular disease, diabetes mellitus, RA, and SLE [18,19].lncRNA-H19, which is found on chromosome 7 in mice and chromosome 11 in humans, is an imprinted gene that is expressed solely from the maternal allele [20,21].H19 has been shown to interact with a range of proteins and miRNAs via decoy, scaffold, and guide modes of action to control genes involved in cell proliferation, migration, differentiation, and tumorigenesis [22].Previous research on cancer patients revealed that H19 stimulates and activates tumor-specific PKM2, which is required for the miRNA-675-mediated enhancement of liver cancer cell proliferation and gene expression during carcinogenesis [23].This shows that the modulation of H19 expression levels is critical in the regulation of PKM2. On account of the intertwined relationship between miRNA-133, lncRNA-H19, PKM2, and TGF-β and their involvement in various immune and fibrotic processes, we aimed to investigate the expression and correlation between these biomarkers in patients with SSc and their relationship with the clinical course and diversity of the disease. Study design and participants This case-control study incorporated 90 subjects; 50 SSc patients diagnosed according to the criteria of the American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) [24] and 40 age-and sex-matched healthy controls.All patients were recruited from the Dermatology outpatient clinic, Cairo University Hospital.A written informed consent was obtained from all included subjects before taking part in the study.Ethical committee approval was obtained from the local ethical committee, Cairo hospitals, Cairo University "Ethical Code MD-203-2021".Any patient with associated other connective tissue disorders was excluded from this study.Patients with pregnancy, lactation, hormonal therapy, and any history of malignancy or concurrent infections were also excluded. 
Patients assessment Dermatological examination and evaluation of the extent and severity of SSc using the "Subcommittee for Scleroderma Criteria of the American Rheumatism Association Diagnostic and Therapeutic Criteria Committee (1980)" [25] were done for all patients. The skin thickness of our patient group was assessed via the mRSS based on palpation at 17 body sites. A score of 0 indicates normal skin thickness, 1 mild skin thickness, 2 moderate skin thickness, and 3 severe skin thickness. The score is calculated by summing the ratings from all 17 areas (range 0-51). We hypothesized 3 grades for the mRSS: low grade (0-5), moderate (6-15), and high, with an inability to make skin folds between two fingers. Blood sample collection A 5 ml venous blood sample from each participant was collected in tubes. Samples were permitted to clot for 15 min and then centrifuged at 3000×g for 10 min. The serum samples were separated and stored at −20 °C until the time of use for molecular and ELISA techniques. RNA extraction and complementary DNAs (cDNAs) synthesis Total RNA was extracted by the miRNeasy mini kit (Qiagen, Valencia, CA, USA) according to the manufacturer's instructions. Reverse transcription was carried out on extracted RNA using the miScript® II RT kit (Qiagen, Germany, Cat. No. 218161) according to the manufacturer's instructions. The real-time thermocycler Rotor gene Q System (Qiagen, USA) was programmed as follows: heating at 95 °C for 10 min, followed by 45 cycles of denaturation at 95 °C for 15 s, then annealing and extension at 60 °C for 60 s. The relative gene expression to internal controls (SNORD-68 and GAPDH) was calculated using the ΔCt method. Relative expression for both miRNA-133 and H19 was calculated using the 2^−ΔΔCt method. Serum PKM2 and TGF-β assay Quantitative determination of PKM2 concentrations in serum was done with a Human Tumour Type M2 Pyruvate Kinase ELISA kit (Cat. No. E2125Hu) provided by Bioassay Technology Laboratory, China, according to the manufacturer's instructions. Quantitative determination of TGF-β concentrations in serum was done with an Invitrogen Multispecies TGF-β kit (Cat. No. KACL1688/KAC1689) provided by Biosource, California, according to the manufacturer's instructions. Statistical analysis The statistical data were analyzed using the Statistical Package for the Social Sciences (SPSS) version 25 on Windows 8.1. All statistical data are displayed as means ± standard deviation (SD). The post-hoc comparison test was used for pairwise comparisons. Pearson's correlation was used to evaluate the relationships between the variables under study. Receiver Operating Characteristic (ROC) curves were employed to assess the miRNA-133, H19, PKM2, and TGF-β diagnostic performances. For interpretation of results, significance was adopted at a P-value ≤0.05. Demographic and clinical data of the study subjects Table 1 provides the demographic characteristics of the study participants. There were no significant differences between SSc patients and the control group regarding age and sex (p = 0.138 and 0.39, respectively). The clinical characteristics of SSc patients are listed in Table 2. The relation of expression levels of studied parameters to the clinical and demographic data The mean serum levels of TGF-β, miRNA-133, H19, and PKM2 in SSc patients concerning the demographic data and clinical characteristics are listed in Table S1.
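The relative-expression calculation described in the methods above (ΔCt against the internal controls SNORD-68/GAPDH, then 2^−ΔΔCt) can be illustrated with the following sketch; the Ct values shown are made-up placeholders, not data from this study.

```python
# Hedged illustration of the 2^-ΔΔCt calculation described in the methods above.
def relative_expression(ct_target, ct_reference, ct_target_control, ct_reference_control):
    """Fold change of a target gene in a patient sample relative to the control group,
    normalized to an internal reference gene (2^-ΔΔCt)."""
    delta_ct_sample = ct_target - ct_reference                    # ΔCt in the SSc sample
    delta_ct_control = ct_target_control - ct_reference_control   # ΔCt in healthy controls
    delta_delta_ct = delta_ct_sample - delta_ct_control           # ΔΔCt
    return 2.0 ** (-delta_delta_ct)

# Example with placeholder Ct values: a fold change >1 indicates upregulation vs. controls.
print(relative_expression(ct_target=24.1, ct_reference=18.0,
                          ct_target_control=27.5, ct_reference_control=18.2))
```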
Regarding the modified Rodnan skin score (mRSS) of SSc patients, there were no significant differences in the studied variables between the grades of mRSS (p > 0.05) (Table S1). Correlation between studied parameters We detected a significant positive correlation between H19 and both miRNA-133 (r = 0.574, p = 0.003) and PKM2 (r = 0.429, p = 0.032), while H19 showed a significant negative correlation with TGF-β (r = −0.502, p = 0.011). We also examined the correlation between the mRSS and the studied parameters and found no significant correlation (Table 4). Table 5 and Fig. 2 demonstrate the sensitivity and specificity analyses of our parameters using ROC curves, showing their diagnostic value. The ROC curve of miRNA-133 has an 84 % sensitivity and a 98 % specificity; the cut-off value of miRNA-133 is 0.56, with an area under the curve of 0.84. The cut-off value of H19 is 0.652, with an area under the curve of 7.67 and a sensitivity/specificity of 64 % and 97.5 %, respectively. Moreover, the cut-off value of PKM2 is 25.69 ng/ml with a sensitivity/specificity of 99 % and 85 %, respectively. The cut-off value of TGF-β is 156 pg/ml with a sensitivity of 100 % and a specificity of 89.1 %. Discussion Systemic sclerosis (SSc) is a heterogeneous systemic autoimmune fibrotic disease that is likely to entail the influence of environmental variables on genetically primed individuals [4,26-28], with epigenetic factors believed to be potential contributors to the disease's vast range of manifestations [4]. In the current work, we report a disturbance in the expression of two important epigenetic regulators of fibrosis and immune functions, namely miRNA-133 and lncRNA-H19, as well as significant positive correlations between both biomarkers in the sera of patients with SSc in comparison to controls (r = 0.574, p = 0.003). We also detected abnormal associations of lncRNA-H19 with TGF-β and PKM2, which are both crucial proteins involved in the fibrotic process seen in SSc. TGF-β is a major activator and propagator of myofibroblasts and other mesenchymal cell types and is believed to have a vital pathogenic role in fibrogenesis [29]. TGF-β's target genes include miRNAs, among which miRNA-133 has been shown to be differentially expressed in SSc skin samples [27,30]. MiRNA-133 modulates the expression of inflammatory cells and several known fibrogenic cytokines, including TGF-β, IL-4, and IL-6 [31]. Although miRNA-133 inhibited TGF-β in cardiac tissue [32], we could not find a significant correlation between them in the sera of SSc patients (r = −0.127, p = 0.544); nonetheless, a larger sample size may be needed to verify this correlation. Taking into consideration the major role of TGF-β in SSc development, its upregulation with advancing age in our patient group, but not among controls, may relate to the accelerated onset of internal organ involvement among elderly SSc patients [38]. In agreement with previous investigations, female SSc patients had significantly higher serum levels of TGF-β compared to male patients (p = 0.027), which may be attributed to estradiol's upregulation of TGF-β production [39,40]. The fact that digital scleroderma ulcerations respond well to therapy with divalproex sodium (valproate), which blocks TGF-β [41], can be related to the significantly higher serum expression levels of TGF-β in patients with such manifestations versus those without, as we observed in our patients suffering from digital ulcers and calcinosis.
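The ROC cut-off values reported in the Results above were derived with SPSS; a rough open-source equivalent (an assumption, not the authors' workflow) is to compute a ROC curve and select the threshold that maximizes Youden's J (sensitivity + specificity − 1), as sketched below with illustrative inputs.

```python
# Hedged sketch of ROC-based cut-off selection; `values` and `is_ssc` are illustrative inputs.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def best_cutoff(values, is_ssc, higher_in_patients=True):
    # miRNA-133 is lower in patients, so its ROC is built on the negated values
    scores = np.asarray(values, dtype=float)
    if not higher_in_patients:
        scores = -scores
    fpr, tpr, thresholds = roc_curve(is_ssc, scores)
    j = tpr - fpr                          # Youden's J at each candidate threshold
    k = int(np.argmax(j))
    cutoff = thresholds[k] if higher_in_patients else -thresholds[k]
    return {"auc": roc_auc_score(is_ssc, scores),
            "cutoff": cutoff,
            "sensitivity": tpr[k],
            "specificity": 1 - fpr[k]}
```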
As reported previously [42][43][44], the present study also demonstrates that SSc patients with DM, HTN, ILD, arthritis, dry eye, dry mouth, and vasculitis have significantly higher serum expression levels of TGF-β compared to patients without these associations. Such co-morbidities can be related to the diverse roles of TGF-β in the development of insulin resistance and obesity; its stimulatory effect on the expression of ET-1 and on vascular stiffness from collagen; its inhibitory role on the production of nitric oxide; and its role in joint infiltration with polymorphonuclear leukocytes and lymphocytes [45][46][47][48]. TGF-β can also stimulate the differentiation of Tregs towards IL-17-producing cells, which may contribute to the ANCA-associated vasculitis (AAV) of SSc [49]. On the other hand, miRNA-133 diminishes the expression of collagen I and collagen III, inhibits TGF-β, and suppresses apoptosis, fibrosis, and inflammation in cardiac tissue [32,50]. As all these processes are also involved in the pathogenesis of SSc, miRNA-133 may possibly play an important role in controlling the inflammatory process and collagen formation in patients with SSc. Indeed, we observed that pitting scars, a result of excess fibrous tissue formation and one of the minor criteria commonly seen in SSc [32,51], were accompanied by significantly lower serum levels of miRNA-133 compared to patients without scars (p = 0.009). This was previously observed with miRNA-196 as well [34]. In the current work, patients with DM, arrhythmias, and vasculitis had significantly lower serum levels of miRNA-133 compared to patients without these findings. The lower miRNA-133 in patients with DM may be explained by insulin's inhibitory effect on miRNA-133 via sterol regulatory element binding protein 1c and myocyte enhancer factor 2C [52]. In concordance with our results, Abdellatif [53] demonstrated that the downregulation of miRNA-133 is a prerequisite for the development of apoptosis, fibrosis, and prolongation of the QT interval and hence arrhythmias. Interestingly, SSc patients treated with MMF had their miRNA-133 expression less downregulated than patients on other treatments, signifying the beneficial role of MMF in the treatment of SSc. It would be intriguing to investigate the beneficial effect of MMF vs. other SSc medications in the management of the co-morbidities we report to be associated with lower miRNA-133 levels. The innate immunity-related lncRNA-H19 is involved in the biological activities of the skin via its role in the stimulation of the Wnt/β-catenin signaling pathway [54]. H19 aberrantly modulates the proliferation and differentiation of various fibroblasts, particularly dermal fibroblasts, RA synovial fibroblasts, and fibroblast-like cells, such as pulmonary artery smooth muscle cells [55]. Reminiscent of its role in SLE, RA, and osteoarthritis (OA) [56,57], this implies that H19 may have a fundamental role in SSc pathogenesis as well, through induction of ECM differentiation and myofibroblast production. Thus, it was not surprising to detect its upregulation in our SSc patients compared to the control group (p = 0.0001). Interestingly, as DNA methylation can affect the expression of lncRNAs, the H19 upregulation we observed may be related to hypomethylation of the promoter of the H19 gene or to methylation of CpG islands in promoter regions, which is a common mechanism for the deactivation of some genes, such as HLA-DRB1, that is significantly present in patients with SSc [58,59].
Significantly higher serum levels of H19 were detected in patients under 40 (p = 0.002). In the same manner, significantly higher serum expression levels of H19 were observed among patients who developed SSc before 14 years of age (p = 0.018). This was also seen in mouse models, where younger mice had a higher average H19 expression than older mice [60]. Further investigation of these findings seems relevant, particularly given that younger SSc patients (≤30 years) frequently present with more disseminated disease [61,62]. Skin manifestations of SSc, including digital ulcers, calcinosis, and telangiectasia, as well as respiratory manifestations, were associated with significantly higher serum levels of H19 compared to patients without these findings. H19's immune-regulatory and aging-protective functions may explain these associations [63]. Nonetheless, contradictory observations regarding H19 levels and both ILD and arthritis have previously been reported by Wan et al. [64] and Fu et al. [65]. Such differences may be related to differences in the selection/recruitment of patients, race, other epigenetic factors, or sample size. The present study detected a positive correlation between H19 and PKM2 (r = 0.429, p = 0.032), consistent with previous studies in liver and ovarian cancers, where it was hypothesized that H19 may induce and activate tumor-specific PKM2, which is essential for gene expression during tumorigenesis [23,66]. On the contrary, Chen et al. [67] reported that H19 overexpression reduces PKM2 protein levels through ubiquitin-mediated degradation. Similarly, there are conflicting results regarding the relation between H19 and TGF-β. In our work and that of Zhang et al. [68], there is a negative correlation between these variables, with the latter authors stating that TGF-β signaling inhibits expression of H19 in liver cancer. In contrast, Wang et al. [69] demonstrated a positive correlation between TGF-β and H19. These contradictions deserve further research in different clinical and immunological settings. While aerobic glycolysis directly influences how lymphocytes, such as T, B, and NK cells, differentiate and function [70], this process encompasses a sequence of cellular enzymes, the most important of which is pyruvate kinase (PK) [10]. In the majority of cell types, PK can be found as the M1 or M2 isoform or even both. Alternative splicing of the PK muscle (PKM) transcript creates the PKM1 or PKM2 isoform, which includes exon 9 or exon 10, respectively [12,13]. We observed that the expression level of PKM2 was significantly upregulated in serum samples of SSc patients compared to the control group (p = 0.005). In agreement with our study, cultured dermal fibroblasts of SSc patients in a glucose-free medium demonstrate a higher glycolytic metabolism compared with normal fibroblasts [71]. PKM2 was also shown to facilitate fibrosis progression in murine models [72]. Such observations suggest its possible participation in SSc pathogenesis. PKM2 promotes Th17 cell differentiation and autoimmune inflammation through STAT3 activation, with the production of further cytokines and chemokines, such as IL-6, IL-8, and monocyte chemoattractant protein-1 (MCP-1) [73,74], cytokines whose role in SSc pathogenesis has been addressed previously [31]. This Th17/IL-17-promoting effect of PKM2 may also be why it tended to be more upregulated in patients with dry eyes, reminiscent of Sjogren's Syndrome (SS), where an increased Th17 quantity and IL-17 expression is evident [75].
Although PKM2 may function as a target for miRNA-133 and the latter suppresses glycolysis within lung cancer cells by targeting PKM2 [76], our results did not show such a correlation. PKM2 was significantly higher in female patients compared to males (p = 0.014), possibly due to estrogen's ability to activate PKM2 and further increase its expression [77]. On the other hand, it was unexpected that, although PKM2-driven neutrophil activation may be responsible for bronchiectasis [78], patients with bronchiectasis in our cohort had significantly lower PKM2 serum expression levels compared to patients without bronchiectasis. Until the role of PKM2 in the cardiomyocyte cell cycle and cardiac regeneration is studied, we hypothesize that the positive correlation we detected between H19 and PKM2 may explain why patients with higher PKM2 showed an increased incidence of pericardial effusion and arrhythmia. Conclusion In brief, our study shows upregulation of key molecules (TGF-β, H19, and PKM2) involved in the fibrotic and immunological processes responsible for the development of SSc and its associated abnormalities. We also demonstrate the downregulation of the anti-fibrogenic/anti-inflammatory miRNA-133, which may possibly be involved in the various pathogenic mechanisms associated with SSc. The targeted pharmacological modulation of these molecules is an interesting field of research and innovation and may prove to be a beneficial option for patients with SSc. Furthermore, in this patient cohort, MMF seemed to be associated with miRNA-133 upregulation, and methotrexate with lower H19 expression, in comparison to other examined treatments. We suggest further studies exploring these biomarkers and the pharmacologic mechanisms through which pulmonary and extra-pulmonary symptoms in SSc patients show better improvement when treated with a combination of MMF and low-dose methotrexate, as described by Gonzalez-Nieto et al. [79]. Additional experimental studies are needed to illuminate the key roles of miRNA-133, H19, PKM2, and TGF-β in the pathogenesis of autoimmune diseases. The lack of tissue verification of the differentially expressed miRNA-133 and H19 and of the relevant pathways is the study's principal limitation. The small number of SSc patients available for quantitative PCR verification is another restriction. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. Boxplot diagram for (A) miRNA-133, (B) H19, (C) PKM2, and (D) TGF-β in SSc patients and controls.
Table 1. Demographic data (age and sex) of SSc patients and control group. Abbreviations: SSc, systemic sclerosis. The T-test for equality of means was used for age and the chi-square test for sex. p-value >0.05: not significant (NS); p-value <0.05: significant (S); p-value ≤0.001: highly significant (HS).
Table 2. Clinical characteristics of SSc patients included in the study.
Table 3. Serum levels of studied parameters in patients and controls.
Traumatic left common carotid artery thrombosis with ischemic brain injury: A case report

Introduction and importance: Penetrating neck injuries (PNIs) are common and are associated with arterial injuries in 10–25 % of cases, with the carotid artery involved twice as frequently as the vertebral arteries. Carotid artery injury constitutes about 22 % of all cervical vascular injuries.

Case presentation: We present a case of a 44-year-old male who sustained a penetrating neck injury in a motor traffic crash. He presented with monoplegia of his right upper limb and an open wound on the left side of his neck which was not actively bleeding; surgical debridement was therefore done and the wound was sutured. CT angiography and a brain CT scan led to a conclusion of left common carotid thrombosis secondary to penetrating neck trauma with ischemic brain injury. The patient was successfully managed conservatively.

Clinical discussion: The general mortality rate in PNI with associated cervical vascular injury is approximately 66 %. Artery dissection occurs when the intima tears, causing an intramural hematoma that leads to narrowing or occlusion. CT angiography is the best and fastest modality to assess these injuries, and management depends on the clinical status of the patient.

Conclusion: The neck is vulnerable to external trauma because it is not protected by the skeleton. It contains vital structures such as the trachea, esophagus, blood vessels, and nervous system organs. Vascular injuries can be life-threatening, hence the need for prompt clinical assessment and investigation.

Introduction

The neck is a complex region that contains many vital structures; therefore, injury to the neck poses a great risk of life-threatening events. Penetrating neck injury (PNI) is described as disruption of the platysma muscle, represents 5–10 % of all trauma cases, and is associated with a mortality rate of up to 10 % [1,2]. Arterial injuries occur in about 25 % of PNI, whereby carotid artery involvement accounts for approximately 80 % and vertebral artery involvement for 43 %. Neurological structures at risk of damage include the spinal cord, cranial nerves (VII–XII), the sympathetic chain, and the brachial plexus [2]. Thrombosis of the carotid artery and/or cerebral artery due to trauma to the neck and/or head is very rare without pre-existing pathology. It commonly results from direct, penetrating, or blunt trauma to the neck [3]. Herein we present a case with an uncommon presentation of brain ischemia arising from the common carotid artery after penetrating neck trauma.

This work has been reported in line with the SCARE 2020 criteria [4].

Case presentation

A 44-year-old male was referred to our tertiary centre with a history of neck and chest injury, having been involved in a motor traffic accident 6 h prior as a motorcyclist who was knocked down by a tricycle (tuktuk). He sustained a cut wound from the shattered tuktuk windshield to the left side of the neck and chest, which was followed by massive bleeding from the wound site and loss of consciousness for 2 h; upon regaining consciousness, the patient was noted to have a loss of power in his right upper limb. The patient was initially attended at a peripheral health care facility, where resuscitation and surgical debridement of the wound were done before referral to our centre for further investigations and management.
On presentation, the patient was conscious with a Glasgow coma score of 15/15, a blood pressure of 130/70 mmHg, a pulse rate of 80 beats/min, and a respiratory rate of 14 breaths/min. On inspection of the wounds, he had a sutured and dressed wound on the chest and a 3 by 4 cm partially sutured, well-dressed wound on the left side of his neck (posterior triangle) (Fig. 1). He was also noted to have right upper limb monoplegia with power grade 0, with reduced tone and reflexes but intact sensation. However, the respiratory and cardiovascular systems were essentially normal.

Doppler ultrasound of the neck was obtained and noted an inability to visualize the neck vessels on the left side. Vascular injury of the neck vessels was suspected, so CT angiography was done; it revealed a left common carotid artery (CCA) filling defect from the aortic arch to the area of bifurcation, highly suggestive of thrombus in the CCA (Fig. 2). A brain CT scan showed left-sided temporal lobe ischemia, most marked around the middle cerebral artery territory, in keeping with ischemic brain injury (Fig. 3), and the chest CT scan was normal. His labs revealed a normal CBC with a hemoglobin of 12.8 g/dl, an INR of 1.14, a serum creatinine of 68 μmol/l, and normal liver enzymes.

The diagnosis of left common carotid thrombosis secondary to penetrating neck trauma with ischemic brain injury was reached, and due to the lack of vascular surgeons at our facility, surgical toileting of the wound was done and the patient was referred to a higher centre with vascular surgeons for further management. Upon follow-up, the patient was counseled and instructed on continuing physiotherapy at home and was discharged on oral aspirin.

Discussion

Blunt and penetrating injury to the neck may result in carotid artery injury, including vessel laceration, dissection, occlusion, fistula, or pseudoaneurysm formation. Traumatic injury to the internal carotid artery is associated with a mortality of up to 40 %, and 45 % of survivors have a residual neurological deficit [5]. Cervical vascular injuries are classified as blunt (BCVI) or penetrating (PCVI). Post-traumatic carotid artery dissections tend to occur more frequently than vertebral artery dissections, as a result of motor vehicle accidents, falls, suicide attempts, and assaults. The main mechanism involves hyperextension and contralateral rotation of the head, resulting in stretching of the internal carotid artery [6]. Any vascular injury causes disruption of the intima, and the exposed subendothelial collagen promotes platelet aggregation, leading to thrombus formation that can lead to embolization, which is the proposed pathophysiology in the case presented herein [6].

An uncommon cause of an ischemic stroke is Eagle syndrome, whereby the internal carotid artery is compressed by an abnormally elongated styloid process. The authors of one such report described a similar presentation, where their patient also presented with middle cerebral artery territory infarction, as in the index case [7]. The cervical arteries are also at risk of dissection in association with mechanical events caused by surrounding normal bony structures and processes [7]. This highlights that there are conditions in which anatomical variations should be considered as the cause of stroke, especially in young individuals.
Investigation modalities include Doppler ultrasonography, CT angiography, magnetic resonance angiography (MRA), or four-vessel DSA. Most clinicians prefer CTA as the preliminary evaluation, as was done in this case [8]. The advantage DSA offers is the possibility of intervention or endovascular treatment options. However, due to its scarcity and a 1 % complication rate related to its invasiveness, it is not widely used [8].

PNI is considered difficult to manage due to the complex neck anatomy; therefore, a well-prepared trauma team is important for the outcome [9]. Management of PNI has been continuously reviewed and revised, and the zones of the neck have been described to guide clinicians to account for the location of the injury, leading to selective surgical exploration [10]. In the past, routine neck exploration was the common practice, leading to unnecessary operations and iatrogenic injuries; however, experiences from high-volume centres in developing countries recommend selective non-operative management (SNOM) as the current standard of care [9].

Hemodynamically stable patients with zone I and III injuries often require further evaluation by arteriography, CT scan, or MRI. Those with zone II injuries who are clinically unstable (shock, external hemorrhage, expanding hematoma, acute neurological symptoms) need surgical exploration [10]. More recently, however, some advise a "no zone" approach in hemodynamically stable patients with no obvious signs and symptoms of vascular injury. In the index case the patient was stable with no "hard signs", hence surgical toileting of the wound was done.

In cases of life-threatening hemorrhage with shock, ligation of the bleeding vessels remains the preferable option. Collateral pathways are well documented, such as vertebrocarotid and vertebrosubclavian collaterals. Babu et al., in their report, describe that due to this collateral circulation their patient did not develop neurological deficits despite ligation of the right common carotid and right subclavian arteries [11].

Conclusion

Vascular injuries to the neck can cause life-threatening hemorrhage and associated neurological complications, and therefore require profound surgical expertise in their management. Management aims to limit the progression of the vessel injury, reduce the occurrence of ischemic events, and improve the overall neurological outcome and survival of the patient. Guidelines are changing based on recent evidence-based data; nonetheless, clinicians should target therapy based on their patient's presenting signs and symptoms to achieve favorable outcomes.

Fig. 1. Clinical photograph showing the left-sided neck wound with no active bleeding.
Anti-Obesity Effects of Microalgae

In recent years, microalgae have attracted great interest for their potential applications in the nutraceutical and pharmaceutical industries as an interesting source of bioactive medicinal products and food ingredients with anti-oxidant, anti-inflammatory, anti-cancer, and anti-microbial properties. One potential application for bioactive microalgae compounds is obesity treatment. This review gathers together in vitro and in vivo studies which address the anti-obesity effects of microalgae extracts. The scientific literature supplies evidence supporting an anti-obesity effect of several microalgae: Euglena gracilis, Phaeodactylum tricornutum, Spirulina maxima, Spirulina platensis, or Nitzschia laevis. Regarding the mechanisms of action, microalgae can inhibit pre-adipocyte differentiation and reduce de novo lipogenesis and triglyceride (TG) assembly, thus limiting TG accumulation. Increased lipolysis and fatty acid oxidation can also be observed. Finally, microalgae can induce increased energy expenditure via thermogenesis activation in brown adipose tissue, and browning in white adipose tissue. Along with the reduction in body fat accumulation, other hallmarks of individuals with obesity, such as enhanced plasma lipid levels, insulin resistance, diabetes, or systemic low-grade inflammation, are also improved by microalgae treatment. Not only the anti-obesity effect of microalgae but also the improvement of several comorbidities, previously observed in preclinical studies, has been confirmed in clinical trials.

Introduction

Microalgae are prokaryotic or eukaryotic microscopic single-cell organisms, found in fresh water and marine systems. They produce approximately half of the atmospheric oxygen and use the greenhouse gas carbon dioxide to grow photo-autotrophically. Together with bacteria, microalgae provide energy for all the trophic levels above them. Although microalgae show great biodiversity, the ones most studied are Chlorella, Spirulina, Haematococcus, Dunaliella, and Scenedesmus. Microalgae produce a great variety of compounds, such as photosynthetic pigments (carotenoids and chlorophylls), sterols, polyunsaturated fatty acids, vitamins, minerals, fiber, polysaccharides, enzymes, peptides, and toxins. It is important to emphasize that the chemical composition of microalgae depends on the species and the cultivation conditions, such as temperature, illumination, pH, CO2 supply, salt, and nutrients [1,2]. They have attracted great interest in recent years due to their potential applications in the nutraceutical and pharmaceutical industries, and are a major source of bioactive medicinal products and food ingredients with anti-oxidant, anti-inflammatory, anti-cancer, and anti-microbial properties [2,3]. One of the potential application fields for the microalgae bioactive compounds is obesity, which has become a serious health problem due to its high prevalence, and because it is a major risk factor for a wide range of chronic diseases, including diabetes, cardiovascular diseases, and cancer [4][5][6]. Nowadays, approved new-generation anti-obesity medications offer a safe and tolerable adjunct to lifestyle interventions for the majority of individuals with obesity. Nevertheless, depending on patient tolerability to side effects, poor adherence or discontinuation can be treatment limitations. In fact, this situation reduces treatment benefits [7].
In this context, the present review gathers in vitro and in vivo studies addressed to analyze the anti-obesity effects of microalgae extracts, but not those where isolated microalgae compounds have been used.

In Vitro Studies

To date, several in vitro studies have been conducted to analyze the effects of microalgae extracts on adipogenesis and metabolic processes involved in triglyceride (TG) accumulation (Table 1; Figure 1).

Figure 1. Anti-obesity mechanisms of action described in in vitro studies (* ex vivo). ACC: acetyl-CoA carboxylase, AP2: fatty acid binding protein, C/EBP: CCAAT-enhancer-binding protein, CPT1: carnitine palmitoyltransferase 1, CREB: cAMP regulatory element-binding protein; DGAT-1: diacylglycerol O-acyltransferase, FABP4: fatty acid-binding protein 4, FAS: fatty acid synthase, LPAATβ: lysophosphatidic acid acyltransferase β, PGC-1α: peroxisome proliferator-activated receptor gamma co-activator 1α, PRDM16: PR domain-containing 16, PPARγ: peroxisome proliferator activated receptor γ, SREBP1c: sterol regulatory element-binding protein 1c. ↑: significant increase, ↓: significant decrease.

Sugimoto et al. [8] used an aqueous extract of Euglena gracilis Z (Euglena), a unicellular photosynthesizing green alga. Euglena contains vitamins, minerals, and unsaturated fatty acids, and accumulates crystalline β-1,3-glucan, a polysaccharide also known as paramylon, which is considered a functional dietary fiber. The authors obtained human adipose-derived stem cells (hASCs) from a non-diabetic female donor with a body mass index (BMI) of 26 kg/m2 and differentiated these cells into adipocytes for 7 days (days 0-7). Cells were then cultured for an additional 7-day maturing period (days 8-14). Cytotoxicity was not observed in cells treated with any of the Euglena extract dilutions tested (1.25%, 2.5%, 5%, 10%, 20%, or 40%).
When the lipid content of cells incubated with the extract at doses of 5%, 10%, or 20% was analyzed, the authors observed that Euglena extract reduced cellular TG content 17%, 44%, and 74%, in line with the increased concentration of the extract in the medium. In order to explore the mechanism underlying this effect, the authors studied the adipogenic pathway. Adipogenesis is a tightly regulated cellular differentiation process, which allows adipose tissue expansion. In this process, mesenchymal stem cells become pre-adipocytes and pre-adipocytes differentiate into mature adipocytes, the cells that are able to accumulate triglycerides into lipid droplets [12]. For this purpose, they measured gene and protein expressions of peroxisome proliferator-activated receptor γ (PPARγ) and CCAAT-enhancer-binding protein α (C/EBPα), the master regulators of adipocyte-differentiation. While the gene expression of Pparγ and C/ebpα were increased during adipocyte-differentiation in the control cells, these were repressed by 23% when 20% of Euglena extract was added to the medium. Protein amounts of PPARγ and C/EBPα were also significantly reduced, which is consistent with this result. The authors also observed an inhibition induced by the Euglena extract in gene expression of adipogenic markers expressed downstream in the adipocyte differentiation process, and regulated by PPARγ and C/EBPα, such as fatty acid binding protein (Ap2) (also known as fatty acid bonding protein 4, Fabp4) and lipoprotein lipase (Lpl). These results show that Euglena extract inhibits adipocyte-differentiation through suppression of master regulators involved in that metabolic pathway. Furthermore, since Pparγ expression is enhanced at the early phase of adipocyte-differentiation by two members of the C/EBP protein family, C/EBPβ and C/EBPδ, as well as by sterol regulatory element-binding transcription factor 1c (SREBP1c) and cAMP regulatory element-binding protein (CREB), their gene expression was also determined in hASCs. For this purpose, cells were cultured with or without Euglena extract (20%) during the first three days in the differentiation process. All these genes were downregulated when Euglena extract was present in the medium, showing that its inhibitory effect on adipocyte-differentiation was caused by repressing the early stage of adipocyte-differentiation. These observations were confirmed when adipogenesis was evaluated by determining Oil Red O from cells treated with 20% of extract during adipocyte-differentiation (days 0-7) and from those cells treated during adipocyte maturation (days [8][9][10][11][12][13][14]. Constant supplementation (day 0-14) with Euglena extract inhibited lipid accumulation by approximately 50% as compared to the control cells. When supplementation with the alga extract took place only during the adipocyte differentiation period (days 0-7) lipid accumulation was inhibited by 60%, but approximately 96% of the accumulated lipids remained in the cells treated with the extract on days 7-14. Therefore, the authors concluded that Euglena extract suppresses adipocyte-differentiation at the early stage, thus contributing to its anti-obesity effect. Another microalga studied by several authors is Spirulina maxima. It contains pigment proteins such as chlorophyll a and C-phycocyanin, which have been reported as possessing anti-oxidant, anti-inflammatory (both pigments), and anti-diabetic actions (C-phycocyanin). Seo et al. 
[9] performed an in vitro study to explore whether an ethanolic extract of this microalga also showed anti-obesity and adipocyte browning properties. For this purpose, 3T3-L1 pre-adipocytes and C3H10T1/2 cells, a cellular line functionally similar to mesenchymal stem cells, were treated during the differentiation period (0-8 days) with 50 or 100 µg/mL of the microalga extract. Previously no cytotoxicity had been confirmed at these concentrations. In 3T3-L1 pre-adipocytes, the addition of the extract to the differentiation medium decreased TG accumulation in a dose dependent manner. This effect was due to lower protein expression of the adipogenic regulators C/EBPα, PPARγ, and aP2, meaning that adipogenesis was inhibited. The same results were observed when the authors treated C3H10T1/2 cells with the extract at a concentration of 100 µg/mL. In addition, the authors explored lipogenesis, the metabolic process through which fatty acids are esterified with glycerol for their storage as triglycerides, and that allows adipocytes to increase their size. More specifically, the authors measured enzymes involved in de novo lipogenesis, the biosynthetic pathway by which acetyl-CoA is converted to fatty acids before they are esterified with glycerol to synthesize triglycerides. For that purpose, authors measured proteins such as acyl-CoA carboxylase (ACC) and fatty acid synthase (FAS), as well as markers involved in triglyceride assembly, such as lysophosphatidic acid acyltransferase β (LPAATβ), lipin-1 and diacylglycerol acyltransferase-1 (DGAT1). In this regard, they observed that treating 3T3-L1 pre-adipocytes during differentiation with the ethanolic extract led to reductions in the protein expressions of SREBP1, ACC, and FAS, as well as of LPAATβ, lipin-1, and DGAT1. As far as C3H10T1/2 cells are concerned, incubating cells with the extract obtained from Spirulina maxima during differentiation reduced protein expressions of the three latter lipogenic markers (LPAATβ, lipin-1, and DGAT1). The authors concluded that this extract significantly suppressed lipogenesis either in 3T3-L1 adipocytes or C3H10T1/2 cells. Finally, the authors reported browning effects ex vivo, in cells obtained from the stromal vascular fraction of mice fed a high-fat diet (HFD) supplemented with an ethanolic extract of the alga (150 or 450 mg/kg/day). Cells were differentiated into white adipocytes, and higher protein expression of PR domain containing 16 (PRDM16) and uncoupling protein 1 (UCP1) was detected in the adipose primary cells. The expression of peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1α) was upregulated only by the higher dose. Using Phaeodactylum tricornutum, a diatom microalga rich in eicosapentanoic acid (EPA) and the carotenoid fucoxanthin, Koo et al. [10] aimed to evaluate the anti-obesity effect of a commercially available extract, containing 3.5-6% fucoxanthin (w/w), on lipid accumulation in 3T3-L1 adipocytes. The cells were cultured during the differentiation period for six days with the Phaeodactylum tricornutum extract (100, 125, 200, 250, and 400 µg/mL), fucoxanthin (active principle; 10, 20, and 40 µg/mL) or curcumin as control (20 µg/mL). The microalga extract reduced adipogenesis in 3T3-L1 preadipocytes at a concentration of 250 µg/mL, and consequently reduced cellular lipid accumulation was observed. 
When looking at the mechanisms underlying this effect, the authors reported that, although no changes were observed in the protein expression of the adipogenic factor C/EBPα, Phaeodactylum tricornutum extract decreased PPARγ protein expression and increased that of UCP1, mainly at the highest dose (400 µg/mL). Therefore, the authors concluded that Phaeodactylum tricornutum extract exhibits anti-obesity effects by controlling lipid metabolism through PPARγ and UCP1. Finally, Gille et al. [11] incubated 3T3-L1 cells on day 7 of differentiation with an ethanolic extract of the same microalga, at a dose of 100 mg/L for 24 h. Moreover, they also tested its bioactive compound fucoxanthin at a concentration of 5 µM. According to the authors, each 100 mg/L of the microalga contained 3.6 µM of fucoxanthin. Regarding the microalga effect, the authors did not appreciate significant effects on lipid content or cell toxicity, although cluster of differentiation 36 (Cd36) and carnitine palmitoyltransferase 1a (Cpt1a) mRNA levels were significantly increased. Furthermore, Cpt1a gene expression was similarly induced by fucoxanthin incubation. Consequently, it can be proposed that the effect of the microalga extract on Cpt1a expression was due, at least in part, to its fucoxanthin content.

Animal Studies

Studies using animal models and different experimental approaches to analyze the potential anti-obesity effect of microalgae have revealed beneficial effects on body weight management and energy metabolism (Table 2; Figure 2). Seo et al.
[9] carried out a study in Male Institute of Cancer Research (ICR) mice fed either a standard diet (SD; 18% of energy from fat) or a high-fat diet (HFD; 60% of energy from fat), supplemented or not with a commercially available ethanolic extract of Spirulina maxima at doses of 150 or 450 mg/kg body weight/day, for 6 weeks. Supplementation of the HFD with the microalga extract led to a significant reduction both in body weight gain as well as in subcutaneous and visceral adipose tissues, but a dose-dependent response was not observed. In addition, after the treatment, decreases in fasting glucose, TG, total cholesterol (TC), and low-density lipoprotein cholesterol (LDL-cholesterol), as well as increases in high-density lipoprotein cholesterol (HDL-cholesterol), were observed. By exploring the mechanisms responsible for the anti-obesity of the microalga extract, the authors observed lower protein expressions of adipogenesis and browning markers in white adipose tissue of mice fed the HFD supplemented with the extract. Thus, the authors detected a decrease in protein expression of C/EBPα, PPARγ, and aP2, with both extract doses. They also found increased expression of proteins related to thermogenesis, such as PRDM16 and PGC1α, not only in brown but also in white adipose tissue. These results suggest that the extract obtained from Spirulina maxima ameliorated the obesity induced by HFD by decreasing adipogenesis and increasing energy expenditure via thermogenesis. Heo et al. [13] also tested the effects of Spirulina maxima, but in this case in rats. In their study, Sprague Dawley rats were fed a standard-fat diet (LFD group; 10% of energy from fat) or a HFD (60% of energy from fat) for 6 weeks. After this period, rats previously fed the HFD were divided into four groups: rats fed the same diet (HFD group), rats fed the HFD supplemented with an extract of Spirulina maxima at a dose of 62.5 mg/kg body weight/day (Spirulina maxima 62.5), rats fed the HFD supplemented with an extract of Spirulina maxima at a dose of 125 mg/kg body weight/day (Spirulina maxima 125) and rats fed the HFD supplemented with an extract of Spirulina maxima at a dose of 250 mg/kg body weight/day (Spirulina maxima 250). The cultivated microalgae were harvested by centrifugation (with a tubular separator), stored at -50 • C and lyophilized. All Spirulina maxima samples were administered orally for 4 weeks after being dissolved in carboxymethyl cellulose. The body weight increase induced by high-fat feeding, as well as the increase in white adipose tissue, were significantly reduced after supplementation with Spirulina maxima in a dose-dependent manner. Haemotoxylin and eosin staining of epididymal adipose tissue revealed that the treatment reduced the increase in adipocyte size induced by the HFD at all the tested doses. With regard to brown adipose tissue index, no change was found in the HFD group when compared to the LFD group, but all the treatments with Spirulina maxima induced a significant increase in this parameter. The biochemical analysis revealed that the supplementation with Spirulina maxima attenuated a diet-induced decrease in adiponectin and increase of leptin and tumor necrosis factor α (TNF-α). It is well-known that obesity is commonly accompanied by insulin resistance, and for this reason serum glucose and insulin levels were also measured, and the Homeostatic Model Assessment for Insulin Resistance (HOMA-IR) was calculated. 
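For reference, HOMA-IR is conventionally computed from fasting glucose and insulin. The sketch below shows the standard formula with hypothetical input values; it is not taken from the study being described, and it assumes glucose expressed in mmol/L and insulin in µU/mL.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Standard HOMA-IR estimate: (glucose [mmol/L] * insulin [uU/mL]) / 22.5."""
    return (fasting_glucose_mmol_l * fasting_insulin_uU_ml) / 22.5

# Hypothetical example: glucose 7.0 mmol/L, insulin 12 uU/mL
print(round(homa_ir(7.0, 12.0), 2))  # -> 3.73
```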
The Spirulina maxima 250 group showed reduced glucose and insulin levels (insulin levels also in the Spirulina maxima 125 group), and all the tested doses of the microalga diminished HOMA-IR values, reflecting the amelioration in insulin resistance induced by the HFD. In addition, TC was reduced in rats treated with 125 and 250 mg/kg body weight/day of Spirulina maxima, and the HDL-c/TC ratio was increased at all the tested doses. Furthermore, other parameters were measured in order to estimate the potential toxicity of microalga extract in the liver and kidneys, but all values were found in normal ranges. In the case of alanine aminotransferase (ALT), all doses reduced the increase induced by the HFD. In order to gain insight into the molecular mechanisms underlying the observed effects, some gene and protein expressions were measured in epididymal adipose tissue and skeletal muscle. In adipose tissue, the HFD reduced the activated form of 5' AMP-activated protein kinase (pAMPK), which leads to a decreased expression of the phosphorylated form of ACC, which is the inactive form of this enzyme, and increased protein expression of SREBP1 and FAS. The addition of the two highest doses of Spirulina maxima to the diet prevented both these effects as well as the increase in gene expression of Srebp1 and Fasn. As far as the lipolytic and oxidative pathways are concerned, HFD decreased gene expression of adipose triglyceride lipase (Atgl) and Cpt1, whereas that of nuclear factor κB (NfκB) was increased. Those changes were prevented by Spirulina maxima supplementation (Atgl and Cpt1 with the highest dose and NfκB with the two highest doses). As in the case of adipose tissue, the highest two doses of Spirulina maxima activated AMPK in skeletal muscle, and increased Cpt1 and Ucp2 gene expressions. A similar pattern of response was observed in both adipose tissue and skeletal muscle for adiponectin receptor (AdipoR1) in that gene and protein expressions increased in the Spirulina maxima 125 and Spirulina maxima 250 groups. Finally, when the authors analyzed the protein expression of nicotinamide phosphoribosyltransferase (NAMPT) and sirtuin 1 (SIRT1), they observed reduced values in both proteins either in adipose tissue and skeletal muscle from rats fed the HFD, that were prevented by Spirulina maxima treatment. Another microalga studied for its anti-obesity properties is Phaeodactylum tricornutum. Gille et al. [11] analyzed the effects of this diatom microalga in adipose tissue in mice fed a HFD. For this purpose, male C75BL/6J mice were divided into three experimental groups: the HFD group fed a diet that provided 45% kcal from fat (mainly lard), the PE100 group fed the same diet supplemented with an ethanolic extract of the microalga, at a dose of 100 mg/kg body weight/day, and the PE300 group fed the same diet supplemented with an extract of the microalga, at a dose of 300 mg/kg body weight/day. Animals were maintained under these experimental conditions for 26 days. At the end of the experimental period, body weight gain, adipose depot weight, adipocyte size distribution, and the expression of lipid and energy metabolism-related genes were analyzed in adipose tissue. Although the energy intake was similar in the three groups, the PE300 group gained less body weight and showed less total body fat mass than the HFD group did. Body weight lost over a 6-h fasting period (an indicator of energy expenditure) was higher in the PE300 group. 
Regarding the fat depots, in PE300 group epididymal and inguinal tissues decreased by 24% and 17% respectively, compared to control mice and the PE lowest concentration was without effect. In addition, gene expression of mesoderm-specific transcript homolog protein (Mest), a marker of white adipose tissue expansion, was reduced in inguinal white adipose tissue of mice supplemented with the highest dose of the microalga extract. The authors also found that inguinal white adipose tissue of mice from both PE groups contained higher percentage of smaller adipocytes and lower percentage of large ones. Moreover, in this tissue, crown-like structures (CLS, microscopic foci of dying adipocytes surrounded by macrophages), which were positive for immunostaining against the macrophage marker galectin-3 (MAC-2), were found in all the animals of the PE300 group. Regarding plasma parameters, there was a tendency towards reduced HOMA-IR index only in the group supplemented with PE100, due to the significant decrease in fasting glucose. Moreover, the authors observed that fucoxanthin metabolites, such as fucoxanthinol and amarouciaxanthin were found in interscapular brown adipose tissue, epididymal and inguinal white adipose tissues from mice treated with the microalga extracts. In order to analyze fatty acid metabolism, the expression of several related genes was measured in epididymal and inguinal white adipose tissues. In epididymal depot hormone sensitive lipase (Hsl), Perilipin 1 (Plin1), and Lpl genes were downregulated, and in inguinal WAT Ucp1 and Cpt1 were upregulated in the PE300 group, indicating higher fatty acid oxidation and thermogenesis in this group. The authors also analyzed brown adipose tissue. The activation of this tissue in PE groups was proposed based on the smaller size of brown adipocytes and the enrichment in UCP1 protein, measured by immunostaining, which was confirmed by Western blot. Finally, brown adipose tissue gene expression of Cd36 and Ppargc1a was also increased in the PE100 group; whereas the lipolytic gene Hsl (both doses), and the lipogenic gene Fasn (both doses) underwent downregulation. This microalga was also studied by Koo et al. [10] using a commercially available extract containing 3.5-6% fucoxanthin (w/w). In this study, which included an in vitro approach previously described in this review, female C57BL/6J mice were divided into six experimental groups. Thus, mice were fed a normal diet (ND), a HFD, or a HFD supplemented with the extract at a dose of 0.81 mg/kg body weight/day (PE-L), 1.62 mg/kg body weight/day (PE-M), or 3.25 mg/kg body weight/day (PE-H). After 6 weeks of treatment, the area under the curve (AUC) of body weights was higher in the HFD group than in the other experimental groups. Phaeodactylum tricornutum extract reduced total fat volume (measured by Micro Computed Tomography Analysis) (at all doses), abdominal adipose tissues (all doses), and subcutaneous depots (PE-M and PE-H groups). When looking at the mechanisms underlying these effects, the authors observed decreased protein expression of the adipogenic factors C/EBPα and PPARγ and increased protein expression of UCP1. Changes in the expressions of PPARγ and UCP1 had been observed in the in vitro experiments performed by these authors (Koo et al. [10]). This fact, linked with the changes to C/EBPα protein expression observed in vivo, suggests that Phaeodactylum tricornutum extract could activate thermogenesis and inhibit adipogenesis. 
Moreover, regarding plasma lipid profile, some changes were observed in animals treated with the Phaeodactylum tricornutum extract. Thus, while it decreases TG plasma levels at the medium dose (PE-M group), when it was added at a high dose (PE-H group) a decrease in LDL-cholesterol fraction was reported. In addition to these studies, Kim et al. [16] studied the effect of the same microalgae in C57BL/6 mice. They observed that neither body weight increase, nor food intake were changed along the treatment period, although perirrenal and epididymal adipose depot weights were significantly decreased by the Phaeodactylum tricornutum extract. Due to the fact that these are the only data concerning anti-obesity effects, a longer description was not carried out and it has not been included in the pertinent table. Sakanoi et al. [14] studied the beneficial effects of a spray-dried Euglena gracilis extract on mice. For this purpose, male C57BL/6J mice were fed a high-sucrose diet supplemented or not with Euglena gracilis extract (1%), for 8 weeks. At the end of the experimental period, no changes were observed in body weight or food intake. By contrast, total adipose tissue, perinephric and epididymal fat depots were reduced by the extract. mRNA levels of genes related to fatty acid synthesis (Fasn, glucose-6-phosphate dehydrogenase (G6pdh), malic enzyme (Me) and Srebf1), adipogenesis (Pparγ), and lypolisis (Hsl) were studied. The only change promoted by the microalga extract was an increase in Hsl gene expression. Dyslipidemia is another well-known complication of obesity but under these experimental conditions, serum lipids (TG, cholesterol and phospholipids) were not regulated by Euglena gracilis. In a recent study conducted by Guo et al. [15], the effect of an extract of Nitzschia laevis, a diatom microalga, was studied. For this purpose, C57BL/6J mice were divided into four experimental groups: the ND group was fed a normal chow diet (4.1% of energy from fat), the HFD group was fed a high-fat diet (24% of energy from fat), the HFD-LE group was fed the high-fat diet supplemented with a low-dose of Nitzschia laevis (10 mg/kg body weight/day), and the HFD-HE group was fed the high-fat diet supplemented with a high-dose of Nitzschia laevis (50 mg/kg body weight/day). The Nitzschia laevis was administered daily by oral gavage for 8 weeks. At the end of the experimental period, greater body weights were found in the three HFD-fed groups when compared to the ND group. Of these, the group supplemented with the highest dose of Nitzschia laevis showed lower body weight than that observed in the HFD group. The food intake records confirmed that the body weight-lowering effect was not due to a reduction in food consumption. When the weights of different white adipose tissue depots were analyzed, significantly lower values were appreciated in the epididymal fat depot of both Nitzschia laevis-supplemented groups, without differences between them. A similar pattern was observed in the diameters of the adipocytes of this adipose depot. As far as brown adipose tissue is concerned, weight was significantly greater in the groups fed the HFD, when compared to the ND group, with no differences among them. By contrast, significantly decreased adipocyte numbers were appreciated in these same groups when compared to the ND group, suggesting that the seaweed supplementation had an adipocyte hypertrophy attenuating effect on these animals. 
Among the HFD-fed groups, those receiving the microalga supplementation showed increased adipocyte numbers when compared with the HFD group, which reached the ND group values in the case of the group treated with the lower dose. In order to gain a better understanding of the mechanisms that may be involved in the observed effects, gene expression of Ucp1 and Pgc-1α was measured in this tissue. Ucp1 was upregulated in the two Nitzschia laevis-supplemented groups when compared to both the ND and the HFD groups. In the case of Pgc-1α gene expression, an increase was only appreciated in the HFD-LE group, when compared to the other groups. Based on the results reported by the authors, a potential involvement of ucp-1 mediated thermogenesis in the body weight lowering effect observed in the Nitzschia laevis-supplemented groups cannot be ruled out. In this study, the authors also analyzed the effects of Nitzschia laevis in gut epithelium integrity and gut microbiota. In this regard, they found that HFD feeding significantly decreased the expression of occludin, a plasma membrane protein considered as an important biomarker of the integrity and barrier function of gut epithelium. This deleterious effect was prevented in the HFD-HE group, where values similar to those found in the ND group were reached. In the case of gut microbiota composition, improved species richness and diversity were found in both Nitzschia laevis-supplemented groups when compared to the ND and HFD groups. At the phylum level, the decreased Firmicutes/Bacteroidetes ratio values appreciated in the HFD group was reversed in the two groups supplemented with the microalga, reaching values similar to those observed in the ND group. Based on the results obtained, the authors concluded that Nitzschia laevis supplementation could be an effective tool in the prevention of body weight induced by HFD feeding in mice. In this regard, the beneficial effects induced by Nitzschia laevis supplementation in gut epithelium integrity and gut microbiota modulation are highlighted as potential underlying mechanisms. Human Studies Studies addressed in humans devoted to analyzing the anti-obesity effects of microalgae are scarce to date, in comparison to those carried out using in vitro and in vivo experimental models (Table 3). In humans, Spirulina has been used as a dietary supplement for ameliorating a variety of diseases. In this line, Hernández-Lepe et al. [17] carried out a randomized double-blind crossover controlled clinical trial to evaluate the effect of short-term Spirulina maxima supplementation on plasma lipid levels and BMI. For this purpose, young (26 ± 5 years) sedentary men with BMI ≥ 25 kg/m 2 , some of whom suffering from dyslipidemia, were divided into two intervention groups: the Sm group received a supplement of Spirulina maxima (4.5 g/day) and the control group received placebo. The results showed a significant decrease in TC and TG along with a significant increase in HDL-cholesterol in the Sm group after treatment, when compared with the basal levels. These changes were observed only among dyslipidemic subjects. However, LDL-cholesterol showed no change. In addition, the authors compared the variation of each parameter between both experimental groups and observed that plasma TC level decreased significantly in obese subjects in the Sm treatment, and LDL-cholesterol was lower in overweight, obese, and dyslipidemic subjects enrolled in the Sm treatment, when compared to those in the placebo group. 
By contrast, TG and HDL-cholesterol levels were not modified. As far as BMI is concerned, a significant reduction was only observed in obese and dyslipidemic subjects after treatment. These results suggest that Spirulina maxima supplementation results in a partial improvement of blood lipid profile and BMI in men with excess body weight and dyslipidemia. Using the same microalga, Szulinska et al. [18] carried out a randomized, double-blind, placebo-controlled trial addressed on 25-60 year old individuals with obesity (BMI ≥ 30 kg/m 2 ), with well-controlled hypertension and without other comorbidities. Participants were divided into two experimental groups: placebo group (four capsules per day of microcrystalline cellulose over 3 months) and spirulina group (four capsules per day of Hawaiian Spirulina over 3 months). Each spirulina capsule contained 0.5 g of Spirulina maxima. At the end of the experimental period, spirulina group showed lower BMI, waist circumference, serum TC, LDL-cholesterol, glucose and insulin and total antioxidant state than the placebo group. No differences in serum HDL-cholesterol and TG were observed between groups. In another randomized doubled-blind, placebo-controlled trial conducted by Zeinalian et al. [19], the effect of Spirulina platensis supplementation on BMI, serum lipids, appetite, and serum vascular endothelial growth factor (VEGF) was studied. Individuals with obesity were divided into two groups, the placebo group and the group that received Spirulina platensis twice daily (500 mg each dose). After 12 weeks of intervention, a decrease in body weight, and thus in BMI, was observed, along with a reduction appetite in the group treated with Spirulina platensis. With regard to serum lipids, the only change was a significant reduction in TC, while LDL-cholesterol and TG remained unchanged after the intervention. Despite a significant increase in HDL-cholesterol in both treated and placebo groups at the end of the experimental period, there was no change in the mean differences between the two groups. VEGF is an important angiogenic factor implicated in normal and pathological vessel formation that can be an important biomarker of obesity and obesity-related cancer progression. In this study, VEGF remained unchanged after treatment with Spirulina platensis. The authors concluded that a dose of 1 g/day of Spirulina platensis for 12 weeks had beneficial effects modulating body weight and appetite, while it only modified the serum lipid profile partially. Spirulina platensis was also used in a randomized, double-blinded, placebo-controlled clinical trial reported by Yousefi et al. [20]. Obese or overweight subjects (BMI: 25-40 kg/m 2 ) were distributed into two groups, a placebo and a Spirulina platensis-treated group, who followed a restricted calorie diet for 12 weeks. The microalga was administered in four tablets of 500 mg/capsule daily. At the end of the intervention, body weight and waist circumference were reduced in the microalga-supplemented group compared to the control group. Moreover, in this group body fat reduction was higher than that observed in the placebo group. Regarding plasma parameters, TG, LDL-cholesterol, and the LDL/HDL ratio were reduced at the end of the treatment period compared with the baseline in the microalga-treated group. Based on these results, the authors suggested that Spirulina platensis could be a useful as a complementary therapy to reduce weight and TG levels. 
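Since the human studies above stratify participants by BMI (e.g., ≥ 25 kg/m2 for overweight and ≥ 30 kg/m2 for obesity), a minimal helper for computing and categorizing BMI is sketched below; the cut-offs follow the commonly used thresholds cited in these trials, and the example values are hypothetical.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    # Common cut-offs; the trials above used >= 25 (overweight) and >= 30 (obese)
    if value >= 30:
        return "obese"
    if value >= 25:
        return "overweight"
    return "normal or underweight"

b = bmi(92.0, 1.75)  # hypothetical participant
print(f"{b:.1f} kg/m^2 -> {bmi_category(b)}")
```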
Concluding Remarks Data reported in the literature, and gathered in the present review, show that there is scientific evidence supporting the anti-obesity effect of several microalgae: Euglena gracilis, Phaeodactylum tricornutum, Spirulina maxima, Spirulina platensis, and Nitzschia laevis. With the exception of one study, the published works carried out in animal models have addressed the effects of microalgae in animals submitted to an obesogenic feeding pattern. Consequently, the results have shown the ability of microalgae to total or partially prevent obesity development associated to this dietary pattern. Preclinical studies have revealed some of the mechanisms of action underlying this effect. Depending on the species and concentration, microalgae can inhibit pre-adipocyte differentiation, thus reducing the number of mature adipocytes ready to accumulate TG. Moreover, they reduce de novo lipogenesis and TG assembly, thus limiting the amount of TG to be stored. An increase in lipolysis and fatty acid oxidation can also be observed. Finally, microalgae can induce an increase in energy expenditure via thermogenesis activation in brown adipose tissue, as well as by inducing browning in white adipose tissue. It could be thought that a potential toxic effect of some constituent common in microalgae could be responsible, at least in part, for the reduced lipid retention and weight reduction. However, this possibility can be discarded because in vitro studies have shown no cytotoxicity of microalgae extracts in a wide range of doses. In parallel with the reduction in body fat accumulation, other features which are typical of individuals with obesity, such as enhanced plasma lipid levels, insulin resistance or diabetes, and low-grade inflammation, are also improved by microalgae treatment. The anti-obesity effect of microalgae, as well as the improvement of several comorbidities observed in preclinical studies, has been confirmed in clinical trials. In this case, due to the experimental design characteristics, the role of microalgae in obesity treatment, rather than in obesity prevention, has been evidenced. Concerning the limitations of the reported studies, it should be pointed out that more research is needed to determine which bioactive compounds, present in microalgae, are responsible for their anti-obesity effects, as well as to look for potential synergies among them. In addition, although several mechanisms have been proposed to explain the anti-obesity effects of microalgae, further studies are needed in order to gain more insight concerning this issue. For instance, in several studies increased expression of genes related to thermogenesis has been found, suggesting the activation of this process, but additional studies are needed to confirm that in fact thermogenesis, and consequently energy expenditure, are increased.
Classifying blazar candidates from the 3FGL unassociated catalog into BL Lacs and FSRQs using Swift and WISE data

We utilize machine learning methods to distinguish BL Lacertae objects (BL Lacs) from Flat Spectrum Radio Quasars (FSRQs) within a sample of likely X-ray blazar counterparts to Fermi 3FGL unassociated gamma-ray sources. From our previous work, we have extracted 84 sources that were classified as ≥ 99% likely to be blazars. We then utilize Swift-XRT, Fermi, and WISE (the Wide-field Infrared Survey Explorer) data together to distinguish the specific type of blazar, FSRQ or BL Lac. Various X-ray and gamma-ray parameters can be used to differentiate between these subclasses, and the two subclasses are also known to occupy different parameter space on the WISE color-color diagram. Using all these data together provides more robust results for the classified sources. We utilized a Random Forest Classifier to calculate the probability for each blazar to be associated with a BL Lac or an FSRQ. Based on P_bll, which is the probability for each source to be a BL Lac, we placed our sources into five different categories as follows: P_bll ≥ 99%: highly likely BL Lac; P_bll ≥ 90%: likely BL Lac; P_bll ≤ 1%: highly likely FSRQ; P_bll ≤ 10%: likely FSRQ; and 10% < P_bll < 90%: ambiguous. Our results categorize the 84 blazar candidates as 50 likely BL Lacs, with the remaining 34 being ambiguous. A small subset of these sources has been listed as associated sources in the most recent Fermi catalog, 4FGL, and in these cases our results are in agreement on the classification.

INTRODUCTION

Blazars are a subclass of Active Galactic Nuclei which have their jets pointing along our line of sight (Blandford & Rees 1978). They are further divided into two categories, Flat Spectrum Radio Quasars (FSRQs) and BL Lacertae Objects (BL Lacs), based on their optical spectra. The FSRQs display broad emission lines, whereas the BL Lacs display no lines or narrow lines (equivalent width < 5Å). The spectral energy distribution of these objects displays a characteristic double-bump structure. The lowest energy bump is typically attributed to the synchrotron emission (radio to X-ray) from electrons propagating in the jet, and the higher energy bump (X-ray to gamma-rays) is typically attributed to the synchrotron self-Compton mechanism, and/or external inverse Compton processes, and/or hadronic processes such as proton synchrotron. Understanding the origin of the differences between the two subclasses of blazars has been one of the open questions in the field. The idea of the blazar sequence, termed by Fossati et al. (1998) and re-visited by Ghisellini & Tavecchio (2008) and Ghisellini et al. (2017), revealed that the most luminous blazars possess lower synchrotron frequencies, which often is the case in FSRQs, and vice versa for BL Lacs. The blazar sequence has been challenged by the discovery of highly luminous, high-synchrotron-peaked blazars, e.g. Padovani et al. (2012). Moreover, Ghisellini et al. (1998) suggested a unified scheme for blazars in which FSRQs eventually evolve into BL Lacs once their accretion disk is exhausted. The observational evidence of finding FSRQs at typically higher redshifts than BL Lacs suggests this scenario. However, several (29) BL Lacs have been found at high redshifts (e.g., Kaur et al. 2017, 2018; Rajagopal et al. 2020).
This blazar parameter space and the theories regarding blazar evolution need to be further explored by obtaining a more complete sample of both types of blazars. The Fermi Gamma Ray Observatory has revealed more than 5000 sources since its launch in 2008. Blazars constitute the bulk of the overall known extragalactic gamma-ray population (> 75%) in all the Fermi catalogs: 1FGL (Abdo et al. 2010), 2FGL (Nolan et al. 2012), 3FGL (Acero et al. 2015), and 4FGL (Abdollahi et al. 2020). The galactic population is dominated by pulsars (∼ 8% of the total gamma-ray population). However, each catalog reports approximately one-third of its sources as unassociated or unknown. Finding associations for these sources, or classifying them, is a multi-year task which often requires multiwavelength observations for confirmation. In the past few years, various studies have been conducted on these gamma-ray sources in which machine learning methods were employed to classify the unassociated sources (e.g., Saz Parkinson et al. 2016; Marchesini et al. 2020a). In particular, Marchesini et al. (2020b,a) first explored the connection between gamma-rays and X-rays for blazars and later utilized X-ray data from Swift in conjunction with the Fermi gamma-ray data to find BL Lacs among the unassociated sources. We (Falcone & Stroh 2015; Kaur et al. 2019) have conducted an X-ray survey targeting the Fermi unassociated source fields and found various possible X-ray associations. Since the majority of the known gamma-ray sources are blazars and pulsars, it is highly likely that a rather large population of the unassociated sources could belong to these two populations. It should be noted that the sensitivity of Swift-XRT is ∼ 1.0 × 10^-13 erg/cm^2/s for a 4 ksec exposure, which was the average exposure for this survey. Most of the known Fermi blazars, as well as some of the known Fermi pulsars, are detectable with a high signal-to-noise ratio within this exposure time, as shown in Figs. 1-4 of Kaur et al. (2019). These authors utilized machine learning methods on these X-ray counterparts to find blazars and pulsars. Their results yielded 134 blazars and 8 pulsars with high probabilities based on the machine learning methods. In this work, the objective is to classify the highly probable blazar candidates revealed by our previous work into the subclasses of BL Lacs and FSRQs using machine learning methods. The paper is divided as follows. Section 2 describes the process of the final sample selection and Section 3 explains the overall analysis method. The results and the conclusions of this study are presented in Sections 4 and 5, respectively.

The Swift X-Ray Telescope (XRT) (Burrows et al. 2005) conducted observations of 803 3FGL unassociated sources in order to search for their X-ray counterparts. These were chosen at random from the complete sample of ∼ 1500 unassociated sources provided they were (i) not listed as a confused source in the 3FGL catalog, (ii) not listed as extended in the 3FGL catalog, and (iii) the 3FGL source 95% uncertainty region had a semi-major axis smaller than 10 arcmin, thus enabling it to fit within the Swift field of view. All these X-ray observations were completed through a Swift fill-in program, with an average XRT exposure time for each 3FGL field of approximately 4 ksec, and the results have been provided online at https://www.swift.psu.edu/unassociated/.
It is possible that a small number of X-ray sources found during this campaign are actually spurious associations with the 3FGL source. This probability can be estimated based on the known Swift-XRT sensitivity for detecting an X-ray source during the randomly distributed Swift-XRT exposures on the 3FGL fields (the 3FGL follow-up fields were distributed across the whole sky and randomly chosen for Swift-XRT follow-up as part of a fill-in observation program), coupled with the LAT error ellipse for the sources in our sample. We found this sensitivity by using 'empty' Swift-XRT fields distributed across the sky, using GRB fields with the GRB masked out, and calculating the spurious source detection density in these fields as a function of exposure time, using the same source detection criteria that are utilized for finding possible X-ray counterparts to Fermi unassociated sources. The majority of the Swift-XRT follow-up observations of the Fermi unassociated sources in our sample were of roughly 4 ksec exposure time. To be included in our sample, a potential X-ray counterpart had to be detected at the >4σ signal-to-noise threshold. For a 4 ksec Swift-XRT exposure, less than ∼1 random X-ray detection is expected for every 100 Fermi-LAT error ellipses, when using the 4σ threshold that we used for X-ray source selection and when using a typical Fermi-LAT 95% confidence ellipse for an unassociated gamma-ray source. However, a subset of our observations received significantly longer exposures and some of the Fermi-LAT error ellipses are larger than typical, thus increasing the chance coincidence probability in those cases. By using the actual Swift-XRT exposure and the Fermi-LAT error ellipse of each follow-up field, we found that the median chance probability of a spurious 4σ X-ray source detection was less than 0.01; aside from a few outliers, there was generally a low chance that a given X-ray counterpart candidate is not actually associated with the gamma-ray source. These chance probabilities are tabulated below, along with the results of this work (a simplified version of this estimate is sketched at the end of this section). Among these fields, 217 X-ray sources were found which met the following criteria: (1) only one X-ray source within the 95% 3FGL uncertainty region, and (2) this possible X-ray counterpart is detected at a signal-to-noise ratio ≥ 4. Kaur et al. (2019, hereafter K19) performed a machine learning analysis (random forest) to find pulsar and blazar candidates among these 217 X-ray counterparts to the 3FGL unassociated sources. According to the random forest classifier method used in K19, 134 sources from the sample were classified as highly likely blazars, i.e. sources for which the probability to be associated with the blazar class was ≥99%. See Section 3 for the random forest analysis method and Section 4 for the classification criteria in K19. For this work, we selected these 134 sources which were found to be highly likely blazar candidates. Since the time of the K19 publication, a few more Fermi unassociated sources were observed with Swift-XRT, and 25 of these were found to have exactly one source within the Fermi uncertainty circle with SNR ≥ 4. We applied the K19 criteria and ML methods to these 25 X-ray sources in order to form a more complete list of blazar candidates for this work. P_bzr was defined as the probability for a source to be a blazar, as yielded by the RF classifier.
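As a rough illustration of the spurious-association estimate described at the beginning of this section, the sketch below treats spurious >4σ detections as a Poisson process: the expected number of chance sources is the measured spurious-source density at the given exposure multiplied by the area of the LAT 95% error ellipse. The function and the numerical values are illustrative placeholders, not the measured survey quantities.

import numpy as np

def chance_coincidence_prob(density_per_deg2, semi_major_deg, semi_minor_deg):
    # Expected number of spurious >4-sigma X-ray sources inside the LAT 95%
    # error ellipse, assuming a uniform (Poisson) spurious-source density.
    ellipse_area = np.pi * semi_major_deg * semi_minor_deg   # deg^2
    expected = density_per_deg2 * ellipse_area
    # Probability of at least one chance detection in the ellipse.
    return 1.0 - np.exp(-expected)

# Illustrative values only: a density giving <~1 spurious source per 100
# typical ellipses, roughly matching the text for a 4 ksec exposure.
print(chance_coincidence_prob(density_per_deg2=0.5,
                              semi_major_deg=0.08, semi_minor_deg=0.06))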
As was done in K19, the sources were classified into one of the following categories: pulsar (P_bzr ≤ 1%), likely pulsar (P_bzr ≤ 10%), blazar (P_bzr ≥ 99%), likely blazar (P_bzr ≥ 90%), and ambiguous (10% < P_bzr < 90%). The results of the 25 new sources are presented in Table 1. Among these, 7 were found to be highly likely blazar candidates. These 7 sources were added to our initial 134 source sample of likely X-ray blazar counterparts to 3FGL unassociated sources. Overall, we obtained 139 sources which are highly likely blazars. The Swift-XRT positions of these sources were utilized to search for any positional associations in the AllWISE catalog (Cutri et al. 2013) within a 5 arcsec positional uncertainty. The average uncertainty associated with an XRT position for these data is less than ∼ 5"; therefore, only if a WISE source was found within this search radius was it assumed to be the WISE counterpart to this source. While it is possible that a small number of these assumed WISE counterparts could be spatially coincident by random chance, this is likely to be the case for only a few of the WISE sources. We randomly selected sources from the Swift-XRT blazar catalog, which comprises 2831 blazars using 15 years of data (Giommi et al. 2019). We searched for WISE sources corresponding to these blazar positions within the 5" uncertainty region, which is the average Swift-XRT positional uncertainty. Aside from the WISE counterparts to these blazars, we found that a secondary source within 5" was found for 22 positions, which suggests that there is a ∼ 0.7% probability that a WISE source will be found at a random location in the sky within a 5" uncertainty region. Of course, some of the WISE positions fall within a much smaller radial distance from the XRT positional centroid and WISE counterparts are expected for many XRT sources, so this estimate is an upper limit that simply tells us that most of the WISE sources are indeed likely counterparts to the XRT sources. The AllWISE catalog was generated using the Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010), which provides fluxes, proper motions, and accurate positions for approximately 800 million objects. We found matches for 84 sources in our highly likely blazar candidate sample (see the sketch below for an illustration of this matching step). We proceeded with further analysis for this final sample of 84 sources as described in the next section.

3. ANALYSIS Massaro et al. (2012) introduced a method to find blazars among other sources using the WISE colors W1, W2, W3, and W4, corresponding to 3.4, 4.6, 12, and 22 µm, respectively. These authors showed that blazars occupy a particular region on a color-color plot (W1-W2 vs W2-W3) in the IR regime (WISE in this case) which separates them from other source types. They termed this region the WISE Blazar Strip (WBS). Moreover, the two classes of blazars, FSRQs and BL Lacs, occupy different regions within this strip. Therefore, these color indices can be utilized to separate one blazar class from another, which is the immediate objective of this work. See Figs. 1 and 2 in Massaro et al. (2012) for details. Fig. 2 shows the blazar strip for Fermi blazars along with the 84 unassociated sources. It is quite apparent from this figure that these are highly likely blazars (as also predicted by our ML methods in K19), since they follow the pattern of the known blazars. In addition, the fact that the BL Lacs and FSRQs occupy different parameter space on this color-color plot is clearly shown.
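As an illustration of the AllWISE positional matching described in Section 2 above, the following sketch uses astropy to pair each XRT position with its nearest AllWISE entry and keeps only pairs separated by less than 5 arcsec. The coordinate values are hypothetical placeholders for the actual XRT and AllWISE tables.

from astropy import units as u
from astropy.coordinates import SkyCoord

# Hypothetical RA/Dec values (degrees) standing in for the XRT source list
# and the AllWISE catalog entries.
xrt = SkyCoord(ra=[150.1234, 201.5678] * u.deg, dec=[-2.3450, 33.2100] * u.deg)
wise = SkyCoord(ra=[150.1240, 10.0000] * u.deg, dec=[-2.3452, 5.0000] * u.deg)

# Nearest AllWISE neighbour for each XRT position.
idx, sep2d, _ = xrt.match_to_catalog_sky(wise)

# Accept a counterpart only within the ~5 arcsec XRT positional uncertainty.
for i, (j, sep) in enumerate(zip(idx, sep2d.arcsec)):
    if sep < 5.0:
        print(f"XRT source {i} -> WISE source {j} (sep = {sep:.2f} arcsec)")
    else:
        print(f"XRT source {i}: no WISE counterpart within 5 arcsec")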
In our previous work, we used both X-ray and gamma-ray properties of the Fermi unassociated sources to distinguish pulsars from blazars using the random forest classifier machine learning algorithm (Breiman 2001). Here we employ the same method, with additional WISE parameters, on the subsample of 84 highly likely blazar candidates described in Section 2. While some distinction between the two classes of blazars can be seen when the five considered X-ray and gamma-ray properties are compared (namely X-ray flux, gamma-ray flux, gamma-ray variability index, gamma-ray spectral index, and curvature), the addition of WISE color parameters is expected to enhance this distinction. In this work, we utilize these five X-ray and gamma-ray parameters along with two WISE color indices, W1-W2 and W2-W3, and compare them simultaneously using the random forest classifier. In order to proceed with this algorithm, a sample that includes both known classes of blazars was required, with each of the blazars in this sample having known values for all seven parameters mentioned above.

Training and Test data A total of 501 known blazars were extracted from the 3LAC catalog (Ackermann et al. 2015) for which gamma-ray, X-ray, and WISE data were available. The gamma-ray and X-ray properties of these known blazars were obtained from the 3FGL (Acero et al. 2015) and 3LAC (Ackermann et al. 2015) catalogs, respectively. Fig. 4 displays the comparison of the two subclasses of blazars along with the unassociated sources. It should be noted that the X-ray fluxes for blazars in the 3LAC catalog were extracted from the RASS survey (Voges et al. 1999, 2000). These flux values are provided in the energy range 0.1-2.4 keV. For the 84 unassociated sources, the X-ray fluxes were derived using Swift-XRT in the energy range from 0.1 to 2.4 keV for consistency. The WISE color indices were obtained from the AllWISE catalog. Out of these 501 blazars, 162 were FSRQs and 339 were BL Lacs. The unbalanced proportion of these two classes could lead to results biased towards one particular class; therefore, this was corrected by using a class balancing algorithm, SMOTE (Chawla et al. 2002). SMOTE uses the k nearest neighbors method to synthetically generate sources for the underrepresented class to match the number of sources in the other class. Here we employed the SMOTE implementation in the imbalanced-learn (scikit-learn-contrib) Python package, which utilizes 5 nearest neighbors to create each synthetic data point. In this case, since there were 162 FSRQs compared to 339 BL Lacs, the SMOTE algorithm added 177 synthetic data points mimicking the properties of the original FSRQs. This led to our final sample of 339 BL Lacs and 339 FSRQs. An example displaying the results of the SMOTE analysis is shown in Fig. 1. These results are shown for gamma-ray flux vs. spectral index, but the synthetic FSRQs mimic the real FSRQs in all the parameters which are used to train the classifier.

Random Forest Parameter Selection and Accuracy Calculation Method We employed the random forest classifier from sklearn using Python 3.6, which is a supervised machine learning method based on decision trees (Breiman 2001). A complete description of this method and the details of its implementation are given in Section 3.1 of K19. The parameter tuning algorithm GridSearchCV in sklearn was employed to find the optimum parameters for the random forest classifier.
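A minimal sketch of the class-balancing step described earlier in this subsection, assuming the seven parameters of each known blazar are collected into one feature row; the feature values below are random placeholders for the real measurements, and SMOTE is taken from the imbalanced-learn package with the 5-nearest-neighbour setting noted above.

import numpy as np
from imblearn.over_sampling import SMOTE

# Placeholder training sample: 501 known blazars x 7 features
# (X-ray flux, gamma-ray flux, variability index, spectral index,
#  curvature, W1-W2, W2-W3); label 1 = BL Lac (339), 0 = FSRQ (162).
rng = np.random.default_rng(0)
X = rng.normal(size=(501, 7))
y = np.array([1] * 339 + [0] * 162)

# Each synthetic FSRQ is interpolated from 5 nearest real FSRQs until the
# minority class matches the 339 BL Lacs.
X_bal, y_bal = SMOTE(k_neighbors=5, random_state=42).fit_resample(X, y)
print(np.bincount(y_bal))   # -> [339 339]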
Based on this GridSearchCV optimization, 1000 decision trees with a maximum depth of 10 splits (nodes) in each tree were employed to obtain the final classification for each source. Generally, in a machine learning analysis, the majority of the complete sample is reserved for the training set and a smaller subset is utilized as the test sample to check the accuracy of the underlying classifier. However, the accuracy obtained from this approach is biased, since it is based on one given test sample. Therefore, in this work, we employed a 10-fold cross-validation method using sklearn, which divided the original sample into 10 equal-size subsamples such that one of these 10 subsamples was chosen as the test sample (one at a time) and the rest combined were considered the training sample. The trained classifier was then applied to the given test subsample to obtain the accuracy value. This procedure was repeated 10 times to obtain accuracies from each test sample, and the overall accuracy was calculated as the average of the accuracies obtained from these iterations. This accuracy calculation method has the advantage that it iterates through the complete sample, which results in less sample bias relative to calculating accuracy based on a single test sample. Based on the procedure explained above, the random forest classifier was trained and then cross-validated, which yielded an average accuracy of 93.5%. For an additional crosscheck, we also conducted an experiment where only a single test sample was chosen to check the accuracy of this classifier. Based on one randomly chosen test sample of 106 known BL Lacs and 107 known FSRQs, our classifier returned 98 true FSRQs and 102 true BL Lacs; in other words, it wrongly classified 4 BL Lacs and 9 FSRQs. This yields an accuracy of ∼ 94%, and is consistent with our more robust accuracy calculation described above. While this result might imply that the classifier is slightly biased towards finding BL Lac class objects, this effect would be less than a few percent, based on these accuracy estimates. Furthermore, we do not classify a source as a BL Lac or FSRQ when its respective probability is less than 90%. Of course, while misclassifications among the classified sources are not expected, it is expected that the 'ambiguous/unclassified' sources could harbor both FSRQs and BL Lacs that could not be accurately classified. The trained classifier using the X-ray, gamma-ray, and WISE parameters was then applied to the sample of 84 blazar candidate counterpart sources. Since the RF classifier provides a probability value for each source to belong to a particular class, we define the following classes based on the predicted probabilities: BL Lac (bll): P_bll ≥ 99%, likely BL Lac: P_bll ≥ 90%, FSRQ: P_bll ≤ 1%, likely FSRQ: P_bll ≤ 10%, and ambiguous: 10% < P_bll < 90%, where P_bll is the probability for a source to be a BL Lac. Using these definitions, we found that among the sample of 84 highly likely blazar candidates, 50 are likely BL Lacs and 34 are ambiguous. None of the sources were predicted to be FSRQs, nor did any of them fall into the category of likely FSRQs (see Table 2). This is consistent with a visual inspection of Fig. 2, which shows that our newly identified blazar candidate/counterpart sample is constituted primarily of BL Lac class blazars, with a relatively small number of outliers that are ambiguously consistent with either the BL Lac or FSRQ classification, and another small group of outliers that are ambiguous in the sense that they seem to fall outside both the BL Lac and the FSRQ distributions.
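The following sketch outlines the classification pipeline described above: hyper-parameter tuning with GridSearchCV, 10-fold cross-validated accuracy, and mapping P_bll onto the five probability categories. The arrays are random placeholders standing in for the balanced training set and the 84 candidates, and the parameter grid is illustrative rather than the exact grid that was searched.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Placeholder data: a balanced training set (label 1 = BL Lac, 0 = FSRQ)
# and the 84 unassociated candidates, each described by 7 features.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(678, 7))
y_train = np.array([1] * 339 + [0] * 339)
X_cand = rng.normal(size=(84, 7))

# Hyper-parameter tuning in the spirit of the GridSearchCV step in the text.
grid = GridSearchCV(RandomForestClassifier(random_state=42),
                    param_grid={"n_estimators": [500, 1000],
                                "max_depth": [5, 10]},
                    cv=5)
grid.fit(X_train, y_train)
rf = grid.best_estimator_

# 10-fold cross-validated accuracy on the known-blazar sample.
print("mean CV accuracy:", cross_val_score(rf, X_train, y_train, cv=10).mean())

# P_bll for each candidate, mapped onto the categories used in this work.
p_bll = rf.predict_proba(X_cand)[:, list(rf.classes_).index(1)]

def category(p):
    if p >= 0.99:
        return "BL Lac"
    if p >= 0.90:
        return "likely BL Lac"
    if p <= 0.01:
        return "FSRQ"
    if p <= 0.10:
        return "likely FSRQ"
    return "ambiguous"

labels = [category(p) for p in p_bll]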
The respective significances (percentage importances) of each parameter employed in the classifier are as follows: X-ray flux: 0.058, Curvature: 0.062, Spectral Index: 0.241, Variability Index: 0.094, Gamma-ray flux: 0.055, W1-W2: 0.312, and W2-W3: 0.175.

Miscellaneous-Outliers A few of these unassociated source candidates diverge from the usual WBS, as displayed in Fig. 3 using black circular regions around them. These are seven in number, of which one belongs to the likely BL Lac category and the rest to the ambiguous category. After further inspection, it was found that the positions of three of these were coincident with stars within the 5 arcsec positional uncertainties of the Swift-XRT positions. TYC 4199-1248, which was also reported in K19, corresponds to a positional coincidence with 3FGL J1729.0+6049. Similarly, 3FGL J0748.8-2208 is spatially coincident with a star, TYX 5993-3722-1, and 3FGL J1801.5-7825 with HD 162298. No further information about these stars was found in the literature. It is clear that the WISE position match yielded the colors of these non-blazar sources, which would explain their placement far to the left of the WBS in Fig. 3. This is consistent with the fact that our ML method classified them as "ambiguous." In these few cases, it is also possible that the stellar systems could be associated with the source of gamma rays from the corresponding Fermi source. However, one of these outliers, namely 3FGL J1958.1+2436, is a confirmed BL Lac despite its position on the WISE color-color plot. Another interesting case is that of 3FGL J2035.8+4902, for which the position of the only X-ray source in the 3FGL error circle is positionally coincident with an eclipsing binary, V* V2552 Cyg. Some of these outliers may not belong to the blazar population and/or may not be the actual X-ray counterpart to the Fermi unassociated source, and direct methods such as optical spectroscopy could be used to verify their true nature. However, in some of the outlier cases, it is also possible that the detected X-ray source is a counterpart blazar which deviates from the usual gamma-ray blazar population and therefore should be further investigated for its interesting behavior. It should be noted that although the positions of the few outliers on the WISE color-color plot do not mimic the other gamma-ray blazars, these parameters lie within the limits of the more general blazar population, particularly as an extension of the BL Lac distribution; see Fig. 1 in Massaro et al. (2011). Since the latest Fermi catalog Data Release 2 (4FGL-DR2; Lott et al. 2020) was published recently, we compared our list of 84 sources to the source classifications in this release, for the cases where they are available. We found that 52 of the 84 sources are identified as bcus, and 7 are identified as BL Lacs.
All except one (3FGL J0427.9-6704, classified as a "Binary" in the 4FGL catalog) of our results match the 4FGL predictions, as seen in Table 2, although three identified BL Lacs in the 4FGL catalog were characterized as "ambiguous" by our RF classifier (note: our classifier did find that these 3 sources were > 81% likely to be BL Lacs). In addition, we compared our classification results with a similar study conducted by Marchesini et al. (2020a), in which the authors searched for BL Lac candidates among the Fermi unassociated sources. These authors selected their sample using slightly different criteria, such as SNR > 3 as compared to our SNR > 4. Moreover, these authors also selected X-ray counterparts where more than one X-ray source was found within the Fermi uncertainty region. Regardless, these authors found 19 sources which are highly likely BL Lacs. Among these 19, we found 7 which are also present in our sample. Four out of these 7 are identified as BL Lacs in our classification, whereas 3 are identified as ambiguous. Furthermore, it should be noted that two of these ambiguous sources have a BL Lac probability > 89% in our classification, which makes them consistent with the BL Lac classification. Therefore, we consider our results to be consistent with this independent study on this subset of the sources in our study.

CONCLUSIONS The immediate objective of this work is to classify, as either FSRQ or BL Lac, a sample of 84 highly likely X-ray blazar candidates that were drawn from a list of X-ray counterparts to 3FGL unassociated sources. This is a step towards completeness in finding the gamma-ray emitting blazars. Finding associations for the unassociated gamma-ray emitting sources has been a necessary step in order to understand the gamma-ray emitting population in the Universe. Most of the gamma-ray sources belong to the category of blazars. Classifying these blazars is an important step towards understanding their evolution and their role in galaxy evolution. Understanding the distribution of the subclasses of these gamma-ray emitting blazars plays a vital role in putting constraints on the blazar sequence (e.g., Ghisellini et al. 2017). In previous work, we contributed to this task by finding counterparts and classifying the blazars among these unassociated sources, while this paper focuses on the sub-classification of these blazar sources. Using machine learning methods, we find that 50 out of these 84 sources are ≥ 90% likely to be BL Lacs, and the other 34 cannot be categorized (i.e. we categorize them as 'ambiguous'). However, since all these outliers/ambiguous sources are a subset of our blazar sample, they could be considered "bcus" (blazars of uncertain type). This implies that these sources are most likely either peculiar BL Lacs, FSRQs, or transitional blazars. Various follow-up multiwavelength campaigns would be required to discern their nature. We do not find any of these sources to be clearly labeled as FSRQs. There could be multiple reasons for the paucity of sources categorized as FSRQs, e.g.
(i) most of the X-ray counterparts to Fermi unassociated sources are indeed BL Lacs, which could be caused by inherent selection biases such as the fact that BL Lacs are more likely to have a synchrotron peak in the UV to X-ray band, (ii) the blazar component of the Fermi unassociated sources has a selection bias that makes it more likely for an unassociated source to be a BL Lac, or (iii) since FSRQs are brighter and often have spectra available via various surveys, it is highly likely that most of the unassociated blazar population in the Fermi catalog are indeed BL Lacs. This pattern has also been seen in various optical spectroscopic surveys of unassociated Fermi sources, e.g., Álvarez Crespo et al. (2016); Crespo et al. (2016); Peña-Herazo et al. (2017); Paiano et al. (2017). In the future, optical spectroscopic techniques can be utilized to determine the nature of the 34 ambiguous sources, and to further investigate the properties of the classified blazars. Our study provides likely blazar targets for these optical spectroscopic observations, providing another avenue for localizing and characterizing possible counterparts with high precision. The ongoing questions regarding the Fermi blazar sequence and the Fermi blazar divide require redshift estimates for blazars. One should be able to confirm and determine the redshifts for any FSRQs found within the 34 ambiguous sources by using various 4m-class optical facilities, e.g., Álvarez Crespo et al. (2016); Crespo et al. (2016). For the case of BL Lacs, traditionally 8-10m class telescopes are utilized (Shaw et al. 2009, 2013; Paiano et al. 2019). This method of estimating the redshifts of BL Lacs is highly effective, but it is time- and cost-intensive. Recently, Rau et al. (2012) devised a photometric method to determine the redshifts (z), or find an upper limit, for BL Lacs. One caveat of this method is that it works only for sources with z > 1.3. This method has successfully found redshift estimates for 29 sources to date (Kaur et al. 2017, 2018; Rajagopal et al. 2020). We emphasize that our results are consistent with the updated 4FGL catalog classifications, as seen in Table 2, for the newly classified sources, which provides further evidence that our added classifications (not classified in the 4FGL) are likely robust. In addition, we find three sources coincident with the positions of known stars, as well as another source, 3FGL J2035.8+4902, spatially coincident with an eclipsing binary, V* V2552 Cyg. It should be noted that the latter is listed as a "bcu" in the 4FGL catalog based on a nearby fainter source at R.A. (J2000): 20 35 51.63 and Dec. (J2000): +49 01 44.28, for which no WISE association could be found within a 5" radius. Further investigation of these sources is beyond the scope of the work presented here.

ACKNOWLEDGMENTS The authors would like to gratefully acknowledge the support provided by NASA research grants 80NSSC18K1730 and 80NSSC19K1713. M.C.S. is partially supported by the Heising-Simons Foundation under grant #2018-0911. This publication made use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. The astronomical database comparison tool Topcat (Taylor 2005) was employed in this work. Figure 3.
WISE blazar strip for the known Fermi BL Lacs (blue) and FSRQs (red). Overplotted are the unassociated blazar candidates from this work as displayed in Fig. 2. The subcategories displayed are based on the probabilities obtained with our machine learning algorithm, dividing these into likely BL Lacs (magenta, P_bll ≥ 90%) and ambiguous blazars (green, 10% < P_bll < 90%). The outliers from the WBS are enclosed in black circles. See the discussion in Section 4.1 for complete details on these sources. W1, W2, and W3 correspond to the WISE filters at 3.4, 4.6, and 12 µm, respectively.
[Figure caption fragment] The plotted parameters are defined as follows: FX, Signif_Curve, Spectral Index, Variability Index, FG, w1-w2, and w2-w3 represent log10(X-ray Flux), Gamma-ray Curvature, Gamma-ray Spectral Index, log10(Gamma-ray Variability Index), log10(Gamma-ray Flux), WISE Color Index (W1-W2), and WISE Color Index (W2-W3), respectively.
[Table 1 notes] (b) The Fermi source name as defined in the 3FGL catalog. (c) The classification based on the probability of a given source to be a blazar/pulsar/ambiguous, as defined in K19 and described in Section 2. (d) The probability of a given source to be identified as a BL Lac, derived from the random forest classifier.
[Table 2 notes] (a) The name of the Swift-discovered X-ray source within the 95% Fermi uncertainty region of the corresponding 3FGL source, as defined in K19. (b) The Fermi source name as defined in the 3FGL catalog. (c) The expected number of X-ray sources found within the Fermi uncertainty ellipse using Swift-XRT; see Section 2 for further details. (d) The classification based on the probability of a given source to be a BL Lac/FSRQ/ambiguous, as defined in this work; see Section 4. (e) The probability of a given source to be identified as a BL Lac, derived from the random forest classifier. Note: The Swift X-ray position of this source is positionally coincident with an eclipsing binary, V* V2552 Cyg.
Facilities: Swift (XRT), WISE
2020-12-15T02:15:52.926Z
2020-12-11T00:00:00.000
{ "year": 2021, "sha1": "9f9f630a4fd25780cb432f4404c7bc17eda54bcc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2012.06587", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f5f71e573fb9dcc8f9a5e108d56ef38ad0533089", "s2fieldsofstudy": [ "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
245882359
pes2o/s2orc
v3-fos-license
Large-Scale Gastric Cancer Susceptibility Gene Identification Based on Gradient Boosting Decision Tree

The early clinical symptoms of gastric cancer are not obvious, and metastasis may already have occurred by the time of treatment. Poor prognosis is one of the important reasons for the high mortality of gastric cancer. Therefore, identified gastric cancer-related genes can be used as markers for diagnosis and treatment, improving diagnostic precision and guiding personalized treatment. In order to further reveal the pathogenesis of gastric cancer at the gene level, we propose a method based on the Gradient Boosting Decision Tree (GBDT) to identify gastric cancer susceptibility genes through a gene interaction network. Based on the known genes related to gastric cancer, we collected additional genes which can interact with them and constructed a gene interaction network. Random Walk was used to extract the network association of each gene, and GBDT was used to identify the gastric cancer-related genes. To evaluate the AUC and AUPR of our algorithm, we performed 10-fold cross-validation. GBDT achieved an AUC of 0.89 and an AUPR of 0.81. We selected four other methods to compare with GBDT and found that GBDT performed best.

INTRODUCTION There are about 950,000 new cases of gastric cancer worldwide each year, and nearly 700,000 deaths; it is one of the most serious tumors (Rawla and Barsouk, 2019). The early clinical symptoms of gastric cancer are not obvious, and metastasis may already have occurred by the time of treatment (Axon, 2006). Poor prognosis is one of the important reasons for the high mortality of gastric cancer (Eguchi et al., 2003). Therefore, identified gastric cancer-related genes can be used as markers for diagnosis and treatment, improving diagnostic precision and guiding personalized treatment (Duffy et al., 2014). Identifying gastric cancer-related genes plays an important role in the treatment of gastric cancer. Research on metastasis-related genes is conducive to the timely detection of early metastasis and the screening of new markers and therapeutic targets, thereby improving the survival rate of patients (Arturi et al., 1997). Animal models used to screen gastric cancer metastasis-related genes (Wang and Chen, 2002) can closely mimic the process of tumor metastasis in vivo, with high metastasis efficiency, clear phenotypic characteristics, and good clinical similarity. The cell line-derived xenograft (CDX) model is a tumor model constructed by transplanting cultured tumor cells into immunodeficient mice (Georges et al., 2019). The cell lines used in the CDX model have been cultured in vitro for many generations, and their biological characteristics have changed significantly. Some tumor cell lines that adapt to culture in vitro and have metastatic potential have been selected, so it is easy to obtain a metastasis model. The CDX model can be established by subcutaneous injection, intraperitoneal injection, caudal vein injection, and so on (Lallo et al., 2017). Zhu et al. (2020) established a xenotransplantation model by subcutaneous injection of the gastric cancer cell line BGC-823 into the hind limbs of nude mice. They found that miR-106a had the potential to promote tumor growth by targeting Smad7. At the same time, they found that miR-106a was related to peritoneal metastasis of gastric cancer. At present, studies have found that the gastrin level has a strong relationship with the development of gastric cancer. Zu et al.
(2018) successfully established a cell xenotransplantation model by subcutaneous injection of the human gastric cancer cell line SGC-7901 in nude mice. They found that gastrin can inhibit the proliferation of poorly differentiated gastric cancer cells and enhance the inhibitory effect of cisplatin on gastric cancer by activating the ERK-p65-miR-23a/27a/24 axis. Tumor cells labeled with enzymatic markers can also be used to establish a CDX model (Agashe and Kurzrock, 2020), which is helpful for dynamically monitoring tumor metastasis in vivo and facilitates the screening of metastasis-related genes. Miwa et al. (2019) successfully established an intraperitoneal metastasis model by injecting MKN1 (MKN1-Luc) and MKN45 (MKN45-Luc) gastric cancer cells stably expressing luciferase, as well as N87, KATO III, NUGC4, and OCUM-1 gastric cancer cells, into the abdominal cavity of nude mice. A liver metastasis model was successfully established by injecting MKN1-Luc and MKN45-Luc directly into the portal vein of mice. Because the CDX model is established from serially passaged cell lines and lacks the microenvironment of tumor growth in the human body (Lallo et al., 2017), it cannot adequately simulate the growth and metastasis of tumors in humans. Patient-derived cell (PDC) models use patient-derived tumor cells isolated from malignant effusions such as ascites and pleural effusion (Bolck et al., 2019). Therefore, they can better reflect the individualized characteristics of patients and show unique advantages in the screening of tumor metastasis-related genes and in clinical drug screening. Lee et al. (2015) established a PDC model with cells collected from patients with metastatic cancer. The study found that the genomic changes of the primary tumor and the derived PDC model were highly consistent, with a correlation of average variant allele frequency of 0.878. They further compared the genomic characteristics of the primary tumor with P0, P1, and P2 cells and found that the three samples (P0, P1, and P2 cells) were highly correlated. The drug response of the model reflects the clinical response of patients to targeted drugs. Although the PDC model established from metastatic patient-derived tumor cells can reflect the individualized characteristics of patients, the cells are cultured in vitro, which is technically difficult and cannot simulate the process of tumor metastasis in vivo. Therefore, the use of this model to screen metastasis-related genes is limited. The metastasis-related genes screened with the above CDX and PDC models are conducive to the discovery of molecules promoting gastric cancer metastasis and help the early detection of gastric cancer metastasis in the clinic (Almagro et al., 2014). The patient-derived xenograft (PDX) model overcomes the shortcomings of the CDX and PDC models and is currently a better model for screening metastasis-related genes. It is a xenotransplantation model established by transplanting fresh clinical surgical specimens into immunodeficient mice. Because it maintains the microenvironment of primary tumor growth, it can better simulate the biological behavior of tumors in vivo. Choi et al. (2016) successfully established 15 gastric cancer PDX models and found that the histological and genetic characteristics of the tumor models remained stable in subsequent passages and were highly consistent with the primary tumor. This finding makes it possible to use PDX models for gastric cancer molecular research and individualized treatment.
The PDX model has genomic characteristics that are relatively consistent with the primary tumor, which is very conducive to the screening of individualized metastasis-related genes. Zhang et al. (2015) successfully established 32 PDX models of gastric cancer and found that the gene amplification of FGFR2, MET, and ERBB2 is very similar between the PDX models and their parent tumors, and that the expression of PTEN and MET proteins is also moderately consistent. These data provide a theoretical basis for in vivo testing of individualized therapy and for the screening of metastasis-related genes. There are many methods of tissue transplantation when establishing a PDX model, including subcutaneous transplantation, renal capsule transplantation, orthotopic transplantation, etc. (Okada et al., 2018). Among them, subcutaneous transplantation is the most commonly used method. Guo et al. (2019) established a PDX model of gastric cancer by subcutaneous transplantation and revealed the molecular mechanism by which ISL1 promotes gastric cancer metastasis by binding the ZEB1 promoter together with the cofactor SETD7; ISL1 may be a potential prognostic marker of gastric cancer. Because the microenvironment of orthotopically transplanted tumors is closer to the human environment, orthotopic transplantation can simulate the growth of tumors in the human body better than subcutaneous transplantation, and it is easier to simulate clinical metastasis, which is beneficial for screening metastasis-related genes. Wang et al. (2018) found through array analysis that 28 miRNAs are differentially expressed in invasive gastric cancer. Among these 28 miRNAs, miR-29b is one of the most significantly down-regulated; it binds the miRNA response element (MRE) of MMP2 and negatively regulates it, thereby affecting the development of gastric cancer. However, this kind of animal model experiment is very costly and time-consuming. With the continuous enhancement of computing power, computational methods have become able to process massive amounts of biological data and mine knowledge from the data (Zhao et al., 2021). Deep learning, machine learning, and reinforcement learning have been widely used in the fields of biology and medicine (Zhao et al., 2020a; Tianyi et al., 2020). These methods use existing knowledge to construct complex mathematical models to predict new knowledge (Zhao et al., 2020b). In this paper, we extracted the network association of each gene by Random Walk (RW) and used GBDT to identify gastric cancer-related genes.

METHOD We obtained 435 genes that are known to be related to gastric cancer from DisGeNet (Piñero et al., 2020). We then collected 896 genes that can interact with these 435 genes from HumanNet V2.0 (Hwang et al., 2019). Based on the interaction information, we built a gene interaction network. This network contains 1331 nodes, and each node is a gene.

Extracting Features by RW The core formula of RW is as follows:

P_{t+1} = (1 - γ) · A · P_t + γ · P_0,   (1)

where A is the (normalized) adjacency matrix of the gene interaction network, P is the random walk matrix (with P_0 the initial state), and γ is a parameter that needs to be set; we set γ to 0.5 based on experience. If ||P_{t+1} - P_t|| > ℓ (where ℓ can be set to an arbitrarily small number), we repeat Formula (1); otherwise, we take P_{t+1} as the final RW matrix.
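For illustration, a minimal sketch of the iteration of Formula (1) until convergence; the column normalization of A and the use of the identity matrix as the initial/restart matrix P_0 are assumptions of the sketch, since these details are not spelled out above.

import numpy as np

def random_walk_with_restart(A, gamma=0.5, tol=1e-10, max_iter=1000):
    # Column-normalize the adjacency matrix so each column sums to 1.
    W = A / np.maximum(A.sum(axis=0, keepdims=True), 1e-12)
    P0 = np.eye(A.shape[0])          # restart at each gene in turn (assumption)
    P = P0.copy()
    for _ in range(max_iter):
        # Formula (1): P_{t+1} = (1 - gamma) * A * P_t + gamma * P_0
        P_next = (1.0 - gamma) * W @ P + gamma * P0
        if np.abs(P_next - P).max() <= tol:   # stop when ||P_{t+1} - P_t|| <= l
            return P_next
        P = P_next
    return P

# Toy 4-gene network with a hypothetical adjacency matrix; column i of the
# result is used as the network-association feature vector of gene i.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
features = random_walk_with_restart(A)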
Identifying Gastric Cancer Susceptibility Genes by GBDT After obtaining the features of the genes by RW, we need to build a classifier to identify whether a gene is associated with gastric cancer. GBDT does not require the data to be scaled to build the model, and it is also suitable for data sets in which discrete features and continuous features exist at the same time. First, the decision tree used by GBDT is a CART regression tree. Whether it deals with regression problems or with binary and multi-class classification, the decision trees used by GBDT are all CART regression trees, because the gradient value to be fitted in each iteration of GBDT is a continuous value. The most important step of the regression tree algorithm is to find the best split point; the candidate split points in the regression tree include all attainable values of all features. The criterion for the best split point in a classification tree is entropy or the Gini coefficient, both of which measure purity, but the sample labels in a regression tree are continuous values, so such indicators are no longer appropriate; instead, the squared error is used, which judges the degree of fit very well. The process of constructing the CART regression tree is as follows. Input: training data set D. Output: regression tree f(x). Recursively divide the input space of the training data into two sub-regions and determine the output value on each sub-region to construct a binary decision tree. (1) As shown in Formula (2), choose the splitting variable j and split point s that minimize

min_{j,s} [ min_{c_1} Σ_{x_i ∈ R_1(j,s)} (y_i - c_1)^2 + min_{c_2} Σ_{x_i ∈ R_2(j,s)} (y_i - c_2)^2 ].   (2)

(2) Use the chosen pair (j, s) to divide the region and determine the corresponding output values:

R_1(j, s) = {x | x^(j) ≤ s}, R_2(j, s) = {x | x^(j) > s}, c_m = (1/N_m) Σ_{x_i ∈ R_m(j,s)} y_i, m = 1, 2.

(3) Continue to call Steps (1) and (2) for the two sub-regions until the stopping condition is met. (4) Divide the input space into M regions R_1, R_2, ..., R_M and build the final decision tree f(x) = Σ_{m=1}^{M} c_m I(x ∈ R_m). Gradient boosting is an improvement of the boosting tree algorithm. A boosting tree is built iteratively: starting from an initial model f_0(x), at step m the residual r_{mi} = y_i - f_{m-1}(x_i) is computed for each sample, a regression tree h_m(x) is fitted to these residuals, and the model is updated as f_m(x) = f_{m-1}(x) + h_m(x); the final regression boosting tree is f_M(x) = Σ_{m=1}^{M} h_m(x). Based on the decision tree and gradient boosting, we can combine them to obtain the final GBDT. First, we initialize the weak learner, f_0(x) = argmin_c Σ_{i=1}^{N} L(y_i, c). For each sample i = 1, 2, ..., N, we calculate the negative gradient (residual):

r_{im} = -[∂L(y_i, f(x_i)) / ∂f(x_i)]_{f = f_{m-1}}.

We use the residual obtained in the previous step as the new true value of the sample and use (x_i, r_{im}), i = 1, ..., N, as the training data of the next tree to obtain the new regression tree f_m(x), whose leaf node regions are R_{jm}, j = 1, 2, ..., J, with J the number of leaf nodes. Calculate the best fit value for each leaf region, γ_{jm} = argmin_γ Σ_{x_i ∈ R_{jm}} L(y_i, f_{m-1}(x_i) + γ). Update the strong learner, f_m(x) = f_{m-1}(x) + Σ_{j=1}^{J} γ_{jm} I(x ∈ R_{jm}). Get the final learner, f(x) = f_M(x) = f_0(x) + Σ_{m=1}^{M} Σ_{j=1}^{J} γ_{jm} I(x ∈ R_{jm}).

RESULTS Since we obtained 435 genes known to be related to gastric cancer from DisGeNet and 896 genes that have strong interactions with them, the 435 genes were used as positive samples and the 896 genes were used as negative samples. We used these data to build the GBDT model to identify gastric cancer susceptibility genes. We applied 10-fold cross-validation to verify the accuracy of our model. The AUC (Area Under the ROC Curve) and AUPR (Area Under the Precision-Recall Curve) of our model are shown in Figures 1 and 2, respectively. The average AUC over the 10-fold cross-validation is 0.89 ± 0.008 and the average AUPR is 0.81 ± 0.006.
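For illustration, a minimal sketch of the training and evaluation step described above, using scikit-learn's GradientBoostingClassifier with 10-fold cross-validation scored by AUC and AUPR (average precision). The feature matrix and labels are random placeholders standing in for the RW-derived features of the 435 positive and 896 negative genes, and the hyper-parameters are illustrative defaults rather than tuned values.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder features: one row per gene (435 positives + 896 negatives);
# in the real pipeline each row is the gene's RW association profile.
rng = np.random.default_rng(0)
X = rng.normal(size=(1331, 100))
y = np.array([1] * 435 + [0] * 896)

gbdt = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)

# 10-fold cross-validated AUC and AUPR (average precision).
auc = cross_val_score(gbdt, X, y, cv=10, scoring="roc_auc")
aupr = cross_val_score(gbdt, X, y, cv=10, scoring="average_precision")
print(f"AUC  = {auc.mean():.2f} +/- {auc.std():.3f}")
print(f"AUPR = {aupr.mean():.2f} +/- {aupr.std():.3f}")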
Since the number of negative samples is significantly higher than the number of positive samples, to balance the training sample set we randomly selected 435 negative samples from the 896 genes each time and repeated the 10-fold cross-validation. In addition, we also compared our method with other methods, namely Support Vector Machine (SVM), XGBoost, AdaBoost, and Deep Neural Network (DNN). In total, we randomly sampled five negative sets. The performance of these methods is shown in Figures 3 and 4. As shown in Figures 3 and 4, the AUC and AUPR of GBDT are higher than those of the other methods, which demonstrates the superiority of our method.

CONCLUSION Through early detection, early diagnosis, and early treatment, the cure rate of patients with early gastric cancer can reach 85%; however, the 5-year survival rate of patients with advanced gastric cancer is less than 10%. At present, inhibitors targeting vascular endothelial growth factor (VEGF), epidermal growth factor (EGF), and tyrosine kinases have been successfully developed, showing significant curative effects on gastric cancer. This greatly encourages us to study the characteristic markers of recurrence or metastasis of gastric cancer from the perspective of genes. Few genes related to gastric cancer have been found in cohort studies and animal model experiments, and due to the cost, such methods cannot be applied at large scale. In this paper, we proposed a novel method to identify gastric cancer-related genes at large scale. Genes that interact more closely are more likely to be related to similar diseases. Based on this hypothesis, we considered using the gene interaction information to build a network and infer gastric cancer-related genes from this network. RW was applied to encode the features of the genes, and GBDT was implemented to identify gastric cancer-related genes. We verified our method with two kinds of 10-fold cross-validation experiments. Our method showed high accuracy in both experiments, indicating that it can be used to identify genes related to gastric cancer. The method proposed in this article will provide guidance for studying the genetic mechanism of gastric cancer and for its clinical treatment.

DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.

ETHICS STATEMENT Ethical review and approval were not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

AUTHOR CONTRIBUTIONS QC, JiZ, and JeZ designed the study. QC, JiZ, BB, and FZ interpreted the data and analyzed the results. All authors read and approved the final manuscript.

FUNDING Financial support comes from the National Natural Science Foundation of China (81371508, 81572985, and 31000471).
2022-01-13T14:20:58.845Z
2022-01-13T00:00:00.000
{ "year": 2022, "sha1": "c9ad06b253030d0c814beb1ef1ac72aeefcf0f28", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "c9ad06b253030d0c814beb1ef1ac72aeefcf0f28", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }