236362151
pes2o/s2orc
v3-fos-license
Application of Cellulose Derivatives in Mineral Processing Cellulose derivatives (CDs) have been recognized as anionic, water-soluble, non-toxic, biocompatible and biodegradable polysaccharides. CDs have been used as viscosity regulators, thickening agents, sizing agents, coating agents, emulsion stabilizers and electrode binders in various industries. These characteristic properties of CDs are associated with the hydroxyl and functionalized groups present in their structure. CDs offer significant advantages in various fields, including industrial applications such as mineral processing, pelletisation and oil drilling, owing to their non-toxic and selective properties. Moreover, CDs have been extensively used as depressants, dispersants and flocculants in the processing of various ores. During mineral processing operations such as the flotation of sulfide minerals, highly toxic inorganic species have been used as dispersants and depressants, which ultimately causes environmental toxicity. Therefore, there is a current need to introduce CDs as alternative non-toxic dispersants and flocculants. This chapter provides an overview of the application of CDs in mineral processing, including the structure and properties of the CDs commonly used in mineral processing. Introduction Cellulose has been recognized as the most abundant polymer on the planet, making it an important raw material for a variety of applications. Because of its potential use in the development of biofuels, cellulose has recently gained attention. However, cellulose's versatility has been demonstrated in a variety of applications. It can also be chemically modified to produce cellulose derivatives (CDs) [1]. The two main classes of cellulose derivatives are cellulose ethers and cellulose esters, which have different physicochemical and mechanical properties. Cellulose derivatives have been used in a wide range of applications, including particle dispersion, flocculation processes, surface treatment, and so on. Tablet binding, thickening, film-forming, water retention, adhesion, and suspending and emulsifying agents are some of the most common uses of cellulose derivatives in tablet and capsule formulations. Natural aggregates are still important in cement production, though high-purity sources are becoming harder to come by. CDs have been used to "inert" the aggregate toward the cement formulation, preventing clay minerals associated with the aggregates from adversely affecting the formulation through adsorption of plasticizers and the resulting alteration of properties [2]. The application of CDs in the upstream petroleum industry, such as exploration, drilling, development, and distribution, has recently sparked renewed interest. Adding CDs to fluids, in particular, can have important benefits for improved oil recovery and well drilling, such as changing fluid properties, altering rock wettability, advanced drag reduction, strengthening sand consolidation, minimizing interfacial friction, and increasing the mobility of capillary-trapped oil [3]. For the depression of copper minerals, inorganic modifiers such as sodium cyanide, sodium sulphide or hydrosulphide, ferrocyanides, and Nokes reagent are frequently used. These reagents are very reliable, but their use has recently raised environmental concerns. CD depressants have been investigated as possible alternatives to avoid this issue.
The use of polysaccharides, namely starch, dextrin, and carboxymethylcellulose (CMC), in sulphide mineral flotation has been identified [4][5][6][7][8][9][10][11]. A frother is produced by combining hydroxypropylmethylcellulose (HPMC) or hydroxyethyl methylcellulose (HMC) with at least one non-ionic organic surfactant or polyglycol ester. The new cellulose-based frothers can be used in mineral processing plants to allow larger amounts of minerals to be processed without requiring major changes to existing equipment. This book chapter describes the structure and properties of the CDs commonly used in mineral processing. Background Raw material demand has steadily increased on a global scale as a result of demographic and economic developments. If current trends in raw material use intensify, industrial technologies will be unable to meet this growing demand. As a result, it is necessary to highlight that raw material (CDs in the present case) production must be supported as a strategic necessity, requiring the development of new technologies that can help meet this upcoming raw material demand. An additional challenge for the mining industry is that, as the world's mineral reserves are exhausted and demand for metallic raw materials rises, it will have to process ever greater amounts of low-quality mined material to manufacture concentrates in quantities adequate to meet current and future demand. As a result of these requirements, minerals trapped in tailings ponds have started to receive interest as a potential source of raw materials, as many existing ore bodies approach depletion. Consequently, the mineral processing industry has become interested in finding new solutions for reprocessing tailings. It is worth remembering that the global amount of tailings is enormous, and if a suitable processing method could be developed, this could translate into a massive feed stream for the metals industry [11]. One of the most commonly used enrichment methods in mineral processing is froth flotation separation. Flotation separation is used in the processing of metals like copper, gold, and platinum to produce concentrates that can be refined economically. Efforts are currently being made to better understand the various phenomena involved, such as frothing and adsorption, and how to regulate them, but no genuinely innovative frother method has been proposed, as most studies are focusing on CDs [12,13]. As soon as synthetic polymers were introduced into mineral processing, combinations with cellulosic materials became available and the importance of CDs came to light. Selective depressants are essential components of any flotation reagent scheme that aims to separate various minerals selectively. Inorganic depressants have often been used. Many of these depressants, especially those used in differential sulphide flotation, are highly toxic and unsuitable for use in the environment. Sodium cyanide, sodium dichromate, sulfur dioxide, arsenic trioxide, and phosphorus pentasulfide are examples of such depressants. Since some of these inorganic depressants are reducing agents, they can be oxidized in aerated flotation pulps, so large reagent dosages are normally required. Polysaccharides, on the other hand, are non-toxic and biodegradable natural organic polymers. They are much less expensive and less prone to oxidation than inorganic depressants.
These characteristics not only make them effective flotation reagents, but they have also shown promise as selective depressants in a variety of differential mineral flotation systems. For nearly 70 years, polysaccharides and cellulose derivatives (CDs) have been used in the mineral industry as depressants for iron oxides, naturally hydrophobic minerals, and rock-forming gangue minerals. They have also been reported to be selective in sulphide mineral differential flotation [14]. Cellulose derivatives (CDs) Cellulose is a linear polymer made up of D-glucose monomers linked together by β-(1→4) linkages and arranged in repeating cellobiose units, each of which contains two anhydroglucose units (Figure 1). Cellulose has a long molecular chain, and the three hydroxyl groups on each anhydroglucose unit have a high hydrogen-bonding ability. The hydrogen atoms of the hydroxyl groups in cellulose's anhydroglucose units are replaced with alkyl or substituted alkyl groups to create cellulose ethers, which have a high molecular weight. The molecular weight, chemical structure, and distribution of substituent groups, as well as the degree of substitution and molar substitution (where applicable), determine the commercially important properties of cellulose ethers [1]. Solubility, viscosity in solution, surface activity, thermoplastic film characteristics, and resilience against biodegradation, heat, hydrolysis, and oxidation are all examples of these properties. The viscosity of cellulose ether solutions is specifically correlated with their molecular weight. Methylcellulose (MC), ethyl cellulose (EC), hydroxyethyl cellulose (HEC), hydroxypropyl cellulose (HPC), hydroxypropylmethylcellulose (HPMC), carboxymethyl cellulose (CMC), and sodium carboxymethyl cellulose (Na-CMC) have been identified as the most commonly used cellulose ethers. In addition, the CDs HPMC, HPC, microcrystalline cellulose (MCC), silicified microcrystalline cellulose (SMCC), HEC, sodium carboxymethylcellulose (SCMC), ethylcellulose (EC), methylcellulose (MC), oxycellulose (OC), etc. have also been used in allied industries (Table 1) [14,15]. Carboxymethylcellulose (CMC) Carboxymethyl cellulose (CMC) is a cellulose derivative in which some of the hydroxyl groups of the cellulose backbone carry carboxymethyl substituents (-CH2-COOH) (Figure 2). It is produced by the alkali-catalyzed reaction of cellulose with chloroacetic acid. The polar carboxyl groups render the cellulose soluble and chemically reactive. The degree of substitution of the cellulose structure (i.e., how many of the hydroxyl groups have taken part in the substitution reaction), as well as the chain length of the cellulose backbone and the degree of clustering of the carboxymethyl substituents, determines the functional properties of CMC. CMC is also used in the oil drilling industry as a viscosity modifier and water-retaining agent in drilling mud. CMC has been used to make poly-anionic cellulose (PAC), which is often used in oilfield operations. Some researchers have performed surface modification and used surfactants to adjust the surface tension of carbon fibers to improve dispersion. The wettability of carbon fibers by water was effectively improved by ozone surface treatment, which increases the fiber-matrix bond [15]. The silane treatment of carbon fibers enhances the mechanical properties of carbon fiber reinforced cement, according to Xu and Chung [16]. Wang et al.
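As a side note, the viscosity-molecular weight correlation mentioned above is often summarized by the Mark-Houwink relation. This is a general polymer-science expression rather than an equation given in this chapter, and the constants must be taken from the literature for each specific cellulose ether, solvent and temperature:

$$[\eta] = K \, M^{a}$$

where $[\eta]$ is the intrinsic viscosity of the solution, $M$ is the (viscosity-average) molecular weight of the polymer, and $K$ and $a$ are empirical constants that depend on the polymer-solvent system.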
[17] used hydroxyethyl cellulose and an ultrasonic wave to support fiber dispersion in carbon fiber-reinforced cement-based composites. CMC was used as the dispersing agent; it can improve carbon fiber dispersion because, as a dispersant, it has both hydrophobic and hydrophilic sides. The effects of CMC concentration and solution pH on carbon fiber dispersion were investigated [18]. Sodium carboxymethylcellulose (Na-CMC) Sodium carboxymethyl cellulose (Na-CMC) is one of the most important cellulose ether products; cellulose ethers are cellulose derivatives with an ether structure produced by modification of natural cellulose (Figure 3). Since the acid form of CMC has a low water solubility, it is generally supplied as sodium carboxymethylcellulose, which is widely used in many industries and is commonly referred to as cellulose gum [19,20]. Na-CMC can be supplied stably and in large quantities, and its technical and cost efficiency is uniform and robust compared with botanical natural polysaccharides such as tragacanth, arabic, and guar gums, or microbiological polysaccharides such as xanthan gum, which perform the same functions. Na-CMC can thus be recommended as a suitable additive for coal-water slurries (CWS) as an energy supply source, offering low cost, stability, uniform properties, and broad supply capability [21]. A bituminous coal sample from Zonguldak (Thermal Code No. 434) was also used in addition to the brown coals. In all of the samples, Na-CMC was used as a stabilizer in the preparation of CWS [22]. Besides this, Na-CMC is used to relieve dry, irritated eyes. Common causes of dry eyes include wind, sun, heating/air conditioning, computer use/reading, and certain medications. Electronics, pesticides, leather, plastics, printing, ceramics, and the daily-use chemical industry are only a few of the fields where Na-CMC has been used. Hydroxyethyl cellulose (HEC) Hydroxyethylcellulose (HEC) (Figure 4) is a polysaccharide derivative with gel-thickening, emulsifying, bubble-forming, water-retaining and stabilizing properties. It is a white, yellowish-white or grayish-white, odorless and tasteless, hygroscopic powder [23,24]. Hydroxyethylcellulose (HEC) and its hydrophobically modified derivatives have been widely used in many industrial areas such as pharmaceuticals, cosmetics, textiles, paints and the mineral industry, including in oil extraction, coating, medication, food, and polymer polymerization. It is non-toxic and inexpensive. Despite this, only a few reports on HEC as a sulphide depressant have been published. The function of hydroxyethyl cellulose (HEC) in the flotation separation of chalcopyrite and galena has been investigated and explained. In the presence of H2O2, a small amount of HEC can depress galena flotation but has only a negligible effect on chalcopyrite flotation. HEC is adsorbed on galena surfaces primarily through chemical reactions with oxidation products formed on the surface, and the addition of H2O2 can significantly improve HEC adsorption by producing further oxidation products. As a result, HEC can be used as a galena depressant in the flotation separation of chalcopyrite and galena, and this provides a method for separating copper/lead sulphide minerals. Furthermore, HEC can be used as a stabilizer for beer foam [25][26][27][28][29][30].
Conclusions Mining is one of humanity's oldest industries, and it has aided in the advancement of the technology that has brought us our modern products. The rising demand for mineral-based technological goods, combined with the increasingly difficult and complex methods of obtaining raw materials, necessitates more efficient separation and recycling processes. At the same time, pressures on mineral processing in terms of sustainability and environmental friendliness are making it more difficult to produce minerals, a scarce resource, in an economically viable way. Therefore, cellulose derivatives (CDs), high-molecular-weight condensation polymers made up of basic monosaccharide sugar units, have been applied in mineral processing. Although many different types of polysaccharides and their derivatives exist in nature, several have been widely used in the mineral industry, especially in flotation. The chapter presented here is a small contribution to the growing understanding of polymer/CD applications. We thus feel that this chapter is a valuable contribution to the field of mineral processing and its allied areas, with a brief description of CDs. Hence, in this chapter, we have summarized the current status of the important cellulose derivatives employed in mineral processing. © 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
2021-07-27T00:04:39.760Z
2021-06-01T00:00:00.000
{ "year": 2021, "sha1": "9b8457aeede9291224eae3bf48a0f03bac24dbfe", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/75913", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "68eeb66efcbab797635cfb6ea55636a673c20f4e", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
206741367
pes2o/s2orc
v3-fos-license
Successful Memory Response following a Booster Dose with a Virosome-Formulated Hepatitis A Vaccine Delayed Up to 11 Years ABSTRACT Boosting adult travelers with the virosome-formulated, aluminum-free hepatitis A vaccine Epaxal up to 128 months after a single primary dose confers full protection against hepatitis A, even in travelers aged 50 years and above. Delaying the booster dose did not influence the immune memory response to Epaxal. Adults traveling from regions where the prevalence of hepatitis A is low to regions where it is high are at risk of acquiring symptomatic hepatitis A virus (HAV) infection (1). Long-lasting protection against HAV is, by recommendation, achieved with administration of 2 vaccine doses 6 to 18 months apart (12). In practice, many travelers do not return to see their doctors within the recommended time for the second (booster) dose, and therefore, defining the maximum interval between the two doses still conveying long-term seroprotection is of importance. Several studies using an aluminum-adsorbed HAV vaccine (Havrix) and comparing various lengths of delay of the second dose (≥24 months [9] or up to 8 years [8] after a single primary dose) have shown comparable memory responses irrespective of the interval between the two doses. A single dose of Epaxal, the only aluminum-free virosomal HAV vaccine currently available, is highly immunogenic (3). Two doses of Epaxal administered 12 months apart give adults a real-time protection of at least 9 to 11 years, which is predicted to last for at least 30 years in ≥95% of individuals (4). A study in 1999 showed that Epaxal is highly immunogenic when a booster is given 18 to 54 months after the primary dose, indicating that a delay in the administration of the booster of up to 54 months does not lead to loss of immunogenicity (2). A subsequent study in 2006 investigated the immunogenicity of an Epaxal booster administered ≥5 years after the primary immunization. This report presents the results from that 2006 study but as a combined analysis of both the 1999 and 2006 studies and evaluates the level of memory response to Epaxal when given as a booster dose 9 to 128 months (0.8 to 10.7 years) after the primary dose. Previously unpublished results from the 1999 study evaluating the postbooster immune response in a subgroup 9 to 17 months after the primary immunization are also included. Both studies were noncomparative, open-label, single-center studies and were performed at the Swiss Tropical and Public Health Institute (STPH) in Basel, Switzerland; they were approved by the Ethics Committee of Basel EKBB (Basel, Switzerland) and conducted in compliance with the Declaration of Helsinki. All subjects provided informed consent before study entry. The study population included subjects who had received Epaxal primary immunization at the STPH travel clinic but had not received a booster dose for ≥9 months (in the case of the 1999 study) or ≥5 years (in the case of the 2006 study). The exclusion criteria were as previously described (2). All participants received a booster dose of 0.5 ml Epaxal (containing ≥24 IU of HAV antigen; Crucell Switzerland AG) supplied in ready-to-use syringes and given intramuscularly into the deltoid muscle. HAV antibody concentrations (mIU/ml) were measured in parallel in paired serum samples that were obtained for both studies at baseline and 4 to 7 weeks after the booster dose, using an enzyme immunoassay, AxSYM HAVAB 2.0 (Abbott).
Seroprotection cutoffs of ≥20 mIU/ml and ≥10 mIU/ml are presented, both of which are accepted as HAV protection cutoffs (6). Additionally, the 6-mIU/ml cutoff is presented as the lowest measurable concentration of specific anti-HAV antibodies by this sensitive assay, validated at the Department of Virology, Max von Pettenkofer Institute, Ludwig Maximilians University, Munich, Germany. Descriptive statistics were used for data analysis. Seroprotection rates and geometric mean concentrations (GMCs) of HAV antibodies were evaluated by booster interval (9 to 29 months, 30 to 41 months, 42 to 54 months, and 98 to 128 months), by the age of the subjects (<50 years and ≥50 years), and by their gender. The time interval effect on the HAV antibody response and the pre- versus postbooster HAV antibody concentration correlations were calculated using logistic regression analysis. Overall, 130 subjects were analyzed, i.e., 104 from the 1999 study (booster interval, 9 to 54 months), whose samples were still available for retesting, and 26 from the 2006 study (booster interval, 98 to 128 months). The mean age was 39.3 years (range, 20.5 to 73.0 years) for the whole group, 33.4 years (range, 20.5 to 48.0 years) for the subgroup of <50-year-old subjects (n = 100), and 59.1 years (range, 50.0 to 73.0 years) for the subgroup of ≥50-year-old subjects (n = 30). There were more females (n = 72) than males (n = 58). The proportions of seroprotected subjects across the booster time intervals and according to age group and gender are presented in Table 1. The majority (73.8%) of subjects tested 9 to 128 months after receiving the primary dose of Epaxal still had measurable anti-HAV antibody concentrations (≥6 mIU/ml is the lower limit of detection), and 59.2% still had protective levels of anti-HAV antibodies (≥10 mIU/ml). The prebooster seroprotection rates were, for each cutoff level, relatively similar across the different time intervals, indicating a remarkable long-term persistence of antibody levels up to 128 months after the priming immunization. A 100% postbooster seroprotection level was achieved in all interval groups. Prebooster seroprotection rates were lower in the older (≥50) than the younger (<50) age group (47% versus 63% at ≥10 mIU/ml), but both age groups achieved 100% postbooster seroprotection in all time intervals. Females, as expected and previously reported (7,9), had up to three times higher antibody concentrations than males (data not shown). Females also had higher prebooster seroprotection rates than males (74% versus 41% at ≥10 mIU/ml), but both groups achieved 100% postbooster seroprotection in all time intervals (Table 1). The GMC increased from 17 to 1,557 mIU/ml for the total population (Table 2). There were no significant differences in postbooster anti-HAV GMCs between older and younger subjects, and the time intervals did not influence the memory response in either of the two age groups (Table 2). Logistic regression analysis revealed that there were no statistically significant differences in antibody concentrations between the 4 booster interval subgroups (P = 0.1381) (Fig. 1; Table 2) and that low prebooster antibody concentrations correlated significantly with lower postbooster values (r = 0.59; P < 0.0001) (data not shown). The present evaluation of 130 travelers aged 21 to 73 years demonstrates that a delay of the booster dose of up to 128 months after receipt of the primary vaccination does not influence the memory response to Epaxal.
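For readers unfamiliar with the summary statistics reported here, the sketch below shows how seroprotection rates and geometric mean concentrations (GMCs) of this kind can be computed from a list of antibody concentrations. The sample values are illustrative placeholders, not data from this study.

```python
import math

def seroprotection_rate(concentrations, cutoff):
    """Percentage of subjects with an antibody concentration at or above the cutoff (mIU/ml)."""
    protected = sum(1 for c in concentrations if c >= cutoff)
    return 100.0 * protected / len(concentrations)

def geometric_mean(concentrations):
    """Geometric mean concentration (GMC): exponential of the mean of the log concentrations."""
    return math.exp(sum(math.log(c) for c in concentrations) / len(concentrations))

# Illustrative prebooster values only -- not actual study data.
prebooster = [6.0, 12.0, 25.0, 40.0, 8.0, 55.0, 18.0, 30.0]
print(seroprotection_rate(prebooster, 10))   # share of subjects >= 10 mIU/ml
print(seroprotection_rate(prebooster, 20))   # share of subjects >= 20 mIU/ml
print(round(geometric_mean(prebooster), 1))  # GMC in mIU/ml
```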
All subjects, even those ≥50 years old with lower prebooster anti-HAV antibody concentrations than the younger subjects, achieved 100% postbooster seroprotection irrespective of the time interval between the primary and booster vaccination. The proportional postbooster increases in GMCs were comparable between older and younger subjects, and high and nearly identical GMCs were obtained in all groups, even in the group of ≥50-year-old subjects, which had the highest proportion of subjects with no-longer-detectable specific anti-HAV antibodies (<6 mIU/ml) prior to the booster. To our knowledge, the 2006 study reports on the largest group of subjects published to date who had received a booster dose after a considerably long interval (8.2 to 10.7 years). The observation that the time interval between primary and booster dose does not influence the immunogenicity of the booster dose confirms the published findings of the 1999 study (2) and is in line with the findings of other studies using an aluminum-adsorbed hepatitis A vaccine (8,9). This antibody memory recall response indicates that the first vaccine dose elicits an efficient priming of the immune system via an early proliferative T-cell response, known from natural HAV infection (13) and observed following immunization with an aluminum-adsorbed hepatitis A vaccine (5). The results of the present study confirm the observation from other studies that although lower prebooster seroprotection rates are found in older subjects (>40 years old) than in younger subjects, the postbooster immune responses are comparable between the different age groups (7,10,11). The present findings are of special importance for clinical practice, as travelers frequently do not return for the scheduled hepatitis A booster vaccination after 6 to 18 months. The fact that a delayed booster dose does not impede the memory response to Epaxal offers more flexibility for hepatitis A vaccination schedules for travelers.
2018-04-03T04:59:38.638Z
2011-03-16T00:00:00.000
{ "year": 2011, "sha1": "0c6f4b5444fe309f5a29a5ee8d2cf7686c3ecdd3", "oa_license": null, "oa_url": "https://cvi.asm.org/content/cdli/18/5/885.full.pdf", "oa_status": "GOLD", "pdf_src": "Highwire", "pdf_hash": "c3b7413601bb53e678b2ba0a56a5f918e1ddd841", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
233907475
pes2o/s2orc
v3-fos-license
The Effect of Color and Positional Noise on Reading Performance in Human Vision BACKGROUND: Reading can be described as a complex cognitive process of decrypting signs to create meaning. Ultimately, it is a means of language acquisition, communication, and sharing information and ideas. Changing lighting and color are known to improve visual comfort and the perceptual difficulties that affect reading for those with poor vision. AIM: This study aims to investigate the effect of changing the wavelengths and different levels of positional noise on reading performance for participants with best-corrected distant visual acuity (BCVA) of 6/6 or better. METHODOLOGY: Twenty English speakers with BCVA 6/6 or better were asked to read words presented in a printed format. The stimuli were black print words in a horizontal arrangement on matte white card. They were degraded using positional noise produced by random vertical displacements of the letter position below or above the horizontal line at three levels. RESULTS: Introducing positional noise affected word recognition differently with different wavelengths. The role of short wavelengths in enhancing orthographic reading and word recognition is clear: they reduce the effects of positional noise. The error rate and duration time show different effects with different wavelengths, even when positional noise is introduced. CONCLUSION: The reading rate is not affected by changing the wavelength of the light. However, the mean differences in wpm were affected by changing the wavelengths. Also, introducing positional noise affects word recognition differently with different wavelengths. Introduction Reading can be described as a complex cognitive process of decrypting signs to create meaning. It is also a means of language acquisition, communication, and sharing information and ideas. Therefore, reading is a basic requirement for an advanced society. There are three basic processes underlying reading: sight perception, printed word recognition, and language comprehension [1]. Word recognition is considered a fundamental literacy skill that enables access to and processing of written language, as well as influencing reading performance. Word recognition can be described as the ability to accurately and automatically recognize words, with and without semantic context. It is a hallmark of skilled readers' performance, or the ability to accurately identify printed words [2]. Changing lighting and color are known to improve visual comfort and the perceptual difficulties that affect reading for those with poor vision [3], [4]. The role of colors in reading has been researched for many decades. It was mentioned back in 1958 that a student with reading difficulties was unable to recognize words printed on white paper but was able to recognize words printed on yellow paper [3]. It has been argued that colored overlays applied above written texts positively influence both reading fluency and reading speed [3]. Colored lenses and lighting are used to maximize visual efficiency in a range of patient groups such as those with reading difficulties. However, we do not know which wavelengths or colors can improve reading performance.
Introducing noise to measure human visual performance may provide important insights into the neural mechanisms (cortical processes) and computations used to solve a visual task [5]. There are several factors that affect reading performance by creating visual noise. One of the main factors is letter spacing [6], [7], [8]. Manipulating the letter spacing can reduce the reading rate. According to Chung (2000), the reading rate increased with letter spacing up to a critical letter spacing, and then it either remained constant at the same reading speed or decreased slightly for larger letter spacing. The value of the critical letter spacing was very close to the standard letter spacing, and this held for all eccentricities (peripheral vision) and for both print sizes [6]. In human eyes, photoreceptors are the first link to visual perception, functioning as photon detectors. There are two types of photoreceptors: rods and cones. Cones activate in the range of mid to bright light intensity. Cones are responsible for color vision, as subtypes of cones are maximally sensitive to different wavelengths of light [9], [10]. There are three types of cones according to the wavelength of peak absorption: short wavelength-sensitive cones (S-cones, λmax ≈ 420 nm), which account for approximately 8-10% of the total number of cones; middle wavelength-sensitive cones (M-cones, λmax ≈ 530 nm); and long wavelength-sensitive cones (L-cones, λmax ≈ 560 nm) [11], [12]. This study investigated the effect of changing the wavelength (and thereby the types of cones stimulated, by choosing three different wavelengths: short, medium, and long) together with different levels of positional noise on reading performance in participants with normal vision. It also studied how this affects orthographic reading and the word recognition of real words. We measured the reading speed, the duration time for reading, and the error percentage when changing the wavelength and level of positional noise. Methodology An interventional cross-sectional study included 20 English-speaking participants (native and non-native) with best-corrected distant visual acuity (BCVA) of 6/6 or better (13 males and 7 females; age range 18-38 years, mean age 28.9 years). The stimuli were presented in printed format. Stimuli were black print words in a horizontal arrangement on matte white card. The text samples contained unrelated words of 3, 4, 5, and 6 characters, presented in 9 lines using Courier monospaced font, using black words on a white background. The distance between two adjacent words was two character widths, and the interline distance was five character heights. A simple random sample was selected from a set of the 560 most commonly used words in the English language [13] and was not repeated within the same trial. The viewing distance was 40 cm (Figure 1). The words had an optimal font size of 12 pt. (angular character size of 0.3 deg., defined as center-to-center spacing of horizontally adjacent characters) with contrast (>90%). First, words were degraded using positional noise produced by random vertical displacements of the letter position below or above the horizontal line at three levels. Each vertical letter position was sampled from a Gaussian distribution with zero mean and variance of 0.00 (N0), 0.30 (N1), or 0.60 (N2) × character height² (Figure 2). Second, the wavelengths were changed from short to long stimuli.
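The positional-noise manipulation described above lends itself to a short sketch: each letter is shifted vertically by a value drawn from a zero-mean Gaussian whose variance is a multiple of the squared character height. The function below is an illustrative reconstruction of that procedure under those stated assumptions, not the authors' actual stimulus-generation code.

```python
import random

def letter_offsets(word, char_height, variance_factor):
    """Vertical offset for each letter, drawn from a zero-mean Gaussian.

    variance_factor is 0.00 (N0), 0.30 (N1) or 0.60 (N2); the variance is
    variance_factor * char_height**2, so the standard deviation is its square root.
    """
    sigma = (variance_factor * char_height ** 2) ** 0.5
    return [random.gauss(0.0, sigma) for _ in word]

# Example: offsets (in the same units as char_height) for one word at noise level N1.
print(letter_offsets("reading", char_height=1.0, variance_factor=0.30))
```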
We used blue lighting (454 nm, short), green lighting (514 nm, mid), red lighting (620 nm, long), and white lighting as everyday lighting, with constant illumination for all four different wavelengths (30 lux). The wavelengths were measured and controlled using a UPRtek spectrometer (MK350N Plus). The range of wavelengths measurable by the spectrometer (MK350N) is between 380 nm and 780 nm. All participants gave informed consent to take part in this study, which was approved by the Glasgow Caledonian University Ethics Committee. The participants were instructed to read the text samples aloud as fast as possible. They were given a brief explanation of all the experimental conditions. Tests with real words were presented in a random order. The participants were video recorded while they were reading out loud. The total number of words read and the number of words read incorrectly were counted. The participants were asked to read under three different colored LED lighting conditions: short (blue), mid (green), and long (red) wavelength. White LED light (combining the same LED sources) was also used to more closely approximate normal reading lighting conditions. They were asked to read three different texts of real words with N0, N1, and N2. The participants were asked to read each text for 1 min under each wavelength. Results The Shapiro-Wilk and D'Agostino tests indicated a normal distribution of the data (p > 0.05), which satisfied the most important condition for using the parametric tests (t and F) that were applied in the GraphPad Prism program to analyze the data. We grouped the average reading rates for all the participants under the four different wavelengths and colors and calculated the average word-per-minute rate. The reading rate did not vary with the changes in wavelength, except that with the long wavelength (red) it was slower than with the other wavelengths, but the difference was not statistically significant (one-way ANOVA, p = 0.628, F = 0.69, and R² = 0.026) (Figure 3). We therefore calculated the change in reading rate for each individual to see what effect changing the wavelength had on their reading rate. We then calculated the difference for each wavelength (long, mid, and short) from the white wavelength (considered as the mean). The results showed that the mean difference was reduced when the wavelength changed from long (red) to mid (green) or short (blue): it was reduced by an average of 14 words per minute. This change was statistically significant (one-way ANOVA, p < 0.0001, F = 4.2, and R² = 0.53). However, the mean difference was not significant between the mid and short wavelengths, as it was reduced by an average of 1 word per minute. When introducing the positional noise, the average reading rate was recorded for all the participants under the four different colors and the three different levels of positional noise by calculating the average word-per-minute rate (Figure 3). The statistical analysis showed that the reading rate was not significantly reduced in any wavelength when introducing positional noise between N0 and N1 (one-way ANOVA, p = 0.147). However, the reading rate was significantly reduced when introducing positional noise between N0 and N2 (one-way ANOVA, p < 0.0001, F = 8.3, and R² = 0.28), but the changes with the short wavelength and white color were not significant (Figure 3).
Furthermore, the statistical analysis suggested that the mean reduction in reading rate relative to white light when the wavelength changed from short to long or mid was significant, reducing by an average of 22 words per minute (one-way ANOVA, p < 0.0001, F = 15, and R² = 0.35). However, there was no significance in the mean difference between the long and mid wavelengths with positional noise, as the reading rate was reduced by an average of 3.3 words per minute (p = 0.714) (Figure 5). Linear regression tests were conducted on each color with and without positional noise. The reading rate as a function of positional noise was fitted with a linear function. The reading performance was reduced when introducing the positional noise in all wavelengths (Figure 6). However, the initial reading rate (reading at the 0 level of positional noise) was not similar for the different wavelengths. The initial reading rates with the short and mid wavelengths and white color were almost the same, but the rate with the long wavelength was slower compared with the other colors. Moreover, a comparison of regressions in the different wavelengths revealed that the reduction in the reading rate was not similar for all the participants as the noise increased. In the short wavelength, the gradient was the lowest (the slope was flatter) compared with the other wavelengths, and the mid wavelength had the highest gradient (the slope was steeper). The statistical analysis showed a significant reading reduction with and without positional noise in the mid (green) wavelength and white light (p = 0.0085, F = 460, and R² = 0.99, which indicate 95% confidence intervals). However, the statistical analysis showed that there was no significance in the reading reduction for the long (red) and short (blue) wavelengths (p = 0.169, F = 13.3, and R² = 0.93, which indicate 95% confidence intervals). A comparison of regressions in the different wavelengths revealed that the reduction in reading rate was not similar for all the participants as the noise increased (Figure 6). The error rate was calculated as a percentage for each individual across the wavelength conditions. The error rate was calculated by dividing the number of incorrectly read words by the number of words read and multiplying by 100. The error rate was similar across all conditions of different wavelengths for the same level of noise, and statistically the differences were not significant. Furthermore, the error rate increased when changing the noise level, and the statistical analysis revealed no significant difference between N0 and N1 or N0 and N2 (one-way ANOVA, p = 0.185, F = 4, and R² = 0.16) (Figure 7). Furthermore, the average time for word fixation was unchanged across the light levels with the same level of noise (Figure 8). The average time increased when the noise level was changed for all levels of light, but the statistical analysis showed no significant difference between N0 and N1 for all wavelengths. However, the time for word fixation increased, and there was a significant difference between N0 and N2 for the long and mid wavelengths (one-way ANOVA, p < 0.0001, F = 7.1, and R² = 0.25). However, the short and white wavelengths were not significant for the change between N0 and N2 (p = 0.404). Discussion This study has shown that reading speed was not affected by changing the wavelengths or color for the short and mid wavelengths and white color.
The reading rate under a long wavelength (red) was the slowest, but the results showed that the difference was not significant compared with the other light conditions for readers with BCVA of 6/6 or better. This differs from what we expected based on the results of the previous studies, which reported the effects of overlay colors on reading performance [4], [14], [15]. However, the mean differences in wpm were reduced from the long to the mid and short wavelengths, but they were minimized between the mid and short wavelengths, which is also different from what we expected. This may be because of the role of magnocellular processing in reading performance, which is reduced under red light (long wavelength) compared with normal (white) light, impairing reading [16]. When introducing positional noise, the results showed that reading speed was not significantly affected by changing the wavelengths or by introducing positional noise (N1). However, the introduction of positional noise (N2) significantly reduced reading speed under the long and mid wavelengths. These results were expected because we found the same in a previous study with changing levels of noise. Furthermore, these results confirm that the color or wavelength of light affects reading performance and word recognition [4], [15]. This means that the different wavelengths and colors, especially the short wavelength and white color, have different cortical effects, reducing the effects of positional noise on orthographic reading and word recognition. Moreover, binocular vision may reduce the noise effects or enhance orthographic reading, as has been shown by Murav'eva et al. [17], but this enhancement is not clear with different wavelengths or colors. Therefore, changing the wavelength and color produces cortical effects on word recognition and reading performance. The mean differences in reading rate were reduced from short to long and short to mid wavelengths. However, positional noise had no effect with a short wavelength, which is different from what we expected. The magnocellular system is also believed to be enhanced by blue light, and it allows rapid word identification, known as orthographic reading [16]. In addition, previous studies have reported that koniocellular layers transport short wavelength-sensitive cone signals to cytochrome-oxidase blobs in V1 in the cortex [18], [19]. The results provide evidence to support the belief that short wavelength (blue light) background colors have an impact on word recognition and reading performance by reducing the effects of positional noise. In this study, the results showed that the reading rate decreased with the green and white wavelengths as a linear function of positional noise level. However, the reading rate function was not obviously different between the blue and red wavelengths, although the red wavelength produced the slowest reading rates compared with the other light conditions as a linear function of positional noise level. These results were unexpected. In the mid wavelength, the reading reduction seems to be a normal reduction because there is no factor affecting the reduction under the mid wavelength (green light). In the short wavelength, it is presumably because of the role of the M and K pathways, enhancing orthographic reading and word recognition by reducing the effect of positional noise [16], [19].
For long wavelengths, we would also have expected a significant reduction in reading rate because of the role of the M pathway function and positional noise in impairing word recognition and reading performance [16]. This study demonstrated that the detrimental effects of positional noise on reading rate were less evident when the words were viewed in blue wavelength or short wavelength conditions. One possible explanation for this may be that only about 10% of cones process blue light [11], [12]. This may reduce the noise effects because processing through a small number of cones reduces the amount of internal noise generated by the positional noise within the visual system. Alternatively, the blue light may enhance the orthographic ability of the reader by enhancing the function of the magnocellular layers (M pathway) and koniocellular layers (K pathway), enabling more rapid identification of the spatially disrupted words [12], [19]. Breitmeyer and Breier (1994) reported that the reaction time for the M pathway in response to a stimulus was slower with a red background and faster with blue only for the target [20]. This means that the cortical processing is different with different wavelengths, which leads to effects on word recognition and reading performance. Furthermore, this is evidence that eye movement is unlikely to be a factor that affects word recognition [21], [22], [23]. This study showed that the error rate did not vary across the wavelengths when the level of positional noise was zero (N0). Furthermore, when changing the level of noise (N1 and N2), the error rate was not obviously different for any wavelength. This was unexpected. It may be because in this study we used simple, short words, and also single words rather than sentences, which enhances orthographic reading and word recognition [24]. The duration time for reading a word (fixation time) was unchanged across the different wavelengths with the same level of noise. However, the average duration time increased when changing the noise level for all levels of light, and the statistical analysis showed no obvious differences between N0 and N1, and obvious differences between N0 and N2. However, fixation time was not significantly different between N0 and N2 under the short wavelength, which is what we expected. This is presumably because of the role of the M and K pathways, enhancing orthographic reading and word recognition by reducing the effect of positional noise [16], [19]. A limitation of the current study is mainly that the lighting was slightly higher than the threshold lighting for reading (25 lux); further cohort studies with more participants need to be conducted. Conclusion The reading rate is not affected by changing the wavelength of the light. However, the mean differences in wpm were affected by changing the wavelengths. Furthermore, introducing positional noise affects word recognition differently with different wavelengths. The role of short wavelengths in enhancing orthographic reading and word recognition is clear: they reduce the effects of positional noise. The error rate and duration time show different effects with different wavelengths, even when positional noise is introduced. How this may be related to the underlying reading processes remains to be investigated.
2021-05-08T00:03:40.513Z
2021-02-18T00:00:00.000
{ "year": 2021, "sha1": "79a0f8ede45fc1c8ef6f2565c1d13d19dfd7f26c", "oa_license": "CCBYNC", "oa_url": "https://oamjms.eu/index.php/mjms/article/download/5729/5414", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "134db4ab35e719b9a25d7cf2deb27830d05c117d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261648944
pes2o/s2orc
v3-fos-license
Primary pulmonary Hodgkin’s lymphoma mimicking granulomatosis with polyangiitis – a case report of diagnostic and therapeutic dilemmas Primary pulmonary Hodgkin’s lymphoma (PPHL) is a rare subtype of lymphoma that comprises a small percentage of primary pulmonary lymphomas. Due to its rarity and nonspecific symptoms, PPHL often presents diagnostic challenges. This case report presents a unique case of PPHL mimicking granulomatosis with polyangiitis, emphasizing the difficulties encountered during the diagnostic process. A 53-year-old female presented with vague symptoms including weakness, oedema, dry cough, and nasal cavity ulceration. Laboratory investigations revealed elevated C-reactive protein levels, a white blood cell count with neutrophilia, and lymphopaenia. Initial treatment with oral corticosteroids for suspected polyangiitis yielded no response. The patient subsequently developed a low-grade fever and pruritic erythematous rash. Diagnostic procedures, including bronchial brush biopsy, bronchial washing, mediastinal lymph node biopsy, nasal cavity ulceration biopsy, and initial lung biopsy, were inconclusive and resulted in exclusion of granulomatosis with polyangiitis. A subsequent computed tomography scan indicated disease progression in the left lung. A lung biopsy revealed fibrotic tissue with nodules containing Hodgkin-Reed-Sternberg cells, leading to the final diagnosis of classic Hodgkin lymphoma, nodular sclerosis subtype. Positron emission tomography scan findings confirmed PPHL. The patient received multiple chemotherapeutic regimens, with brentuximab vedotin demonstrating efficacy as the sole effective treatment. This exceptional case of PPHL underscores the extensive diagnostic and therapeutic workup involving a multidisciplinary team of clinicians, radiologists, and pathologists. Increased awareness of PPHL and its distinctive features will aid in the diagnosis of similar cases in the future, benefitting both clinicians and pathologists. Introduction Lung involvement in Hodgkin's lymphoma (HL) occurs in approximately 15-40% of HL cases [1]. However, primary pulmonary Hodgkin's lymphoma (PPHL) is an exceptionally rare diagnosis, with fewer than a hundred cases reported between 1927 and 2006. Primary pulmonary Hodgkin lymphoma represents less than 0.5% of primary lung malignancies and less than 1% of all pulmonary lymphomas [1][2][3][4]. Primary pulmonary Hodgkin lymphoma is defined as histologically confirmed HL localized in the lung, with or without hilar or mediastinal involvement [5,6]. Due to its rarity and nonspecific symptoms, PPHL is rarely considered in the initial differential diagnosis of lung diseases. In this case report, we present the perplexing and occult presentation of PPHL in a 53-year-old patient, which posed numerous challenges in terms of diagnosis and treatment.
Case report A 53-year-old woman was admitted to the hospital with weakness, weight loss, dry cough, and dyspnoea that had been progressively worsening for a few weeks. Clinical examination revealed nasal cavity ulceration. Blood tests showed elevated levels of C-reactive protein (CRP) (33.2 mg/l) and white blood cells (21.5 × 10³/μl) with neutrophilia (17.8 × 10³/μl) and lymphopaenia. Radiological findings on computed tomography (CT) revealed infiltrative changes within the third segment and lingula of the left lung, along with disseminated nodular changes. The presence of a lung mass raised suspicion of carcinoma, leading to several diagnostic procedures, including bronchial brush biopsy, mediastinal lymph node biopsy, and bronchoalveolar lavage. However, none of them showed the presence of atypical cells. Nasal cavity ulceration, elevated CRP, and lung infiltrative changes led to suspicion of granulomatosis with polyangiitis (GPA). Therefore, biopsies of the nasal cavity and lung were performed. However, both tests yielded ambiguous results with equivocal features of GPA. Despite not having a definitive diagnosis of granulomatosis with polyangiitis, it was decided to initiate a course of steroids (60 mg of Encorton per day). However, the therapy did not result in clinical improvement, and the patient developed a low-grade fever and an itchy erythematous rash. Moreover, lung lesions demonstrated radiological progression, prompting a repeat diagnostic evaluation of the pulmonary changes. Video-assisted thoracoscopic surgery was performed. Histopathological analysis showed lung tissue with disrupted histoarchitecture, fibrosis, and lymphoid infiltration forming nodules, without features of phlegmonous inflammation or granulomatous vasculitis, but with the presence of dispersed larger cells resembling Hodgkin's and Reed-Sternberg-like cells (HR-S) (Fig. 1). A complex immunophenotyping was performed to differentiate PPHL from other non-Hodgkin B-cell lymphomas, particularly lymphomatoid granulomatosis (Fig. 2). Immunohistochemistry revealed negativity for CD20, OCT-2 (octamer binding protein 2), and the Epstein-Barr virus antigen, weak positivity for PAX5-BSAP (B-cell lineage-specific activator protein), and a background with a predominance of CD3-positive T-cells. These findings excluded a diagnosis of non-Hodgkin's B-cell lymphoma. Considering that the primary lesion was located in the upper lobe of the left lung, the diagnosis of PPHL was made. Next, a bone marrow biopsy and positron emission tomography (PET) scan were performed (Fig. 3). The disease was classified as stage IIIB according to the Ann Arbor pulmonary lymphoma staging system, which includes symptoms and the location of the disease, based on PET imaging demonstrating lymph node involvement on both sides of the diaphragm and the patient experiencing fever and weight loss [7].
Following the final diagnosis of PPHL, the first-line treatment for classical Hodgkin's lymphoma, the Adriamycin, bleomycin, vinblastine, dacarbazine (ABVD) regimen, was administered [8,9]. Unfortunately, after this therapy, PET imaging revealed progression of the lesion in the left lung and involvement of retroperitoneal lymph nodes. In response to disease progression, intensified treatment with the BEACOPP escalated regimen (bleomycin, etoposide, doxorubicin, cyclophosphamide, vincristine, procarbazine, prednisone in escalated doses) was initiated. Three cycles of this regimen were completed, but it also proved ineffective, as PET imaging showed further progression of the lesion in the left lung. Subsequently, high-dose chemotherapy for HL, specifically the ICE regimen (fractionated ifosfamide, carboplatin, etoposide), was administered. However, after 2 cycles of this regimen, PET imaging once again showed progression of the previous lesion in the left lung, as well as new metabolically active lymph nodes in the left hilum and metabolically active bone marrow. Consequently, the patient was treated with the dexaBEAM regimen (dexamethasone, carmustine, etoposide, cytarabine, and melphalan), which had previously been proposed as a salvage therapy for Hodgkin's disease [10]. However, after 2 cycles of dexaBEAM, PET imaging revealed new lesions in the right lung (segments 3 and 6) and left lung (segment 6). In light of the disease progression, the patient underwent the IGEV regimen (ifosfamide, gemcitabine, and vinorelbine) for mobilization and collection of CD34+ cells for autologous stem cell transplant (auto-SCT). After the mobilization, the patient received 2 cycles of BDG (bendamustine, gemcitabine, and dexamethasone). Positron emission tomography imaging showed a response with a Deauville score of 3. The auto-SCT was performed in January 2021. However, the period of drug-induced aplasia was complicated by febrile neutropaenia, bacteraemia, gastrointestinal mucositis, as well as infection by Clostridium difficile and Candida glabrata. Additionally, slow regeneration of platelet parameters was observed, and a fine-needle bone marrow aspiration biopsy revealed a few megakaryocytes with poor platelet cleavage. Due to the disease's resistance to multiple treatment lines and the increased risk of relapse or progression after auto-SCT, the patient became eligible for treatment with brentuximab vedotin (BV) through the Ministry of Health's drug reimbursement program for HL patients (B.77). Currently, the patient has achieved a partial response after the 16th cycle of BV. The treatment is well tolerated, with only grade 1 peripheral neuropathy reported as of June 2023.
Discussion Primary pulmonary lymphomas are rare, accounting for less than 1% of lung cancers, with fewer than 100 reported cases [5]. Incidence rates have been slightly higher in women (M:F ratio 1:1.4), with age ranging between 12 and 82 years (mean 42.5 years) [4]. The most common symptoms include cough, dyspnoea, weight loss, fever, and night sweats [11]. However, in our case, the patient presented with a misleading clinical picture resembling granulomatosis with polyangiitis, which, to our knowledge, has not been previously reported in PPHL cases. This is particularly interesting because GPA-like presentations have been reported primarily in diffuse large B-cell lymphoma and lymphomatoid granulomatosis patients [12,13]. In our case, both diagnoses were considered in the differential diagnosis but were ultimately excluded based on comprehensive histological and immunophenotypic studies. Typically, PPHL presents in the superior portions of the lungs, in contrast to secondary pulmonary Hodgkin lymphoma, which can involve any region of the lungs [2]. Despite the typical localization in our patient (left lung with disseminated lesions in the upper lobes of both lungs), 4 biopsies were necessary to confirm PPHL, highlighting the importance of accurate lung biopsy supported by high-quality radiographic imaging and consultation with a pulmonologist. Histopathologically, the patient exhibited nodular sclerosis Hodgkin's lymphoma, which is the most common subtype of PPHL [14]. Most HL patients respond well to chemotherapy; however, 5-10% will exhibit resistance to initial treatment, and 10-30% will experience relapse [15]. Until recently, cytotoxic chemotherapy has been the sole approach used to achieve a complete response (CR) before undergoing auto-SCT in relapsed or refractory HL patients. Commonly employed salvage chemotherapy regimens include ICE and DHAP (cytarabine, cisplatin, dexamethasone), which demonstrate a limited CR rate of approximately 25%, alongside a relatively high objective response rate (ORR) of 88% [16]. Despite other proposed regimens in the event of treatment failure, these chemotherapy protocols remain viable choices for pre-auto-SCT salvage treatment [16]. At present, no specific therapeutic recommendations exist for PPHL, so patients are managed in accordance with HL guidelines. Adverse prognostic indicators for PPHL include advanced age, B symptoms, bilaterality, multiplicity, pleural effusion, and cavitation [16]. Some studies suggest a combination of radiotherapy and chemotherapy, but this approach has been associated with an increased risk of radiation pneumonitis [2,4]. In the presented case, the patient received 5 different chemotherapeutic regimens: ABVD, BEACOPP, ICE, dexaBEAM, and IGEV, followed by auto-SCT and BV. The therapeutic effect was finally achieved after the implementation of BV, providing further confirmation of the Hodgkin's lymphoma diagnosis. In one of the largest available analyses, Radin reported two-year disease-free survival in only 15 (39.5%) out of 38 PPHL cases [4]. However, this study included patients who underwent various treatments, including radiation, surgery, or chemotherapy, with the latter primarily being the MOPP regimen (mechlorethamine, vincristine, procarbazine, prednisone), which is currently not recommended. Conversely, recent analyses and individual case reports on ABVD administration are optimistic, demonstrating mostly CR in patients [17,18]. Given these circumstances, the prognosis of
PPHL and the outcomes of novel chemotherapy regimens remain uncertain, necessitating further studies involving a larger patient cohort. In the described case, exceptional chemo-resistance was observed, with a sustained response achieved following BV treatment. Conclusion Our case presents an extremely rare pulmonary tumour with a misleading clinical and pathological presentation, a protracted diagnostic process, and high resistance to therapy (Fig. 4). Therefore, this case report contributes to increasing awareness of Hodgkin's lymphoma in the differential diagnosis of lung diseases.
2023-09-10T15:02:49.329Z
2023-08-20T00:00:00.000
{ "year": 2023, "sha1": "21002bca121b684f0ff718d9a799266cdc905551", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "5b142851decdbe80049fff00e464263760ab974a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237752433
pes2o/s2orc
v3-fos-license
Lossless Image Compression Schemes: A Review Data compression refers to the process of representing data using a smaller number of bits. Data compression can be lossless or lossy, and many schemes have been developed to perform either type of compression. Lossless data compression allows the original data to be reconstructed exactly from the compressed data, while lossy compression allows only an approximation of the original data to be reconstructed. The data to be compressed can be classified as image, textual, audio or even video content. Considerable research is being carried out in the area of image compression. This paper surveys the literature in the field of data compression and the techniques used to compress images losslessly. In conclusion, the paper reviews schemes that compress an image using either a single method or a combination of two or more methods. INTRODUCTION The growth and development of modern information and communication technologies has led the demand for data compression to increase rapidly. Recent developments in computer science and information technology have led to the generation of large amounts of data. According to Parkinson's First Law (Parkinson, 1957), the need for storage and transmission at least doubles as storage and transmission capabilities increase. The breakthrough of multimedia technologies has made digital libraries a reality. Digital images are usually encoded using lossy compression schemes because of their memory size and bandwidth requirements. Lossy compression yields high compression ratios while the image loses quality. However, there are many cases where the loss of image quality or information due to compression needs to be avoided, such as medical, artistic and scientific images. Therefore, efficient lossless compression becomes paramount, although lossy compressed images are usually satisfactory in diverse cases. A common characteristic of most images is that neighboring pixels are highly correlated and therefore contain highly redundant data or information. According to Singh, Kumar, Singh and Shrivastava [1], the benefits of image compression schemes include the following: less data is sent over the network, storage requirements and overall execution time decrease, the chance of errors during transmission is reduced because fewer bits are transferred, and a level of security against unlawful monitoring is enabled. The major aim of lossless image compression is to reduce the redundancy and irrelevance of image data for better storage and transmission. According to Bindu, Ganpadi and Sharma [2], compression is the means by which the description of digitized information is modified so that the storage capacity or the bit rate required for transmission is reduced. Image compression is, briefly, the process of operating on an image and modifying its elements so as to reduce its size to a visually acceptable level. By compressing an image it is possible to reduce both the size of the data and the bandwidth required for image transmission.
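The simplest illustration of the pixel redundancy just described is run-length encoding (RLE), which also serves as one of the baseline schemes compared later in this review. The Python sketch below is a generic, self-contained illustration rather than code from any of the surveyed papers, and the sample scanline is invented for the example.

    def rle_encode(pixels):
        """Collapse runs of identical pixel values into (value, count) pairs."""
        runs = []
        for p in pixels:
            if runs and runs[-1][0] == p:
                runs[-1][1] += 1          # extend the current run
            else:
                runs.append([p, 1])       # start a new run
        return runs

    def rle_decode(runs):
        """Rebuild the original pixel sequence exactly, so no information is lost."""
        out = []
        for value, count in runs:
            out.extend([value] * count)
        return out

    row = [255, 255, 255, 255, 0, 0, 17, 17, 17]   # hypothetical scanline with repeated pixels
    encoded = rle_encode(row)
    assert rle_decode(encoded) == row               # lossless round trip
    print(encoded)                                  # [[255, 4], [0, 2], [17, 3]]

The longer the runs of identical pixels, the fewer pairs are needed; this is exactly the redundancy that the schemes reviewed below try to exploit more systematically.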
Many real-time applications depend on a huge number of images for their smooth operation. Data compression rests on two essential concepts: redundancy and irrelevancy. Redundant data can be found in almost any type of image; duplication of data in an image is termed redundancy and may be seen as pixel values repeated frequently across the image. Removing redundant information mostly saves storage space for that image [3]. Generally, data can be compressed by removing the irrelevancy and redundancy present in the original data. There are two major levels in compressing data: modelling and coding. In the modelling level, the data to be compressed is first analyzed for any redundant information, which is extracted to develop a working model. In the next level, the difference between the model and the actual data, called the residual, is computed and encoded by an encoding scheme. There are various ways to characterize data, and these characterizations lead to the development of a series of data compression methods or schemes. Since various data compression techniques have been developed, there is a need to review the existing methods, which will be helpful for researchers interested in data compression when choosing an appropriate algorithm for a particular situation. LITERATURE REVIEW Data compression refers to the process of representing data using a smaller number of bits, and it can be lossless or lossy. Much research has been, and is being, carried out in the area depending on the type of data to be compressed. The data files that can be compressed include image files, video files, textual files and sound files. For the purpose of this survey, image data files are considered. Different data compression schemes have been developed and deployed over the years. Some of these schemes are general-purpose, meaning they can compress files of different types, while others are developed to compress a particular type of file. Generally, lossless data compression seeks to reduce the number of bits required to represent the content of an image, video or file without affecting the quality of that data. It also lowers the quantity of bits needed to store and send the digital media [1]. Pai, Cheng, Lu and Ruan [4] proposed a compression technique using two lossless technologies, Huffman coding and Lempel-Ziv-Welch coding, for image compression. In the first stage of the scheme, an image is compressed with Huffman coding, resulting in a Huffman tree and generated code words. In their work, a technique was proposed called the "sub-trees modification of Huffman Coding for stuffing Bits Reduction and Efficient NRZI Data Transmission". The authors targeted data and multimedia compression for transmission, handling the issues of compression encoding and transmission to arrive at a low-bit-rate transmission model based on the Huffman encoding scheme. The suggested scheme balances the 1 bits and 0 bits by measuring the chance of mismatch found in the traditional Huffman tree. Moreover, the suggested technique also modifies the transitional tree within the same compression ratio [5]. The Lempel-Ziv-Welch (LZW) algorithm was introduced in 1984 by Terry Welch.
It removes redundant characters in the output, includes every character in the dictionary before starting compression, and employs other techniques to improve the compression (Sharma and Kaur, 2014). The Huffman coding algorithm is named after its inventor, David Huffman, who developed the method as a student in a class on information theory at MIT in 1950. The Huffman code procedure is based on two observations: symbols that occur more frequently will have shorter code words than symbols that occur less frequently, and the two symbols that occur least frequently will have code words of the same length. The Huffman code is designed by merging the lowest-probability symbols, and this process is repeated until only the probabilities of two compound symbols are left; a code tree is thus generated, and the Huffman codes are obtained by labelling this code tree. Suresh, Nair and Kutty [6] presented a vector quantization (VQ) technique of image compression. In their work, they encoded the difference map between the actual image and its restored VQ-compressed version. Their experimental results show that this model, which allows the original data to be recovered, may considerably enhance VQ image compression by moving, on the basis of the difference map, from a lossy scheme to a lossless compression scheme. Jassin and Qassim [7] presented an image compression scheme known as the Five Module Method (FMM). The scheme transforms every pixel value in the 8x8 blocks into a multiple of five for every RGB array. The value may then be divided by 5 to generate new values with a smaller bit length for every pixel, which require less storage than the actual 8-bit values. Unlike existing approaches, the method of Alarabeyyat et al. [8] encodes edge line information obtained using a modified edge tracking method instead of directly encoding image data; the second step compresses the encoded image with Huffman and Lempel-Ziv-Welch coding. Alarabeyyat et al. [8] combined a number of existing schemes: in their approach, the LZW algorithm was first applied to the image for encoding and later BCH for error detection and correction, in order to improve the compression ratio. The results indicate that the algorithm achieves an excellent compression ratio without data loss when compared with standard compression algorithms. Pujar and Kadlaskar [9] proposed a lossless method of image compression and decompression using a simple coding technique called Huffman coding. A software algorithm was developed and implemented to compress and decompress a given image using Huffman coding in a MATLAB platform. Their major concern was to compress images by reducing the number of bits per pixel required to represent them, and to decrease the transmission time for image transmission; the image is reconstructed by decoding it using the Huffman codes. Kaur and Kaur [10] proposed a new lossless compression scheme, named Huffman-based LZW lossless image compression using the Retinex algorithm, which consists of three stages: in the first stage, Huffman coding is used to compress the image; in the second stage, all Huffman code words are concatenated together and then compressed with LZW coding and decoding; in the final stage, the Retinex algorithm is applied to the compressed image to enhance its contrast and improve its quality.
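Returning to the Huffman construction just described, the repeated merging of the two least probable symbols can be sketched in a few lines. The Python sketch below is a generic textbook construction with an invented example string, not the implementation used in any of the cited works.

    import heapq
    from collections import Counter
    from itertools import count

    def huffman_codes(data):
        """Build a Huffman code by repeatedly merging the two least probable subtrees."""
        freq = Counter(data)
        tie = count()  # unique tie-breaker so the heap never has to compare the code dictionaries
        heap = [(f, next(tie), {sym: ""}) for sym, f in freq.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)   # least probable subtree
            f2, _, c2 = heapq.heappop(heap)   # second least probable subtree
            merged = {s: "0" + code for s, code in c1.items()}
            merged.update({s: "1" + code for s, code in c2.items()})
            heapq.heappush(heap, (f1 + f2, next(tie), merged))
        return heap[0][2]

    codes = huffman_codes("AAAABBBCCD")
    print(codes)   # {'A': '0', 'B': '10', 'D': '110', 'C': '111'}: frequent symbols get shorter code words

Running it on the string "AAAABBBCCD" assigns the one-bit code word '0' to the most frequent symbol A and three-bit code words to the two rarest symbols, which is exactly the behaviour described above.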
Kaur and Kaur's combined scheme is evaluated in terms of compression ratio (CR), peak signal-to-noise ratio (PSNR) and mean square error (MSE) in the MATLAB software. Hasan [11] presented an image compression algorithm consisting of a combination of lossy and lossless methods, based on a discrete wavelet transform with entropy coding as the lossless stage, and using quantization and thresholding techniques to produce a high compression ratio and high image quality. Smith [12] presented one of the most important studies in data compression; in his work, he explained data compression from the mathematical level to the coding level. The study in [13] pointed out that theories are the starting point of any new technology; the author theoretically explained Shannon's theory, the Huffman code, the Lempel-Ziv (LZ) code and self-learning autopsy data trees. The work did not include many lossless data compression schemes and reviewed only a few. Hosseini (2012) explained many data compression techniques together with their performance evaluation and applications; the work covered the Huffman algorithm, run-length encoding (RLE), the LZ algorithm, arithmetic coding, JPEG and MPEG with their various applications. Cheng and Ang (2008) examined several image compression algorithms and classified them into first- and second-generation image compression algorithms; a comparison was also made on features such as preprocessing, postprocessing, code book, memory complexity, size and compression quality. Sangeetha and Betta [14] presented a dynamic image compression scheme using an improved LZW encoding algorithm. The objective of their research was to provide a comparative measure of current image processing techniques using an image compression method applied to biometric data. The performance measurements reveal that the LZW compression algorithm has better accuracy than other predictive methods such as run-length encoding, Huffman encoding, delta encoding, JPEG (transform compression) and MPEG, which performed worse in their tests. Boopathiraja et al. [15] proposed a hybrid lossless method using Lempel-Ziv-Welch (LZW) and arithmetic coding for compressing multispectral images. The performance of the method was compared with existing lossless compression methods such as Huffman coding, run-length coding (RLE), LZW and arithmetic coding. Because every small piece of information in a multispectral image is important for efficient processing, lossless encoding is always preferable, and this has led to the conception of several compression methods for these images. An improved compression technique was proposed by Naveen, Jagadale and Bhat [16] using the wavelet transform and the discrete fractional cosine transform to achieve high-quality reconstruction of an image at a high compression rate. The algorithm uses the wavelet transform to decompose the image into a frequency spectrum with low- and high-frequency sub-bands. Applying a quantization process to both sub-bands at two levels increases the number of zeroes; the zero-rich high-frequency sub-bands are then reduced by forming blocks, storing only non-zero values and discarding all blocks with zero values to form a reduced array. The arithmetic coding method is used to encode the sub-bands.
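For reference, the evaluation measures mentioned throughout this review are normally defined as follows. These are the standard textbook definitions for an M x N grey-scale image f with reconstruction f-hat and B bits per pixel; the surveyed papers do not reproduce them explicitly.

    \[
    \mathrm{CR} = \frac{\text{size of original image}}{\text{size of compressed image}}, \qquad
    \mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(f(i,j)-\hat{f}(i,j)\bigr)^{2}, \qquad
    \mathrm{PSNR} = 10\log_{10}\frac{(2^{B}-1)^{2}}{\mathrm{MSE}}\ \mathrm{dB}.
    \]

For a truly lossless scheme the MSE is zero and the PSNR is unbounded, so lossless methods are usually compared on compression ratio alone, as in the results reported later in this paper.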
The experimental results of Naveen et al.'s method are compared with the primitive two-dimensional fractional cosine and fractional Fourier compression algorithms, and significant improvements can be observed in peak signal-to-noise ratio and structural similarity index at a high compression ratio. Ravikumar and Arulmozhi [17] outlined some of the applications of digital image processing, which include image sharpening and restoration, colour processing, pattern recognition, hurdle detection and video processing. They also outlined techniques for digital image processing, which include image editing, independent component analysis, image restoration, anisotropic diffusion, linear filtering and pixelation, principal component analysis, neural networks and partial differential equations. PROPOSED METHODOLOGY Arora and Shukla [27] describe the usual steps involved in compressing image data as follows: i. specify the input image and the available bit budget; ii. divide the image data into different classes according to their importance; iii. divide the available bits among these classes such that the distortion in the data is reduced to the barest minimum; iv. quantize each class separately using the bit allocation information derived from step iii; v. encode each class separately using an entropy coder and write to the file. Reconstructing the image from the compressed data is usually a faster process than compression. The steps involved are: vi. read in the quantized data from the file using an entropy decoder (reverse of step v); vii. dequantize the data (reverse of step iv); viii. rebuild the image (reverse of step ii). Two important structures used in an image compression model are the encoder and the decoder. Encoder: an input image f(x,y) is fed into the encoder, which creates a set of symbols from the input data. Decoder: the encoded information is sent to the decoder, where the reconstructed output image f(x,y) is generated. The general compression system model is shown in Fig. 1. The source encoder consists of three encoding stages: the mapper, the quantizer and the symbol encoder. The mapper translates the input data into a format designed to reduce the interpixel redundancies present in the image; this reversible operation may or may not reduce the amount of data required to represent the image. The quantizer reduces the accuracy of the mapper's output and the psychovisual redundancies of the input image; this operation is not reversible and must be omitted when error-free (lossless) compression is desired. The third and final stage is the symbol encoder, which creates a fixed- or variable-length code to represent the quantizer's output and maps the output in accordance with that code; a variable-length code is used and the operation is reversible. The source decoder contains two components, the symbol decoder and the inverse mapper, which perform the encoder's operations in reverse order. For the purpose of this research, a combination of existing algorithms was used: an improved Bose-Chaudhuri-Hocquenghem (BCH) code for image encoding and LZW for dynamic compression. Standard research images were used, and the algorithm was implemented using the JAVA programming language. Twelve different research images were used and their compression ratios were compared against the existing compression methods.
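The LZW stage of the combination method described above can be illustrated with a generic dictionary coder. The sketch below is the textbook LZW construction operating on a byte string, intended only to show how repeated patterns are replaced by dictionary indices; it is not the authors' Java implementation, and the input string is invented.

    def lzw_compress(data: bytes):
        """Encode a byte string as a list of dictionary indices (lossless)."""
        dictionary = {bytes([i]): i for i in range(256)}   # start with all single-byte phrases
        word, output = b"", []
        for byte in data:
            candidate = word + bytes([byte])
            if candidate in dictionary:
                word = candidate                           # keep growing the current phrase
            else:
                output.append(dictionary[word])            # emit the code of the longest known phrase
                dictionary[candidate] = len(dictionary)    # learn the new, longer phrase
                word = bytes([byte])
        if word:
            output.append(dictionary[word])
        return output

    codes = lzw_compress(b"ABABABABAB")
    print(codes)   # [65, 66, 256, 258, 257, 66]: ten input bytes become six codes

A matching decoder rebuilds the same dictionary on the fly, so no table needs to be transmitted. Pairing such a dictionary stage with an error-correcting code such as BCH is, in general terms, the combination evaluated in this paper.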
Table 2 shows the compression ratio results for the test images, where the compression ratio is obtained from the size of the original image relative to the size of the compressed image. In the table, the second column presents the compression ratio of the images compressed by the RLE algorithm, and the next two columns present the compression ratios for LZW and Huffman compression respectively. The last column presents the compression ratio of the new scheme. The average compression ratios of the methods over the test images are 1.2017 for RLE, 1.4808 for LZW and 1.191957882 for Huffman, while the new scheme's average compression ratio is 1.6489. The average compression ratio achieved by the new method is therefore higher than the average compression ratios of the RLE, LZW and Huffman algorithms, which means it performs best among the compared algorithms. In other words, the image size is reduced more when the combined BCH and LZW method is used than with the standard lossless data compression schemes.
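For clarity, the compression ratio reported in the table is simply the ratio of original to compressed file size, averaged over the test set. The short Python sketch below shows the calculation with made-up file sizes; the numbers are illustrative only and are not taken from the paper.

    def compression_ratio(original_bytes, compressed_bytes):
        """CR > 1 means the compressed file is smaller than the original."""
        return original_bytes / compressed_bytes

    # Hypothetical (original, compressed) sizes in bytes for three test images under one scheme
    sizes = [(262144, 158000), (524288, 310500), (131072, 80950)]
    ratios = [compression_ratio(o, c) for o, c in sizes]
    average_cr = sum(ratios) / len(ratios)
    print(round(average_cr, 4))   # a higher average CR means a stronger size reduction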
2021-09-28T01:09:52.817Z
2021-07-06T00:00:00.000
{ "year": 2021, "sha1": "d748dd1c2fe9ef6217e7863053a653be4db629d5", "oa_license": null, "oa_url": "https://www.journaljsrr.com/index.php/JSRR/article/download/30398/57047", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fd33b73d1baf17aabc42d168b02845b6e716897f", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
258646339
pes2o/s2orc
v3-fos-license
Adhesion of individually formed fiber post adhesively luted with flowable short fiber composite Abstract This laboratory study aimed to measure the push-out bond strength of individually formed fiber-reinforced composite (FRC) post luted with flowable short fiber-reinforced composite (SFRC) and to evaluate the influence of post coating with light-cured adhesive. Post spaces (Ø 1.7 mm) were drilled into 20 single-rooted decoronated premolar teeth. Post spaces were etched and treated with light-cured universal adhesive (G-Premio Bond). Individually formed FRC posts (Ø 1.5 mm, everStick) were luted either with light-cured SFRC (everX Flow) or conventional particulate-filled (PFC) dual-cure luting cement (G-CEM LinkForce). Half of the posts from each group were treated with dimethacrylate adhesive resin (Stick Resin) for 5 min before luting. After storage in water for two days, the roots were sectioned into 2 mm thick disks (n = 10/per group). Then, a push-out test-setup was used in a universal testing machine to measure the bond strength between post and dentin. The interface between post and SFRC was inspected using optical and scanning electron microscopy (SEM). Data were statistically analyzed using analysis of variance ANOVA (p = .05). Higher bond strength values (p < .05) were obtained when flowable SFRC was used as a post luting material. Resin coating of a post showed no significant effect (p > .05) on bond strength values. Light microscope images showed the ability of discontinuous short fibers in SFRC to penetrate into FRC posts. The use of flowable SFRC as luting material with individually formed FRC posts proved to be a promising method to improve the interface adhesion. Introduction In recent decades, adhesive dentistry has grown quickly. Since the use of modern dental materials produces results that are superior to those of conventional ones, a number of novel and creative treatments have replaced traditional treatment approaches. The development of fiber-reinforced composite (FRC) posts as a reliable substitute for prefabricated metal posts marked a turning point in the field of dentistry [1,2]. Their modulus of elasticity being similar to dentin and their ability to bond to luting cement and tooth structure have been suggested to reduce the likelihood of root fractures most commonly associated with endodontically treated teeth (ETT) restored with metal posts [3]. This coupled with their superior esthetics and easy retrieval adds to their many advantages, making fiber-reinforced posts the material of choice in routine dental practice. Despite the positive biomechanical behavior of FRC posts, the drawbacks have also been discussed [4][5][6][7][8][9][10]. The most common types of failure reported on ETT restored with adhesively luted FRC posts, were post fracture and debonding of the post. Debonding has been shown to take place at different interfaces; the cement-dentine interface; cement-post interface; and/or the composite core-post interface [4][5][6][7][8][9][10]. Poor bonding between a cross-linked prefabricated FRC post and luting cement and/or core material may eventually lead to marginal failure and subsequently result in secondary caries [9]. Research literature has shown that prefabricated FRC posts, compared to individually formed posts, have lower bond strength [11,12]. Improving bonding between FRC posts, luting cement and teeth increases load transfer from crown to root, which improves the restoration's longevity [13]. 
In fact, large and uneven root canal spaces can be filled more effectively using an individually formed FRC post approach than with a single, prefabricated post placed in the middle of the cavity [8]. The available literature on bond strength of FRC posts is abundant, but with contradictory outcomes. This can be a result of variations in the testing procedures, post-surface pretreatment techniques and materials employed [14,15]. However, the fact that individually formed FRC posts lack radiopacity is considered a major drawback of this material [15]. According to laboratory research [16][17][18], high loads can be placed on luting cements, especially in the cervical region, and in vitro fatigue studies have revealed that post-luting cement microfractures or cracks are the first failure mode that contributes to the development of catastrophic failure [19,20]. On the other hand, previous studies have reported that flowable short fiber-reinforced composite (SFRC) improved the load-bearing capacity of restorations when it was used as post-luting and core build-up material [21][22][23]. According to them, the drawbacks of using a weak link between FRC post and root dentin were apparently minimized by the tight adaptation of SFRC [21]. However, the question still arises whether the light-cured flowable SFRC material inside the root canal would bond adequately with individually formed FRC posts. Consequently, the purpose of this in vitro investigation was to verify whether using flowable SFRC as a luting cement would improve the adhesion of individually formed FRC posts and therefore increase the posts' longevity, and, in addition, to study the influence of coating the FRC post with light-cured bonding resin on adhesion. The null hypotheses were that the luting material type (I) and post coating (II) would have no effect on the adhesion of the FRC post to dentin. Specimen preparation Twenty human caries-free, single-rooted premolar teeth, stored (for a maximum of four weeks until use) in chloramine T trihydrate (Fluka Analytical, France), were retrieved from a university dental clinic. The crown of every tooth was decoronated at the cement-enamel junction (CEJ) using a ceramic cutting disk running at a speed of 100 rpm while being cooled by water (Struers, Glasgow, Scotland). Post space preparations (Ø 1.7 mm) were made with post drills (Parapost stainless drills, Coltène/Whaledent, Mahwah, NJ, USA), a low-speed handpiece and water cooling. An individual FRC post (1.5 mm, everStick, GC, Japan) was pre-cut (allowing approximately 0.1-0.2 mm of space around the entire circumference of the post) to the required length (8 mm) after drying the prepared canal. Its length and fit were then confirmed by inserting it into the dried prepared root canal with tweezers. After being removed from the canal, the post was shielded from light prior to luting; during this stage, the post had not yet undergone polymerization. All teeth received the same adhesive treatment. After etching for 10 s using a 37% phosphoric acid etch-gel (Scotchbond, 3M ESPE, USA), they were rinsed and gently air-dried. A disposable microbrush applicator was used in accordance with the manufacturer's instructions to apply a dual-cure one-step adhesive system (G-Premio Bond and DCA, GC). Excess adhesive was removed with paper points and blown air. The adhesive was light-cured for 60 s using a light-curing unit (Elipar S10, 3M ESPE, Germany). The tooth surface was always in close proximity to the light-curing tip.
The average light intensity of the light source, measured with a calibrator (Marc Resin Calibrator, BlueLight Analytics Inc., Canada) before the bonding procedure, was 1200 mW/cm² and the wavelength was between 430 and 480 nm. As a control, conventional (particulate-filled, PFC) dual-cure luting cement (G-CEM LinkForce A2, GC) was injected into the post space of half of the teeth (n = 10). With an 'elongation tip' for direct root canal application, the luting cement was applied using its own automix cartridge. The other half of the specimens had light-cured SFRC (everX Flow, bulk shade, GC) as the post luting material. For the purpose of moving excess cement out of the way of the post and preventing the creation of air bubbles, voids and other defects at the apical end of the canal, a cylinder-shaped stick (1.3 mm) was dipped into the fully filled post space. Then the rounded unpolymerized post (everStick Post) was inserted into the canal. Half of the posts from each group (n = 5) were soaked in bonding resin (Stick Resin, GC) for 5 min before insertion (post coating). The specimen was light-cured through the FRC post for at least 60 s (Elipar S10). After 48 h of storage in water (37 °C), the teeth were horizontally sectioned perpendicular to the long axis of the post with a precision cutting saw (Struers, Glasgow, Scotland). These cross sections were 2 mm (± 0.1 mm) thick. From each tooth, three disks were obtained from the coronal and middle levels. Four different groups were prepared, each having 10 specimens. Push-out test Adhesion between the FRC post and dentin was tested with a push-out test set-up in a universal testing machine (Model LRX, Lloyd Instruments Ltd., Fareham, England). The settings of the testing machine were: preload 3 N, preload speed 2 mm/min, extension rate 1 mm/min. In the test setting, a 1.5 mm flat cylinder end pushed the specimen posts (Ø 1.5 mm) through a hole in a custom-made metal jig under the specimen (Figure 1). The hole was Ø 2.15 mm, and a 4.3 mm deep cylinder was placed directly under the post. The machine stopped measuring when the load had dropped to 40% of the peak value, as the post complex detached from the tooth. The data on maximum failure or debonding load (N) were collected using a computer program (Nexygen, Lloyd Instruments Ltd.) and converted into megapascals (MPa) using the formula given in [16]. The failure mode of each specimen was analyzed using a stereomicroscope (Wild M3Z, Wild Heerbrugg, Switzerland) with different illumination angles and magnifications (6.5× and 15×) and categorized as interfacial failure between tooth and luting cement or between post and luting cement. Microscope analysis of post-SFRC interface The interface between the FRC post and the luting materials was analyzed using a stereomicroscope at a magnification of 15× and scanning electron microscopy (SEM, JSM 5500, Jeol Ltd., Tokyo, Japan). For light microscope analysis, teeth were sectioned vertically, dividing the posts into halves (n = 2 per group). For SEM analysis, cross-sectioned specimens were attached to SEM metal stands with conductive double-faced carbon adhesive tape (Nisshin EM Co., Ltd., Japan) and left in a desiccator for one day before they were gold coated (10 nm) in a sputter coater (BAL-TEC SCD 050 Sputter Coater, Balzers, Liechtenstein). SEM analysis was performed at an operating voltage of 15 kV, a spot size of 37 and a working distance of 18 mm. Statistical analysis Levene's test was used to evaluate the assumption of equality of variances.
After that, two-way ANOVA was used to determine the influence of the luting material and of post coating on the maximum stress (push-out strength) at maximum load. A significance level of 0.05 was used. The software used for the analysis was JMP, Version 14.2.0 Pro (SAS Institute Inc., Cary, NC, 1989-2020). Results The results of the push-out test are presented in Figure 2. ANOVA indicated that luting material type has a significant (p < .05) effect on FRC post adhesion. The highest bond strength values (23.5 MPa) were obtained when SFRC (everX Flow) was used as the post-luting material (p < .05), while conventional dual-cure PFC resin showed the lowest values (12.5 MPa). Post coating did not show a statistically significant correlation with bond strength (p = .102), nor was there a statistically significant interaction between luting material type and post coating (p = .641). The variances were homogeneous and equal among groups, according to Levene's test. Regarding the failure modes, 11 (55%) SFRC specimens broke between tooth and luting cement and 9 (45%) between luting cement and post. On the other hand, 7 (35%) PFC specimens broke between tooth and cement and 13 (65%) between cement and post. In total, 18 (45%) had dislodged between tooth and cement and 22 (55%) between cement and post. When the numbers of specimens with the different dislodgement modes were compared between the SFRC and PFC groups, statistically significant differences were not detected (Pearson's chi-square, p = .204). Light microscope images (Figure 3) showed a pattern in which the short fibers of the SFRC luting cement penetrate into the FRC post. This happened in all specimens of the SFRC groups regardless of the application of post coating. SEM images showed the bonding interface between the fiber post and the luting materials used; however, they were not able to show such clear penetration of luting material or fibers (Figure 3). Discussion This pilot study was designed to evaluate the potential use of light-cured flowable SFRC as a post luting material. To our knowledge, this aspect has not been studied in the existing literature. Flowable SFRC has been reported to have high fracture toughness and flexural properties compared to conventional dual-cured PFC resin [24]. Furthermore, recent studies showed the possibility of obtaining micro-mechanical interlocking between the monomer of a successive PFC resin and the protruding fibers of an SFRC composite base [25,26]. As a result, we assumed that the use of flowable SFRC as a luting material would be beneficial in improving the adhesion of the FRC post to dentin. Our results indicated that luting material type has an influence on the push-out bond strength of the FRC post to dentin; therefore, the first null hypothesis was rejected. The findings of the present investigation (Figure 2) demonstrate that flowable SFRC could be used as a post-luting material to achieve higher push-out bond strength values. These results can be explained from a range of perspectives. First, there is micro-mechanical interlocking between the protruding short fibers on the interface surfaces and the FRC post (Figure 3, light microscope image); particularly in the event of shear stress, this interlocking may affect bond strength values. Second, the SFRC's improved mechanical characteristics, particularly its fracture toughness, would increase its capacity to withstand shearing stresses [24,27].
In addition, randomly oriented fibers in SFRC have been demonstrated to affect the depth of the oxygen inhibition layer, which improves resin penetration at interfaces [28]. One of the key elements in load transfer is good adhesion between the luting material and the post, as well as between the dentin and the luting material. Our results are supported by the findings of loading studies in which flowable SFRC was used as post-luting and core build-up material and the fracture resistance of the post-core foundation was the highest among all tested groups [21,23,29]. The authors stated that SFRC was tightly connected to the fiber post and root dentin, minimizing the drawbacks of a weak link between them [21,29]. Although there was little difference between the groups regarding failure modes, the use of SFRC demonstrated less adhesive failure at the interface with the FRC post than the dual-cure PFC group. However, owing to the semi-interpenetrating polymer network structure (semi-IPN), individually formed FRC posts have demonstrated good bonding ability with luting cements and direct composite core restorations, providing reliable surface-retained applications [11,12,30]. In particular, when FRC posts were cured along with the luting cements after being inserted into the root canals, studies have shown that adhesive failures were predominantly detected at the cement-dentin interfaces [12,30]. Applying bonding resin is reported to create a link between the FRC post and the luting cement, increase penetration depth into the post and thus enhance adhesion by forming a solid adhesive interface [31]. Nevertheless, in this study, post coating with bonding resin did not have a significant effect on adhesion strength, in either the SFRC or the PFC specimens (Figure 2). Hence, the second hypothesis was accepted. This might be because the posts were still uncured, so the coating did not make a significant impact. Bonding resin could have a marginal influence on uncured posts in either direction: either forming a film over the post and hindering cement/fiber penetration, or making the post's surface more soluble. Effective light transmission and scattering through the post is essential for optimal bonding and polymerization of FRC posts and luting cement [32]. In a simulated root canal, individually formed FRC posts with a semi-IPN polymer matrix demonstrated an appropriate level of monomer conversion and a tendency toward light conductivity [32]. However, it is uncertain whether the light-cured SFRC will polymerize sufficiently inside the root canal. Earlier investigations by Lassila et al. and Frater et al. demonstrated that light-cured flowable SFRC material may be polymerized efficiently inside the root canal next to an individual FRC post, reaching roughly the same microhardness levels as dual-cure material [22,23]. According to their approach for calculating microhardness, dual-cure PFC and flowable light-cure SFRC could both be utilized safely up to an 8 mm depth inside canals [23]. This could be traced back to multiple reasons, namely the light transmission of the FRC post [32], the transparency of the SFRC materials and the scattering of light by the short fibers [33]. However, a dual-cure flowable SFRC, as presented by Säilynoja and her colleagues, would be an optimum and safe option for deeper canals [24].
From a clinical point of view, our results could move the process of post cementation in a new direction, in which flowable SFRC is used as the post core/luting material without needing a specific cement for the task. The outcomes of our study may have been influenced by the mismatch between the post space and the post diameter. This discrepancy could have led to the creation of bubbles or gaps in the material, which are less likely to be seen in a thin and uniform layer of luting material [16,34]. The bond strength between a post and a tooth can be evaluated using different methods, such as conventional tensile testing on external root dentin [35] or on the post space surface using pull-out [36] and push-out [11] techniques. The push-out method is preferred as it is more relevant to clinical situations, but there are concerns that this method may create a highly non-uniform stress at the adhesive interface when applied to the entire post or to thick root segments [37]. Goracci et al. conducted a study to compare the accuracy of a microtensile technique with a push-out test for measuring the bond strength of fiber posts luted in post spaces. The authors found that the push-out test was more reliable than the microtensile technique, as each specimen provided a useful measurement and the data variability was low [38]. The results of this study must, however, be seen in the context of some limitations. Errors can occur during specimen preparation: the unpolymerized post might not be perfectly round when inserted, although the posts were rolled to be round, and this would introduce error into the calculations and testing. Also, the posts were not perfectly in the middle of the post space, although in the push-out test the pushing head was placed on the posts. It should be noted that the mechanical properties and conversion rate of the luting material can affect the stress distribution and failure modes at the adhesive interface, which can ultimately affect the push-out force and bond strength values. Another limitation of this study is that we did not assess the variation in push-out bond strength between different root sections, such as the coronal and middle levels. However, numerous studies in the literature have indicated that bonding at the coronal level of the root canal appears to be more reliable than bonding at the middle or apical level [39,40]. Conclusions The use of flowable SFRC as a luting material with individually formed FRC posts proved to be a promising method to improve the interface adhesion. Disclosure statement No potential conflict of interest was reported by the author(s).
2023-05-13T15:04:52.166Z
2023-05-11T00:00:00.000
{ "year": 2023, "sha1": "be281668d4945fa6807683ab6c2f6c2427941509", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "1b425eb4a2788cfe1ef0f625a9da6b6cb9bc1a7c", "s2fieldsofstudy": [ "Materials Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
226452971
pes2o/s2orc
v3-fos-license
A REVIEW ON EFFECTIVE UTILIZATION OF COMPUTATIONAL RESOURCES USING CLOUDSIM The data center is a very important part of cloud computing; it involves different types of databases that store and serve all kinds of information. A data center, containing networks, servers and related equipment, absorbs considerable resources, electricity and carbon emissions. Workload management is the process of directing the proper distribution of workloads so as to provide applications with appropriate performance. A hierarchical approach is also presented; this approach includes 5 algorithms. In this survey, multiple techniques and algorithms are presented to minimize energy and power consumption in the cloud. INTRODUCTION A data center spends 30% to 50% of its operating expenses on electricity; it is therefore necessary to reduce energy and power utilization and carbon emissions. Cloud computing provides services over the internet; the cloud may be public, private, hybrid or community. A data center can be defined as a storage facility used to organize, store and access large amounts of data. It is mostly useful for businesses and other organizations, which typically depend heavily on the applications, services and data contained in a data center. Virtualization is a technique in which different operating systems can be installed on the same hardware while remaining completely independent and separated from each other; it is a method in which a machine is shared by multiple users and customers. There are different types of virtualization: i) server virtualization, ii) client and desktop virtualization, iii) services and applications virtualization, iv) network virtualization, and v) storage virtualization. Virtualization enables specific advancements, such as consolidation and improved resource utilization. Green computing is the non-polluting and ecological utilization of computer resources. Load balancing is a method in which workload is distributed among various computers [1]; a load balancer distributes client requests or network load efficiently across multiple backend servers. Workload management [2] is a challenging task, and to handle it a literature survey was done in which 5 algorithms (Round Robin, Equally Spread Current Execution, Throttled, HoneyBee and AntColony) are used. In this survey, we have used the CloudAnalyst simulation tool to compare the performance of these algorithms in terms of overall response time. The CloudAnalyst tool is uncomplicated, as it has a graphical environment that makes it simple to apply, and it offers a simulation capability that goes well beyond a simple tool set. There are several tools available for modelling such environments that could be applied to test the behaviour of applications on the web. CloudAnalyst separates the simulation set-up from the system under test and encourages the modeller to concentrate on the criteria used for simulation rather than only on programming techniques. It also supports simulation in very little time by allowing variables to be changed readily and immediately.
The CloudAnalyst graphical user interface and simulation tests allow the findings to be analyzed more explicitly and more proficiently, as well as helping to highlight any issues with simulation accuracy and efficiency. RELATED WORK [3] The aim of the authors is to reduce the amount of carbon emission through effective utilization of resources. The work is carried out by balancing the load evenly among the various resources. It also allocates the work based on the content of the request, i.e. whether it is a read-only or a read-write operation. Selecting effective and appropriate resources is also a major contribution of the work: by assessing the nature of the request, the work is allocated to the appropriate available resources. In this way they reduced the amount of carbon emission compared with a traditional system using their proposed water shower model. [4] The aim of the authors is to use an efficient power consumption model, named balanced exothermic dynamic voltage carbon scaling, which analyses the power of the resources and allocates the job accordingly. They also use CPTS (circular peak time services) to monitor the entire power usage and carbon emission; with this complete monitoring of resources, the authors showed reduced carbon emission and power consumption using that model. [5] The aim of this survey is to examine energy-efficient utilization of data centers and resources. The survey presents some virtualization and data center-based methods, and also presents a review of energy-efficiency techniques in cloud computing on the basis of performance and energy saving. The purpose of the survey is to provide an up-to-date picture of energy and performance management. [6] In this survey, a hierarchical approach, EnergyCloud, for data center workload is presented. Two algorithms are used, for workload assignment and migration, and an architecture for sharing workload is presented. In this architecture two scenarios are defined: the EnergyCloud framework for two interconnected data centers and the EnergyCloud framework for four interconnected data centers. The purpose is to distribute the workload among different data centers. [7] The authors study the energy-efficient scheduling of tasks whose execution time is unknown. To describe the effectiveness of different frequencies, they define a novel task model to characterise the tasks and the energy consumption ratio. They show that the job assignment problem is related to, and harder than, the variable-sized bin packing problem. They then present two effective heuristics to assign jobs, and they also design an algorithm for limited job migration to improve performance when a job completes. Finally, the survey presents a model framework for evaluating their approaches, which achieves better energy-saving performance. [8] In this survey, a project named DATAZERO is presented. Its aim is to find solutions for the design and operation of data centers. The project's key originality is to propose a negotiation system between IT and power management, which seeks a trade-off between the priorities and constraints of both sides instead of trying to solve a single global optimization problem. They propose reliable electrical and IT models to make this negotiation possible, and they outline the need for identification of electrical sources. [9] In this survey, the dynamic voltage and frequency scaling (DVFS) technique is used for minimizing
energy consumption; experiments were performed with the CloudSim toolkit using real cloud traces, and DVFS was considered necessary when mapping virtual machines in order to maintain quality of service. Their results demonstrate that including DVFS awareness in workload management provides substantial energy savings of up to 41.62% for the studied workload conditions. Characteristics of CloudAnalyst are: i. Easy to apply - as it is a Java package, CloudAnalyst is very simple to use when setting up the simulation environment; the user only has to double-click on icons. The user can change the variables as many times as needed and run the simulation easily, and it is very simple to set up the CloudAnalyst configuration. The only things the user has to do are to enter the required number of data centers and the number of VMs for each data center, and to set the regions as required. ii. Definition of the simulation with a high degree of configurability and versatility - the degree of configuration this tool offers is possibly its most important feature. A web application depends on many variables, and most often the values of those variables have to be assumed, so it is necessary to be able to input and modify them readily and rapidly. iii. Graphics-based results - CloudAnalyst is able to show output in the form of tables and charts, which helps to analyze results easily and quickly. The graphical output helps to compare results effectively; it can readily compare overall response time (average, minimum, maximum) and cost. iv. Repetition of experiments - CloudAnalyst can repeat an experiment as many times as the user wants; if the user runs an experiment with a given set of variables and then repeats it with the same variables, the same result will be obtained. v. Capability to save the output - when the user runs any experiment in CloudAnalyst, the experiment file can be saved and the output file can be saved in PDF format. These files can be kept on the computer or on removable storage, which allows the user to reuse them later. MAIN COMPONENTS OF CLOUDANALYST The following part describes the main components of CloudAnalyst; Fig. 1 depicts the CloudAnalyst system. e) Data Center Manager - the DCM maintains a variety of variables summarizing the site's status; potential variables include total resources, operational capacity, energy costs and CO2 emissions. After receiving this information from the other sites, it decides whether a new application or VM should be allocated to the data center in compliance with the stated goals, verifies that the workload is adequately spread between the different sites, and causes the relocation of applications where necessary. f) VM Load Balancer - the data center makes use of the VM Load Balancer to decide which VM should be assigned the next incoming cloudlet. At present, the CloudAnalyst tool provides VM Load Balancers for three load-balancing algorithms, but the user can add their own load balancer to implement other load-balancing algorithms; the required policy can then be chosen in the simulator. In this survey, we have used three VM Load Balancers: round robin, throttled, and active monitoring.
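The allocation policies just named can be illustrated with a small, self-contained Python sketch. This is not the CloudAnalyst/CloudSim Java API, only the underlying idea, and the class names and VM identifiers are invented for the example: round robin hands each incoming cloudlet to the next VM in cyclic order, while a throttled policy only assigns work to a VM whose number of active requests is below a limit.

    from itertools import cycle

    class RoundRobinBalancer:
        """Cyclically assign each incoming request to the next VM in the list."""
        def __init__(self, vm_ids):
            self._next_vm = cycle(vm_ids)
        def allocate(self, request_id):
            return next(self._next_vm)

    class ThrottledBalancer:
        """Assign a request only to a VM whose active request count is below the limit."""
        def __init__(self, vm_ids, limit=5):
            self.load = {vm: 0 for vm in vm_ids}
            self.limit = limit
        def allocate(self, request_id):
            for vm, active in self.load.items():
                if active < self.limit:
                    self.load[vm] += 1
                    return vm
            return None                 # every VM is saturated; the request waits in the queue
        def release(self, vm):
            self.load[vm] -= 1          # called when a VM finishes a request

    rr = RoundRobinBalancer(["vm0", "vm1", "vm2"])
    print([rr.allocate(i) for i in range(5)])   # ['vm0', 'vm1', 'vm2', 'vm0', 'vm1']

An active-monitoring (equally spread) policy would instead pick the VM with the fewest active requests at each allocation, which is a one-line change to the throttled allocator above.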
Honey Bee Load Balancing Algorithm Requests are balanced dynamically as they arrive from users. The requests are converted into groups and each virtual machine handles one group. After cleaning, the gain is calculated: if the gain is high the server is kept, while a low gain triggers a return. Each node manages a distinct queue. After the threshold value is determined, load is transferred to a randomly selected underloaded virtual machine; because of this, it is not possible to single out the virtual machine with the highest performance. The capacity is calculated first, and a particular VM is then allocated based on the highest capacity value. The steps of the algorithm are as follows (a simplified sketch of these steps is given just before the per-algorithm results below): 1. At the beginning of the algorithm, set the number of jobs. 2. Then set the number of virtual machines and determine their capacity. 3. Initially, set the load on each virtual machine to zero. 4. Select the virtual machine with the highest throughput and send the first job to it. 5. Check whether the current VM load is greater than the threshold value. 6. If it is not, choose the virtual machine with the highest throughput, examine its load against the threshold, and if acceptable allocate the job to that virtual machine. 7. Otherwise, choose another virtual machine with the next highest throughput and check that its load is less than the threshold value. 8. If the virtual machines have received all the jobs, stop. 9. Otherwise go to step 4. Ant Colony Load Balancing Algorithm The steps for this algorithm are as follows: 1. At the beginning of the algorithm, set the pheromone for the existing nodes. 2. Create the ants and place them on the existing virtual machines randomly. 3. Calculate the moving probability of ant p according to the gain matrix, and select the following node. 4. The local pheromone is updated if ant p finishes the searching cycle; if not, go back to step 3. 5. The global pheromone is updated if all ants finish the searching cycle; if not, go back to step 2. 6. Check whether any virtual machine remains to be allocated in the list; if yes, go back to step 1. 7. Otherwise stop. IMPLEMENTATION The deployed load balancing algorithms are evaluated in CloudAnalyst. CloudAnalyst is built on CloudSim, a Java-based application library; by integrating it with the JDK, this library can be used directly to compile and execute the code. Steps for implementation: 1. Download the CloudAnalyst package. 2. Download Eclipse and install it on the computer. 3. Import the CloudAnalyst package into Eclipse. 4. Run CloudAnalyst; it will automatically open the CloudAnalyst GUI. 5. Implement the required algorithms, import them into CloudAnalyst, and run the simulation after setting up the entire configuration. 6. Set the userbases and data centers as required, and set as many VMs for each data center as needed. RESULT ANALYSIS In this survey, CloudAnalyst is used to compare the performance of the various load balancing algorithms by taking different numbers of data centers and different numbers of virtual machines for each data center. The CloudAnalyst tool helps to examine how the various load balancing algorithms distribute load on the virtual machines of each data center. The environment is simulated with various userbases, and each simulation is run for 60 minutes. For comparison, the average peak users are taken as 10000 and the average off-peak users as 100 for each userbase. For the simulation, optimize response time is used as the service broker policy.
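Before turning to the per-algorithm results, the honey-bee allocation steps listed earlier in this section can be condensed into a short Python sketch. It is a simplified reading of those steps (capacity is treated as an abstract throughput score and the threshold as a fixed per-VM load limit), with invented job sizes and VM names; it is not the CloudAnalyst implementation.

    def honey_bee_assign(jobs, capacity, threshold):
        """Assign each job to the highest-capacity VM whose load stays below the threshold.

        jobs      : list of job sizes
        capacity  : dict {vm: throughput score}, higher is better
        threshold : maximum load allowed per VM before it is skipped
        """
        load = {vm: 0 for vm in capacity}                     # step 3: all VMs start empty
        ranked = sorted(capacity, key=capacity.get, reverse=True)   # steps 4, 6-7: best VM first
        assignment = {}
        for job_id, size in enumerate(jobs):
            for vm in ranked:
                if load[vm] + size <= threshold:              # step 5: respect the threshold
                    load[vm] += size
                    assignment[job_id] = vm
                    break
            else:
                assignment[job_id] = None                     # no VM is below the threshold
        return assignment, load

    jobs = [4, 2, 3, 1, 5]                                    # hypothetical job sizes
    capacity = {"vm0": 10, "vm1": 7, "vm2": 5}                # hypothetical throughput scores
    print(honey_bee_assign(jobs, capacity, threshold=8))

Jobs therefore gravitate to the highest-capacity VM until its load would exceed the threshold, after which they spill over to the next best VM, mirroring steps 4 to 9 above.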
When Round Robin Load Balancing is applied: for distributing load, the Round Robin method is used for every VM. For comparison, 35 data centers and 40 VMs per data center are taken for the Round Robin algorithm; the result is presented in Table 1. The data center processing time of the Throttled algorithm is good compared to the other four algorithms, and its cost is the lowest of the five. According to the results, Equally Spread Current Execution proves to be good in response time, while the Throttled algorithm proves to be good in both cost and data center processing time. The CloudAnalyst [10] tool is based on CloudSim and works with Java packages; it is a completely GUI-based tool that gives graphical output. It is very simple to use, with a good degree of visualization ability, and its graphical results help the user to analyze results readily and effectively. Fig. 1: Cloud Analyst System. a) Regions - the world's main continents are divided into 6 regions in CloudAnalyst, and the other key entities, such as userbases and data centers, belong to one of these regions. This geographical categorization is very helpful in keeping CloudAnalyst's models simple for simulation while still obtaining useful output. b) Web - the CloudAnalyst web models the worldwide web, implementing only the characteristics that are important for simulation. CloudAnalyst also controls the replication of web traffic around the world by applying suitable data transmission delays and transfer latencies; the available bandwidth and transfer latency between the six regions can be configured by users. c) CloudAnalyst Service Broker - the service broker controls all routing between userbases and data centers and decides which data center should serve each userbase. Three kinds of service broker policy, implementing separate routing strategies, are currently provided by the CloudAnalyst tool: closest data center, optimize response time, and reconfigure dynamically with load balancing. d) UserBases - a group of users participating in CloudAnalyst, treated as a single unit, is called a userbase. An individual userbase can represent millions of users but is configured as a single unit. Fig. 2: Cloud Analyst Architecture. ALGORITHM In this survey, CloudAnalyst is used to compare different algorithms through distributing load on each data center, using Java packages. Five algorithms are compared on the basis of their response time and data processing time. Using the VM Load Balancer [11], the data center balances the load of requests across all VMs. The algorithms are described as follows: Round Robin Load Balancing Algorithm The Round Robin load balancing algorithm is a very simple procedure which is based on a time quantum and applies round robin scheduling: Rrlb() { [initialize] time quantum TQ = 10; repeat until actual_request_list.size == NULL { perform actual_request for up to TQ; move actual_request to performed_list[requests]; actual_request = request_list[next] }; assign performed_list[requests] to VM[performed_request_list] } When Equally Spread Current Execution Algorithm is applied: for distributing load, the Equally Spread method is used for every VM. For comparison, 30 data centers and 35 VMs per data center are taken for the Equally Spread algorithm; the result is presented in Table 2.
When the Throttled Algorithm is applied: for distributing load, the Throttled method is used for every VM. For comparison, 40 data centers and 45 VMs per data center are taken for the Throttled algorithm; the result is presented in Table 3. When the HoneyBee Algorithm is applied: for distributing load, the HoneyBee method is used for every VM. For comparison, 45 data centers and 50 VMs per data center are taken for the HoneyBee algorithm; the result is presented in Table 4. When the AntColony Optimization Algorithm is applied: for distributing load, the AntColony method is used for every VM. For comparison, 50 data centers and 55 VMs per data center are taken for the AntColony algorithm; the result is presented in Table 5. Table 5: Output of the AntColony algorithm. The output obtained for the cost variable after the simulation is $276.37. On comparing the different algorithms with different variables, the graphs of Total Response Time and Datacenter Accessing Time obtained are presented in Fig. 7 and Fig. 8.
Single Incision Laparoscopic Surgery for Acute Appendicitis: Feasibility in Pediatric Patients Background. Laparoscopic appendicectomy is accepted by many as the gold standard approach for the treatment of acute appendicitis. The use of Single Incision Laparoscopic Surgery (SILS) has the potential of further reducing postoperative port site complications as well as improving cosmesis and patient satisfaction. Method. In this paper we report our experience and assess the feasibility of SILS appendicectomy in the pediatric setting. Results. Five pediatric patients with uncomplicated appendicitis underwent SILS appendicectomy. There were no significant intraoperative or postoperative complications. All patients were discharged within 24 hours. Conclusions. The use of Single Incision Laparoscopic Surgery appears to be a feasible and safe technique for the treatment of uncomplicated appendicitis in the pediatric setting. Further studies are warranted to fully investigate the potential advantages of this new technique. Introduction The rapid uptake of minimally invasive techniques has affected many areas of surgery, including the management of acute appendicitis. Laparoscopic appendicectomy is also a standard and recognised technique in the paediatric setting, with some surgeons advocating a primarily laparoscopic approach to all paediatric patients presenting with appendicitis [1]. Initial fears regarding the possibility of increased rates of postoperative complications seem to have been dispelled with improved instrumentation, technique, and growing experience both from the surgeon and ancillary staff [2]. In fact, although operating times and cost may be increased with the laparoscopic approach, this may be offset by a reduced postoperative stay compared to the standard open approach [3]. Single Incision Laparoscopic Surgery (SILS) is a new technique through which laparoscopic surgery takes place through a single umbilical incision, without the need for additional laparoscopic ports. This new method has been used for a variety of laparoscopic operations including tubal ligation [4], hysterectomy [5], appendicectomy [6,7], cholecystectomy [8], sleeve gastrectomy [9], colectomy [10], and nephrectomy [11]. The single incision technique has the possible advantages of reduced postoperative pain, faster return to normal function, reduced port site complications, and improved cosmesis and patient satisfaction. In this paper we present our first experiences and assess the feasibility of using SILS to treat appendicitis in the pediatric population. Patients SILS appendicectomy was carried out in 5 children in a teaching hospital in central London. All patients had a body mass index between 20 and 25, and all operations were carried out by the same consultant surgeon. The first patient was a 12-year-old boy, who presented with a single day history of central abdominal pain that localised to the right iliac fossa. On admission his white cell count and CRP were both within normal range, but he was tender with localised peritonism in the right iliac fossa. His symptoms did not improve overnight and thus the decision was made to proceed with laparoscopy. The second patient was a 14-year-old girl who presented with a 5-day history of worsening right iliac fossa pain with localised peritonism. She had a normal white cell count, but a raised CRP of 29 on admission and was booked for laparoscopy The third patient was a 13-year-old boy with a 2-day history of right iliac fossa pain. 
His white cell count and CRP were within the normal range. However, his symptoms worsened overnight and thus he was booked for laparoscopy. The fourth patient was a 12-year-old girl with a 1 day history of abdominal pain and normal white cell count and CRP. Her symptoms also worsened overnight, and thus we proceeded to laparoscopy. The fifth and final patient was a 13-year-old boy presenting with a 12-hour history of pain and raised white cell count of 15. Technique Access was gained via an open umbilical incision. Firstly the umbilicus was everted using a Littlewoods forcep. The incision was made either vertically or transversely, with a Prolene (Ethicon, New Brunswick, NJ, USA) stay suture placed either side of the incision. Care was taken to keep the incision within the umbilical ring for the best cosmetic outcome. A mixture of sharp and blunt dissection was used down to the linea alba which was incised. The peritoneum was opened under direct vision, and a 11 mm laparoscopic port inserted. Pneumoperitoneum was then established. A 5 mm 30 degree laparoscope was used to complete a full laparoscopy. Up to 2 further 5 mm DEXIDE (Covidien, Mansfield, Massachusetts, USA) ports were then inserted through the fascia to either side of the 11 mm port. Mobilisation of the appendix was achieved with the use of Roticulator instruments (Covidien, Mansfield, Massachusetts, USA). A window was made in the mesoappendix near the appendix base, and the appendix and mesoappendix both stapled and divided using an EndoGIA stapler (Covidien, Mansfield, Massachusetts, USA). In our third and fourth patients this procedure was assisted by the placement of a single suture, placed through the abdominal wall in the right iliac fossa. The needle was then passed through the mesoappendix near the appendix base, before being passed back up through the skin again. This formed a sling to retract the appendix ventrally, allowing easier positioning of the EndoGIA stapler (Covidien, Mansfield, Massachusetts, USA). All specimens were removed with the use of an Endo-CATCH bag (Covidien, Mansfield, Massachusetts, USA). Irrigation was carried out as required. Closure of the wound was performed in layers, with absorbable sutures to both fascia and skin. Results SILS appendicectomy took an average of 56.4 minutes to perform (80, 48, 65, 50, and 45 minutes for patients 1, 2, 3, 4, and 5, resp.). The first patient had a macroscopically normal looking appendix. No other intra-abdominal pathology could be found and it was decided to proceed to appendicectomy. Following surgery, the patient symptoms improved and he was discharged on postoperative day 1. The other four patients all had macroscopically inflamed appendixes without evidence of gangrene or perforation. There were no significant intraoperative complications in any patients, and no need for conversion to standard laparoscopic appendicectomy. All patients were discharged within 23 hours and were brought back to clinic 6 to 8 weeks later for out-patient review. There were no postoperative wound infections, intra-abdominal abscess formation, or episodes of small bowel obstruction. Anecdotally all patients and their parents were very satisfied with their operative management, and particularly enthusiastic in regard to the single incision approach. On follow-up in clinic, the umbilical scar was very difficult to visualise once healing had been completed. Discussion Laparoscopic appendicectomy is now accepted as the gold standard for treatment of acute appendicitis in many centres. 
The laparoscopic approach has been demonstrated to have lower wound infection rates postoperatively, as well as significant gains in terms of length of hospital stay and return to normal function [12]. Laparoscopic appendicectomy is also associated with a lower rate of adhesional bowel obstruction compared with the open approach [13]. Initial worries regarding rates of intra-abdominal abscess formation seem to have been refuted by recent studies [3], and it is the authors' viewpoint that good peritoneal irrigation is actually aided by the improved intra-abdominal view offered by laparoscopy. Single Incision Laparoscopic Surgery (SILS) is a new technique that has now been utilised in many centres for appendicectomy. We have previously detailed our initial experiences with the use of SILS for appendicectomy and cholecystectomy in the adult population [14,15]. The major difficulty with this new technique is the sacrifice that has to be made in terms of comfort and ergonomics. As all instruments and the camera are inserted through the same incision, the ability to triangulate the instruments around the target is lost. Although this can be partially rectified by the use of Roticulator instruments, the surgeon ends up working with his hands very close together, and often finds himself impeded by the laparoscope and the assistant. Similarly, the surgeon's right hand will control the left-sided instrument on the screen and the left hand controls the right-sided instrument on screen. These technical difficulties make SILS a more demanding procedure for the operating surgeon than standard laparoscopic techniques. In our experience this led to an initial significant increase in the operation time. However, with increasing exposure to the technique, operating times have been reduced significantly, and are now very similar to the average time taken for a standard laparoscopic appendicectomy. Future improvements in instrumentation may help to reduce operating times further. Although the small size and limited age range of the patients in this series preclude any meaningful statistical analysis, it does demonstrate that the SILS approach may be feasible for particular cohorts within the pediatric population. This supports the results of other groups using the SILS approach in pediatric patients [16,17]. However, the applicability of SILS to very young patients was not assessed in this paper. This series also adds further to the current literature demonstrating the applicability of SILS in uncomplicated appendicitis. In the future, prospective studies with sufficient power are warranted to demonstrate any statistically significant benefits over the standard laparoscopic method. These are most likely to be in terms of postoperative pain, port site complications, cosmesis, and patient satisfaction.
Freezing of Gait as a Complication of Pallidal Deep Brain Stimulation in DYT-KMT2B Patients with Evidence of Striatonigral Degeneration

Mutations in KMT2B were first identified in individuals with early-onset complex dystonia. 1 Since then, it is emerging as one of the most common causes of genetic childhood-onset dystonia. 2 Additional features include short stature, dysmorphism, developmental delay, psychiatric features, endocrinopathy and others. 3 Patients with DYT-KMT2B refractory generalized dystonia maintain significant benefit from internal globus pallidus deep brain stimulation (GPi-DBS) therapy as previously reported, except for laryngeal dystonia and gait. 3 We wish to contribute our observation of freezing of gait (FOG) in KMT2B-related dystonia by reporting five subjects (four females) with protein truncating variants (PTV) (Table 1). Clinical presentation and response to DBS was previously reported. 3 The mean age at dystonia onset was 3.6 years (range: 2-6 years). The median age at DBS implant was 23 years (IQR: 8-30.3 years), and patients were followed for a median of 14.5 years (IQR: 8.5-24 years) after DBS. As illustrated in the Videos 1, 2, and 3, FOG was documented in all, occurring from 14-43 years of age (range: 1-15.5 years after GPi-DBS). DaTscan (SPECT for 123 I-ioflupane) was abnormal in 4/5 patients, undertaken from 2.5 years before to 24 years after DBS insertion (Table 1). Subject 3, despite a normal DaTscan at age 20.5 years (2.5 years pre-DBS), exhibited mild FOG when turning with lower limb dystonia from 1 year after DBS insertion. DaTscan repeated 13 years later in subject 3 identified bilateral decrease of putaminal uptake. Prior to DBS, dystonia was unresponsive to L-dopa in all subjects, as was FOG post-DBS in 2/5. Only 1/5 has maintained independent gait at last follow-up, despite 4/5 having recovered autonomous gait at steady state under DBS. Feuerstein et al. 4 reported on the emergence of parkinsonism and abnormal brain DaTscan imaging in a patient with a heterozygous loss-of-function KMT2B variant (c.974_979del, p.Ser325*). He presented at age 3 years with typical features of DYT-KMT2B, necessitating GPi-DBS aged 23 years. Though initial benefit was evident, by 33 years, generalized dystonia, parkinsonism with rigidity, bradykinesia and FOG predominated. Extensive DBS reprogramming and switch-off did not modify symptoms. DaTscan showed bilateral decreased putaminal uptake.
In this patient, Rotigotine (but not L-dopa) significantly improved FOG. In our DYT-KMT2B group, FOG occurred across a broad age range, from an early post-operative presentation to more than 15 years after DBS insertion. All patients had PTVs. In a previous publication, in DYT-KMT2B, dystonia severity scores appeared to be comparable and more severe in PTVs and chromosomal deletions versus missense variants, possibly suggestive of a relationship between motor severity and type of KMT2B variant. 3 To date, FOG has not been reported in patients with missense variants; identification of further cases will determine whether this is a true genotype-phenotype correlation. Many dystonia-parkinsonism disorders are associated with reduced striatal dopamine due to degeneration of substantia nigra pars compacta neurons. 5 Our finding of bilateral short putamen on DaTscan is suggestive of striatal dopaminergic denervation in DYT-KMT2B. Contrary to levodopa-induced dyskinesia and wearing-off phenomena in Parkinson's disease (PD), the pattern of striatal dopamine depletion does not seem to affect the risk of FOG in PD. 6 Nevertheless, in de novo PD patients, those with severe reduction of DaT uptake in the caudate and putamen have a significantly higher incidence of FOG than those with mild or moderate uptake reduction. 7 Therefore, it is possible that the specific striatal anatomy and reduced dorsolateral (motor) putaminal DaT uptake in DYT-KMT2B patients could potentially propagate network alterations to drive the DBS-mediated switch to hypokinesia, despite ongoing therapeutic effect for dystonia. The relationship between DYT-KMT2B, FOG and DBS intervention remains yet to be fully elucidated; nevertheless, and contrary to other monogenic dystonias, in DYT-KMT2B, DaTscan abnormalities and FOG is an emerging phenomenon, at least in those patients with PTVs. Long-term GPi-DBS is reported to lead to hypokinetic gait disorders in patients treated for dystonia. 8 In DYT-TOR1A, physiological parameters such as the paired associative stimulation response were almost absent and short-interval intracortical inhibition reduced. 9 This pattern, resembles untreated PD may in part explain the observed hypokinesia in DBS-treated dystonia without abnormal DaTscan imaging. 8 In conclusion, the potential risk of hypokinetic gait disorders in DYT-KMT2B should be considered in patients undergoing GPi-DBS, which warrants strict monitoring of the motor phenomenology post-procedure. Serial DaT SPECT imaging may aid identification of striatal dopaminergic denervation in DYT-KMT2B and a clinical trial of levodopa or dopaminergic agonist Video 3. Additional subjects (not included in the manuscript): Subject 1: four consecutive video sequences document gait evolution after DBS; gait was not available pre-DBS since she was in Status Dystonicus; just after DBS insertion, she was still wheelchair bound, unable to stand and to walk; lower limb dystonia improved allowing standing and gait with support, without FOG. Subject 2: two consecutive video sequences are presented; the first sequence documents dystonic features involving lower limbs during gait pre-DBS; the second sequence shows occurrence of FOG early after DBS insertion. Video content can be viewed at https://onlinelibrary.wiley.com/ doi/10.1002/mdc3.13519 Video 2. Video sequences for the patients included in the manuscript illustrating gait previous to the occurrence of FOG. 
Sequence 1: illustrates subject 1 from the manuscript, early after DBS insertion; lower limbs dystonia is improved compared to preoperative assessment, allowing standing and walking; gait is still impaired by residual dystonia, but the patient does not present yet FOG; the two first video sequences from Video 1 illustrate the occurrence and worsening of FOG over time under DBS for the patient. Sequences 2 and 3: illustrate subject 4 from the manuscript, pre-DBS and with DBS, respectively. Despite obvious significant improvement of her dystonia under DBS, she was unable to walk because of severe skeletal deformities involving the lower limbs. Sequences 4, 5 and 6: illustrate subject 5 from the manuscript, pre-DBS and after DBS insertion, before the occurrence of FOG. In sequence 4 (pre-DBS), gait is very unsteady because of severe dystonia involving trunk and lower limbs; sequence 5 documents significant improvement under DBS, without FOG. Sequence 6 shows altered cadence with mild FOG at turn around. Video content can be viewed at https://onlinelibrary.wiley.com/doi/ 10.1002/mdc3.13519 may be useful. More evidence is needed to improve understanding of the etiological basis and efficacy of different interventions for DYT-KMT2B-related hypokinetic movement disorders. Disclosures Ethical Compliance Statement: The study was approved by the Internal Review Board of Montpellier University Hospital (Ethics Board number 2018_IRB-MTP_11-11). Written informed consent was obtained for all participants in whom research genetic testing was undertaken and for publication of videos. We confirm that we have read the Journal's position on issues involved in ethical publication and affirm that this work is consistent with those guidelines. Funding Sources and Conflicts of Interest: No specific funding was received for this work. The authors declare that there are no conflicts of interest relevant to this work. Financial Disclosures for the Previous 12 Months: L.C., D.D., X.V., D.D.V., P.C., K.G. and M.A.K. declare that there are no additional disclosures to report. ■
Performance evaluation of newly developed variety of menthol mint at farmer's field – A case study of mint cultivation in Central Uttar Pradesh

The present study on the performance evaluation of the newly developed variety CIM-Kranti and other varieties of menthol mint has been carried out at farmers' fields of central Uttar Pradesh. Mints are commonly used as a source of fragrance, flavor and pharmaceuticals. During the study period 2017-18, 100 farmers cultivating CIM-Kranti and other varieties were selected from the region of central Uttar Pradesh. The primary data on the profitability comparison between CIM-Kranti and the other varieties under cultivation were collected from the selected farmers' fields. The highest area and production were observed during 2012 and 2013. Simple statistical tools and techniques have been used for the analysis of the cost of cultivation and profitability data. It has been observed during the study that CIM-Kranti gives higher returns (Rs. 98491/- per ha/year) than the other varieties (Rs. 70977/- per ha/year). Although the input cost of CIM-Kranti is higher than that of the other varieties, the net return of CIM-Kranti was more profitable. The benefit-cost ratio was observed to be 1.45 and 1.74 for the other varieties and CIM-Kranti, respectively. The new variety "CIM-Kranti" of menthol mint is cold and frost tolerant and has the potential to produce 10-15% more oil, i.e. 145-160 kg/ha, in the summer season as compared to all other popular commercial cultivars of menthol mint. It is suggested from the study that maximum profit is generated through CIM-Kranti cultivation, followed by the other varieties.

INTRODUCTION
Mints belong to the genus Mentha (family Lamiaceae), vary in their aroma, and have been known as useful plant species from time immemorial. Mints are commonly used as a source of fragrance, flavor and pharmaceuticals, especially for culinary preparations. Although many species of mints are cultivated all over the world, only four species are predominantly cultivated in India. These include menthol mint (Mentha arvensis L. var. piperascens), peppermint (M. piperita), bergamot mint (M. citrata) and spearmint (M. spicata). India is a leading supplier of menthol mint oil in the world, and a large number of farmers in India are being benefitted by its cultivation. Generally, the crop is cultivated during January to July, either directly by suckers or by transplanting plantlets, for the production of mint oil. The cultivation of menthol mint in this country dates back about 48 years. Prior to the 1960s, the requirement of menthol in India was met by import. It was introduced as a crop in India through the efforts of CSIR's Central Indian Medicinal Plants Organization (now CIMAP) and the Regional Research Laboratory, Jammu (now IIIM). The project on Mentha cultivation was taken up at CIMAP's Pantnagar Research Centre, which was established in 1962 near Haldwani in Uttarakhand state. As a result of the transfer of technology through this centre, large areas were brought under this crop, especially in the terai region, i.e. Kashipur, Moradabad, Rampur, etc. Some companies like Richardson Hindustan Ltd (now P&G) and Bhavana Chemicals Ltd. also organized cultivation and processing of the Japanese mint crop for its oil and menthol in terai UP.
The cultivation of mint became progressively popular since then and spread gradually over vast areas of Uttar Pradesh, and small to large areas of Punjab, Haryana, Madhya Pradesh and Bihar (Khanuja et al., 2005). About five years back, a few multinational companies (e.g. Symrise, BASF) also started production of menthol through the synthetic route, posing a serious threat to natural menthol. Besides this, climate change may also affect the cropping pattern and yield potential of the existing cultivars. It is, therefore, imperative to steer the research towards breeding improved, high-yielding varieties which can be grown in adverse climatic conditions, and towards improved agro-technology for cultivation of this industrial crop with minimum inputs. The diversified usage of menthol has shown that this commodity will be required consistently in large volume to meet the domestic as well as global requirements. Recently, the new variety CIM-Kranti of menthol mint was released by CSIR-CIMAP to sustain production and to compete with synthetic menthol.

MATERIALS AND METHODS
Three districts of Uttar Pradesh, viz. Barabanki, Sitapur and Raebareli, were selected purposively for the study, where the new cultivar has been adopted by farmers. The data were collected randomly from 100 farmers in these districts, out of which 48 were cultivating other varieties and the remaining 52 were cultivating the Kranti variety. The data were collected through personal interview using a pre-tested questionnaire, while the secondary data were collected from the publications of government and other agencies. To study the economics of menthol mint, a simple cost accounting method was followed.

RESULTS AND DISCUSSION
The last 24 years' data, from 1994 to 2018, on the area and production of menthol mint are shown in the graph. The highest area and production were observed during 2012 and 2013. After the introduction of synthetic menthol in the market, the area reduced drastically from 3.25 lakh hectares in 2013 to 2.50 lakh hectares in 2015. Due to the introduction of the new cultivar CIM-Kranti and new scientific production techniques at farmers' fields, the cultivation of this crop has shown an increasing trend in area and production. The economics of the new cultivar relative to the existing cultivars is discussed in this paper. The trend is shown in the following graph.

Cropping patterns: As per the data obtained from the study area, it can be understood that farmers are shifting their focus from the traditional crops towards medicinal and aromatic crops. About 29.14 percent of farmers are growing medicinal and aromatic crops, and the remaining farmers are cultivating traditional crops like paddy, wheat, potato, mustard and sugarcane, etc. (Table 2).

Cost of cultivation: The cost of cultivation of menthol mint in the study area is shown in Table 3. The cost of cultivation was observed to be higher in CIM-Kranti (Rs. 56509/- per hectare per year) than in the existing varieties (Rs. 49023/- per hectare per year). The major portion of costs in both the cultivars was shared by irrigation, manures and fertilizers, intercultural operations, transplanting and harvesting. It can be concluded from the table that the cost of cultivation of CIM-Kranti is higher than that of the existing varieties.

Comparative economics of other varieties and CIM-Kranti: The data shown in Table 4 indicate that the new variety CIM-Kranti came out as a beneficial enterprise for the farmers. The yield of CIM-Kranti was more than that of the other varieties under cultivation.
Hence, the net profit of CIM-Kranti was found to be 38.76 percent higher than that of the other varieties. It is observed that CIM-Kranti is more profitable over variable cost than the other varieties under cultivation in the study areas.

Difference between other varieties of menthol mint and CIM-Kranti: The CIM-Kranti variety has several characters superior to other varieties of mentha. The cost of cultivation of CIM-Kranti was higher than that of the other varieties, but at the same time the CIM-Kranti variety has other advantages over them: it is tolerant to cold and frost, it is suitable for the kharif season, the number of cuttings that can be taken is twice that of other varieties, and the oil yield, oil percentage and menthol percentage are higher than in other varieties of mint under cultivation.

CONCLUSION
The new variety "CIM-Kranti" of menthol mint is cold and frost tolerant and has the potential to produce 10-15% additional oil, i.e. 145-160 kg/ha, in the summer season as compared to all other popular commercial cultivars of menthol mint. It is also observed from the study that during the winter season (September-January), when all other mint varieties suffer senescence due to cold and frost conditions, the variety "CIM-Kranti" remains green in the field and grows vigorously to yield two or three times more essential oil (100 gm/ha) along with sucker production (200-250 q/ha). The new variety "CIM-Kranti" of menthol mint has widened the scope of cultivation to the kharif season as well, thus reducing the cost of cultivation in terms of lower irrigation requirement. Thus, the newly developed variety is suitable for commercial cultivation of menthol mint to generate additional income for farmers without any additional input during winter, besides its usual cultivation as a summer crop.
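The profitability figures reported above can be tied together with a short calculation. The Python sketch below simply reproduces the benefit-cost arithmetic implied by the reported numbers; it assumes that the quoted returns of Rs. 98491 and Rs. 70977 per hectare per year are gross returns set against the respective cultivation costs of Rs. 56509 and Rs. 49023, which is consistent with the ratios of 1.74 and 1.45 quoted in the abstract.

# Check the benefit-cost ratios reported above
# (values in Rs. per hectare per year, taken from the study).
def bc_ratio(gross_return, cost_of_cultivation):
    # Benefit-cost ratio = gross return / cost of cultivation
    return gross_return / cost_of_cultivation

print(round(bc_ratio(98491, 56509), 2))  # 1.74 for CIM-Kranti
print(round(bc_ratio(70977, 49023), 2))  # 1.45 for other varieties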
Three-Dimensional CA-LBM Numerical Model and Experimental Verification of Cs2AgBiBr6 Perovskite Single Crystals Grown by Solution Method A three-dimensional cellular automata-lattice Boltzmann (CA-LBM) coupling model is established to simulate the facet growth process and the controlled cooling growth process of Cs2AgBiBr6 perovskite single crystals. In this model, the LBM method is used to calculate the real-time solute field, the CA method is used to simulate the crystal growth process driven by supersaturation of solute, and the geometric parameter g related to the adjacent grid is introduced to reduce the influence of grid anisotropy. The verification of the model is achieved by comparing the simulation results with the experimental results. The comparison results show that a smaller cooling rate is helpful for the growth of large-size single crystals, which verifies the rationality and correctness of the model. Introduction Recently, perovskite materials have received widespread attention due to their excellent photoelectric properties [1]. A perovskite single crystal has extremely low defect density and minimal interface defects compared to polycrystalline materials [2][3][4][5]. However, the application of solution methods to perovskite single-crystal materials has strict solvent requirements [3], so it is difficult to obtain single crystals with large sizes. Largesized perovskite single crystals are of great significance for improving their optical and electrical properties [5]. Lead-free double perovskite Cs 2 AgBiBr 6 is a stable and nontoxic photoelectric material [16][17][18][19] with suboptimal photon to charge carrier conversion efficiency. Compared with other lead-free double perovskites, Cs 2 AgBiBr 6 perovskite has better stability in response to moisture, air, heat, and light [20], which is very suitable for high-energy photon detection applications [21]. So far, scholars have conducted much research on the growth of perovskite crystals by the solution method. The most common method is the solution temperature-lowering (STL) route. Su et al. [15] synthesized high-purity polycrystalline Cs 2 AgBiBr 6 powder by the solution method by using hydrobromic acid as a solvent; millimeter-sized Cs 2 AgBiBr 6 crystals were obtained. Yin et al. [22] obtained Cs 2 AgBiBr 6 single crystals with smooth surfaces and relatively high resistivity with good reproducibility by controlling cooling. Zhang et al. [23] grew Cs 2 AgBiBr 6 crystals with maximum dimensions of 10 mm and a flat plane by the solution cooling method with the addition of toluene. Zhu et al. [24] proposed an additive CH 3 COONa-controlled nucleation route toward the generalized growth of all inorganic double perovskite single crystals. A Cs 2 AgBiBr 6 single crystal with a maximum size of 13 mm was successfully grown by this method. Dang et al. [25] successfully grew centimeter-sized Cs 2 AgBiBr 6 single crystals by the TSSG method, using MABr as the flux in the mother solution. The above methods are all experimental studies that do not fully reveal the growth mechanism of Cs 2 AgBiBr 6 perovskite crystals. The numerical simulation method can reproduce the evolution process of crystal growth from the perspective of the full space and time domain and provide process guidance for crystal preparation. For example, Zhang et al. 
[26][27][28] studied the directional growth of polycrystalline silicon by numerical simulation based on the cellular automaton (CA) method and studied the growth mechanism of facet silicon crystals in detail. Chen et al. [29], based on the phase field (PF) method, studied the growth of arbitrary symmetry facet dendrites. Although the driving force for the growth of silicon crystals is temperature and that of Cs 2 AgBiBr 6 perovskite crystals is solution supersaturation, both of them are facet crystals, so the same numerical method can be used to reveal the growth mechanism of Cs 2 AgBiBr 6 perovskite. In this paper, the solution temperature-lowering growth process of Cs 2 AgBiBr 6 single crystals is simulated by the CA-LBM method and compared with the experiment to verify the correctness of the model. Then, in order to guide the preparation of larger-size single crystals, the CA-LBM method is used to simulate the growth behavior of single crystals at different cooling rates. In this paper, the experimental and simulation results are respectively compared qualitatively and quantitatively to further verify the model. Facet Growth Procedure: The concentration was calculated from the solution volume and the mass of the solute evaporated from the sample. First, the precursor of a target concentration (0.1 M, M is the concentration unit: mol/L) was prepared by CsBr, AgBr, BiBr 3 , and 3 mL HBr acid (CsBr:AgBr:BiBr 3 = 2:1:1) in a 5 mL glass bottle. Every point was averaged from three samples for a more accurate result. The bottle was sealed by a silicone plug and silicone grease and heated at 120 • C in an oil bath. Second, after the solute fully dissolved, the temperature was lowered at a rate of 0.5 • C·h −1 to 60 • C to avoid excessive nucleation and crystal growth. Then, the crystals were obtained by pouring out the residual solution. Controlled Cooling Rate Growth Procedure: The Cs 2 AgBiBr 6 crystals were grown by the method of controlling the cooling rate. First, 3 mL of the respective precursor solution with the same concentration (0.1 M) was prepared in four 5 mL glass bottles. The preparation method of the precursor solution followed the Facet Growth Procedure. Second, the sealed glass bottles were heated at 120 • C in an oil bath to fully dissolve the solute and then cooled to 60 • C at the cooling rates of 2 • C·h −1 , 1.5 • C·h −1 , 1 • C·h −1 , and 0.5 • C·h −1 , respectively. Then, the crystals were obtained by pouring out the residual solution. Numerical Model of Cs 2 AgBiBr 6 Single-Crystal Growth Based on the CA Method A three-dimensional CA-LBM coupling model with interface energy anisotropy was established to simulate Cs 2 AgBiBr 6 single-crystal growth. The solute field was calculated by the LBM method, the solid-liquid interface advancing process was calculated by the CA method, and the CA method and LBM method were coupled according to the solute content decreased by the solid-state rate growth of Cs 2 AgBiBr 6 crystals. The experimental results were compared with the simulation results to verify the rationality and correctness of the model. LBM Model The D3Q15 model [28,30], with the single relaxation time lattice Bhatnagar-Gross-Krook (LBGK) method, was used to calculate the solute field. where r, t, ∆t, and τ c represent location, time, time step, and the relaxation time of the solute field, respectively; g i is the distribution function of particle solute content; and g i eq is the equilibrium distribution function of particle solute content. 
G_i is the solute sink term, which represents the solute content consumed due to crystal growth. ρ_s and M represent the solid density and molar mass, respectively, of Cs2AgBiBr6 perovskite. (C_max − C_min) represents the difference between the initial content and the end content, which is used to nondimensionalize the solid density.

CA Model
In this paper, the three-dimensional CA model was used to simulate the growth of perovskite single crystals. Due to the slow growth rate of the Cs2AgBiBr6 single crystals and the limitations of computation capacity, it was difficult to simulate the growth of the perovskite crystals at the macro scale. In order to reduce the calculation time as much as possible while ensuring calculation accuracy, 61 × 61 × 61 cube cells were selected in the calculation area, and the cell size was 1 µm. The following is a three-dimensional model to simulate the nucleation, growth, and capture of Cs2AgBiBr6 single crystals.

Model for the Nucleation
The continuous nucleation model based on a Gaussian distribution is used in the nucleation model [31]. Here, S is the supersaturation, and dn/dS is the nucleation distribution function, which takes the Gaussian form dn/dS = [n_max/(√(2π)·S_σ)]·exp[−(S − S_mea)²/(2S_σ²)], where n_max, S_σ, and S_mea represent the maximum nucleation density, the standard deviation of the nucleation supersaturation, and the average nucleation supersaturation, respectively. These three parameters are determined by the experimental conditions. For nucleation on the wall of a three-dimensional container, the nucleation probability in a time step can be expressed as P = δn·V_C, where δn is the increase in nucleation density in one time step, and V_C is the volume of a cell. When the supersaturation of a cell is higher than the critical supersaturation of nucleation, and the probability of nucleation is greater than a random number between 0 and 1, the cell nucleates. The nucleation position is randomly selected within the cell space that satisfies the basic conditions of nucleation.
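As a concrete illustration of the nucleation step just described, the following Python sketch draws the expected number of new nuclei in a cell during one time step from the Gaussian distribution and compares the resulting probability with a uniform random number. It is a simplified reading of the model above, and the parameter values are illustrative placeholders rather than the experimentally fitted ones.

# Sketch of the stochastic nucleation test for one CA cell (illustrative values).
import math, random

n_max = 1.0e12            # maximum nucleation density, nuclei per m^3 (placeholder)
S_sigma = 0.02            # standard deviation of nucleation supersaturation (placeholder)
S_mean = 0.08             # average nucleation supersaturation (placeholder)
V_cell = (1.0e-6) ** 3    # cell volume for a 1 micrometre cubic cell, m^3

def nucleation_density_increment(S, dS):
    # Gaussian continuous-nucleation distribution dn/dS, integrated over the
    # small supersaturation increment dS gained during one time step.
    dn_dS = n_max / (math.sqrt(2.0 * math.pi) * S_sigma) * \
            math.exp(-(S - S_mean) ** 2 / (2.0 * S_sigma ** 2))
    return dn_dS * dS

def cell_nucleates(S, dS):
    # Nucleation probability in one time step: P = delta_n * V_cell,
    # accepted against a uniform random number in [0, 1).
    p = nucleation_density_increment(S, dS) * V_cell
    return random.random() < p

print(cell_nucleates(S=0.09, dS=0.001))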
Crystal Growth Model
In the process of crystal growth, the transition from an interface cell to a solid cell is determined by the growth of the solid fraction. For solution growth, without considering the evaporation of the solution, the solid fraction increment can be expressed by ∆f_s = g·S·M/(1000·ρ_s) (6), where S is the supersaturation of the solution, and M and ρ_s represent the molar mass and solid density of Cs2AgBiBr6, respectively. In order to maintain a single-layer solid-liquid interface during solidification and growth, the geometric parameter g related to the state of the adjacent grid is introduced [28]. Here, ζχ1 is the state parameter of the nearest six grid cells, and ζχ2 is the state parameter of the next nearest 12 grid cells. In this model, the driving force of growth is the supersaturation (S) of the solution, which is the difference between the solution concentration (C) and the solubility (C_sat) at a certain temperature. The nucleation and growth concentrations are two key factors controlling crystal growth. The temperature-concentration distribution curve of Cs2AgBiBr6 in HBr can be divided into three zones by the solubility curve and supersolubility curve [22], as shown in Figure 1.

In Figure 1, nucleation can occur when the concentration is in the nucleation zone. When the concentration is in the growth zone, the crystal can grow stably without nucleation. Therefore, in order to grow larger single crystals, the concentration should be controlled within the growth zone as much as possible. By fitting the above two curves, the solubility and supersolubility of Cs2AgBiBr6 can be expressed as polynomials of temperature, as in Equations (10) and (11), respectively. Here, C_sat is the solubility, and C_ssat is the supersolubility. T_t is the equivalent temperature after considering the influence of interface energy anisotropy and the curvature of the interface, which can be calculated from Equation (12) as T_t = T − Γ·wmc, where T is the liquidus temperature, Γ is the Gibbs-Thomson coefficient, and wmc is the weighted mean curvature. For cubic crystals with fourfold anisotropy, wmc can be calculated from Equation (13) [32], where ε is the degree of anisotropy of the surface energy, and n_x = ∂_x f_s/|∇f_s|, n_y = ∂_y f_s/|∇f_s|, and n_z = ∂_z f_s/|∇f_s| are the components of the interface normal.

Model for the Capture
During the CA simulation, the transition of the cell state from liquid to interface is governed by the capture rule. Since the change of cell state has a great influence on the subsequent growth process, it is particularly important to choose an appropriate capture rule. For the three-dimensional CA model, the number of adjacent cells is large, so the 3D capture rule is more complex than the 2D capture rule. The traditional capture rules are Von Neumann's and Moore's rules, but either rule inevitably leads to artificial anisotropy. Therefore, based on Von Neumann's rule, this paper introduces the geometric parameter g [28] to reduce the artificial anisotropy, and a three-dimensional low-anisotropy capture model is established.
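To make the supersaturation-driven growth step concrete, here is a minimal Python sketch of how an interface cell's solid fraction could be advanced in this kind of CA model: the local supersaturation is obtained from the solute concentration and a temperature-dependent solubility, optionally corrected for curvature through the Gibbs-Thomson term, and the solid fraction is incremented according to Equation (6). The solubility function and the numerical values below are placeholders, not the fitted polynomials of Equations (10) and (11) or the paper's actual parameters.

# Simplified CA growth update for one interface cell (illustrative values only).
M = 1062.1        # approximate molar mass of Cs2AgBiBr6, g/mol
rho_s = 4.7       # solid density, g/cm^3 (placeholder)
gamma_gt = 1.0e-7 # Gibbs-Thomson coefficient (placeholder)

def solubility(T):
    # Placeholder for the fitted solubility polynomial C_sat(T), mol/L.
    return 0.02 + 8.0e-4 * (T - 60.0)

def grow_interface_cell(C, T, f_s, g_geom, wmc=0.0):
    # Equivalent temperature with curvature correction, T_t = T - Gamma * wmc.
    T_t = T - gamma_gt * wmc
    # Supersaturation is the driving force: S = C - C_sat(T_t).
    S = C - solubility(T_t)
    if S <= 0.0:
        return f_s, 0.0                    # undersaturated: no growth this step
    # Solid fraction increment, Eq. (6): df_s = g * S * M / (1000 * rho_s).
    df_s = g_geom * S * M / (1000.0 * rho_s)
    f_s = min(1.0, f_s + df_s)             # the cell becomes solid once f_s reaches 1
    solute_consumed = df_s                 # fed back to the solute field as the sink term
    return f_s, solute_consumed

print(grow_interface_cell(C=0.08, T=90.0, f_s=0.3, g_geom=0.7))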
The Physical Parameters of Cs2AgBiBr6 Precursor Solution
The physical parameters used in the present computations are listed in Table 1.

Table 1. Physical properties and calculation parameters used in the present model.

Simulation and Verification of Facet Growth
In order to reveal the growth rule of Cs2AgBiBr6, a Cs2AgBiBr6 facet growth morphology is simulated first. Since both the simulation and the experiment adopt the all-area synchronous cooling method, 61 × 61 × 61 cubic cells with the same cooling conditions are selected in the calculation area, and the cell size is 1 µm. The initial simulation conditions are set as central point nucleation, and the all-area temperature is cooled by 0.5 °C every 1800 time steps. The physical parameters used in the calculation are listed in Table 1, which contains both the initial temperature and the initial concentration. Crystal growth is calculated by the CA model driven by solute supersaturation, and the temperature-solubility curve used is shown in Figure 1. The calculation results when the time steps are 100,000, 140,000 and 180,000 are shown in Figure 2a-c, respectively. The real time after conversion is 55.5 h, 77.7 h, and 100 h, respectively. The upper part of each drawing shows the 3D simulation result, and the lower part shows the solute distribution and the solid-liquid interface of the Y-Z section corresponding to the 3D drawing.

The facet morphology cannot be seen clearly in Figure 2a because it is in the early stage of growth after nucleation. The corresponding solute cross-section figure shows that the solute distribution is quite uniform, and the anisotropy is not particularly distinct. Figure 2b shows that the crystal is in an octahedral shape, and the morphology of facet growth appeared after a period of growth. The corresponding solute cross-section in Figure 2e shows that, under the influence of anisotropy and the geometric parameter g, growth in the <111> direction (45° direction) is restrained, and crystal growth occurs along the <100> direction (axial direction). The solute in the <100> direction is poorer than that in the <111> direction, resulting in faster growth in the <111> direction, thus maintaining the facet growth morphology. It can be seen from Figure 2c that the crystal is characterized by fourfold anisotropic symmetry and continues to grow in an octahedral shape.
The analysis of the above simulation results reveals the growth mechanism of Cs2AgBiBr6: the <100> and <111> directions restrict each other during the growth process due to the influence of solute content, forming alternating growth behaviors of the <100> and <111> directions, and finally forming a typical octahedral morphology.

The concentration of solute has a great influence on the process of grain growth. As shown in Figure 3, the grain growth is divided into two stages, namely nucleation and growth. The concentration curve in Figure 3 does not change, and the grain length is 0, before 42 h (about 95 °C). After 42 h, nucleation occurs and the growth stage begins, and the solute concentration decreases with the increase in grain length. It is worth noting that the solute consumption rate has an obvious acceleration process and then gradually slows down. This is due to the increase in solute capture rate at the solid-liquid interface during the cooling growth process; with the gradual consumption of solute, the supersaturation of the solution decreases and is no longer sufficient to maintain rapid growth. When the cooling is stopped, the supersaturation tends to 0, and the solution concentration tends to a certain value.
It can be seen from the above analysis that the concentration curve in Figure 3 shows a trend of constant-gradient decline that tends to become stable. This trend can be explained by comparison with Figure 1. At the beginning stage of cooling growth, the solute concentration is lower than the solubility curve and supersolubility curve, indicating that the concentration is in the dissolution zone. During this period, neither nucleation nor growth can occur. As the temperature gradually lowers, the solute concentration enters the nucleation zone, where nucleation occurs and consumes part of the solute, and then enters the growth zone. In the subsequent cooling process, it alternately enters the nucleation zone and growth zone, so the concentration curve in Figure 3 shows a gradient declining trend. In the final stage of growth, the solute concentration is close to the solubility, and the growth stops and reaches a stable state, as seen in Figure 3. The above analysis shows that the concentration curve in Figure 3 is consistent with the change trend of the temperature-solubility curve in Figure 1, which verifies the correctness of the CA model driven by supersaturation in this paper.

The grain length curve in Figure 3 does not increase linearly but has some steps. This is because the supersaturation of the solution is consumed after a certain period of growth at a certain temperature, resulting in temporarily slow growth.
Therefore, a stepped curve is formed. Analysis of the above simulation results shows that the grain will continue to grow with an octahedral morphology. Therefore, in this paper, the simulation results can be approximately compared with the experimental results to verify the rationality and correctness of the model. Figure 4 shows the comparison between simulation results (Figure 4a,b) and experimental results (Figure 4c,d) of Cs2AgBiBr6 single-crystal growth. Figure 4c is the single-crystal morphology prepared by the author of this paper by the solution method, while Figure 4d is the single crystal prepared in Reference [16]. The comparison between Figure 4a and c show that the (111) characteristic surface of the simulated single crystal is in good agreement with the experimental results. The comparison between Figure 4b and d show that the simulation results are also in good agreement with the experimental results in Reference [16]. The two sets of results are cross-validated, which demonstrate that the model established in this paper can simulate the facet growth morphology of Cs2AgBiBr6 correctly. The surface of the single crystal in Figure 4c is comparatively rough, which is caused by the simultaneous growth of many nucleation sites, while in Figure 4a, only one nucleation site is set, so the single-crystal surface is smooth. Simulation and Verification of Controlled Cooling Growth In order to reveal the effect of solution cooling rate on the size and number of the Cs2AgBiBr6 single crystal, the grain growth process at different cooling rates is simulated. The calculation area also selects 61 × 61 × 61 cubic cells with the same cooling conditions, and the cell size is 1 µm. In order to facilitate the observation of the grain size distribution, the initial conditions are set as random point nucleation on the Z = 30 plane Figure 4c is the single-crystal morphology prepared by the author of this paper by the solution method, while Figure 4d is the single crystal prepared in Reference [16]. The comparison between Figure 4a and c show that the (111) characteristic surface of the simulated single crystal is in good agreement with the experimental results. The comparison between Figure 4b and d show that the simulation results are also in good agreement with the experimental results in Reference [16]. The two sets of results are cross-validated, which demonstrate that the model established in this paper can simulate the facet growth morphology of Cs 2 AgBiBr 6 correctly. The surface of the single crystal in Figure 4c is comparatively rough, which is caused by the simultaneous growth of many nucleation sites, while in Figure 4a, only one nucleation site is set, so the single-crystal surface is smooth. Simulation and Verification of Controlled Cooling Growth In order to reveal the effect of solution cooling rate on the size and number of the Cs 2 AgBiBr 6 single crystal, the grain growth process at different cooling rates is simulated. The calculation area also selects 61 × 61 × 61 cubic cells with the same cooling conditions, and the cell size is 1 µm. In order to facilitate the observation of the grain size distribution, the initial conditions are set as random point nucleation on the Z = 30 plane (central horizontal plane), and the all-area temperature is cooled by 2 • C, 1.5 • C, 1 • C, and 0.5 • C every 1800 time steps, respectively. The physical parameters used are listed in Table 1. The crystal growth process is also calculated by the CA model driven by the supersaturation. 
Figure 5 shows the simulation and experimental results when the cooling rate is 2 °C·h−1, 1.5 °C·h−1, 1 °C·h−1, and 0.5 °C·h−1. The upper part of each panel shows the 3D simulation results, and the lower part shows the corresponding experimental results.

From the above analysis, it can be predicted that the number of final grains will gradually decrease as the cooling rate decreases, while the maximum grain size will gradually increase. The comparison of Figure 5d,h verifies this prediction. When the cooling rate is reduced to 0.5 °C·h−1, the number of final grains is reduced to 4, and the maximum grain size increases considerably. There are also some very small grains in Figure 5d,h. This is because the number of initial nuclei is small, so the solute mass consumed by grain growth is insufficient to deplete the supersaturated solute added by cooling, and the excess solute later precipitates in the form of a few additional grains. However, the solute content at that point can no longer sustain rapid growth, so the size of these later-precipitated grains is much smaller than that of the initial grains.
The qualitative comparison between the above simulation results and the experimental results verifies the rationality of the model established in this paper. To further verify the correctness of the model, Figure 6 shows the quantitative comparison between the simulation results and the experimental results. Owing to the limited amount of calculation, the volume of the calculation area is smaller than the volume of the actual growth solution, so the quantitative comparison cannot be presented clearly using the maximum grain size directly. Therefore, a dimensionless treatment is adopted: the grain size corresponding to each cooling rate is divided by the maximum grain size corresponding to 0.5 °C·h−1 to obtain the dimensionless grain size (grain size ratio) for the different cooling rates.

It can be seen intuitively from Figure 6a,b that as the cooling rate increases, the number of grains gradually increases, while the maximum grain size shows a decreasing trend. The two curves in Figure 6a are in good agreement, with a slight deviation at larger cooling rates. This is because a random nucleation model is adopted in this paper: the number of nuclei varies within a certain range and the nucleation positions are random, which is more in line with real grain growth. In Figure 6b, when the cooling rate is larger, the simulation results are lower than the experimental results. This is because the simulation produces more grains than the experiment at larger cooling rates, resulting in smaller simulated grain sizes than the experimental ones. The quantitative comparison in Figure 6a,b shows that the simulation results are in good agreement with the experimental results, which verifies the correctness of the model established in this paper.
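As a small worked example of the normalization described above, the grain size ratio is simply each cooling rate's maximum grain size divided by the value obtained at 0.5 °C·h−1; the numbers below are invented placeholders, not the measured or simulated data behind Figure 6.

```python
# Dimensionless treatment: divide the maximum grain size at each cooling rate by
# the maximum grain size obtained at 0.5 deg C/h. Values are hypothetical.
cooling_rates = [2.0, 1.5, 1.0, 0.5]                               # deg C per hour
max_size_sim = {2.0: 120.0, 1.5: 160.0, 1.0: 230.0, 0.5: 380.0}    # e.g. um, placeholder
max_size_exp = {2.0: 1.1,   1.5: 1.6,   1.0: 2.3,  0.5: 3.9}       # e.g. mm, placeholder

ratio_sim = {r: max_size_sim[r] / max_size_sim[0.5] for r in cooling_rates}
ratio_exp = {r: max_size_exp[r] / max_size_exp[0.5] for r in cooling_rates}

for r in cooling_rates:
    print(f"{r:>4} degC/h  simulated ratio {ratio_sim[r]:.2f}  experimental ratio {ratio_exp[r]:.2f}")
```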
Conclusions

In this paper, a three-dimensional CA-LBM coupling model is established to simulate the facet growth process and controlled cooling growth process of Cs2AgBiBr6 perovskite single crystals. The CA model takes the supersaturation of solute as the driving force and considers the influence of interface energy anisotropy on the morphology of the liquid-solid interface. The geometric parameter g is introduced to reduce the influence of grid anisotropy. The qualitative and quantitative comparison between the simulation results and the experimental results verified the rationality and correctness of the model, indicating that the model established in this paper can reproduce the solution growth process of the Cs2AgBiBr6 perovskite single crystal well and can guide the preparation of larger-size Cs2AgBiBr6 single crystals.
2021-10-21T15:18:16.138Z
2021-09-10T00:00:00.000
{ "year": 2021, "sha1": "dbc141c57c2fdb40d2898fd7f0c9fbe449fe2e39", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4352/11/9/1101/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "e78c88af2547029536240bda7533dc1e18f2d0f5", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
14097454
pes2o/s2orc
v3-fos-license
Hippocampus-dependent learning influences hippocampal neurogenesis The structure of the mammalian hippocampus continues to be modified throughout life by continuous addition of neurons in the dentate gyrus. Although the existence of adult neurogenesis is now widely accepted the function that adult generated granule cells play is a topic of intense debate. Many studies have argued that adult generated neurons, due to unique physiological characteristics, play a unique role in hippocampus-dependent learning and memory. However, it is not currently clear whether this is the case or what specific capability adult generated neurons may confer that developmentally generated neurons do not. These questions have been addressed in numerous ways, from examining the effects of increasing or decreasing neurogenesis to computational modeling. One particular area of research has examined the effects of hippocampus dependent learning on proliferation, survival, integration and activation of immature neurons in response to memory retrieval. Within this subfield there remains a range of data showing that hippocampus dependent learning may increase, decrease or alternatively may not alter these components of neurogenesis in the hippocampus. Determining how and when hippocampus-dependent learning alters adult neurogenesis will help to further clarify the role of adult generated neurons. There are many variables (such as age of immature neurons, species, strain, sex, stress, task difficulty, and type of learning) as well as numerous methodological differences (such as marker type, quantification techniques, apparatus size etc.) that could all be crucial for a clear understanding of the interaction between learning and neurogenesis. Here, we review these findings and discuss the different conditions under which hippocampus-dependent learning impacts adult neurogenesis in the dentate gyrus. INTRODUCTION It was previously believed that no new neurons were added to the adult mammalian brain. However, thanks to observations, in both adolescent and in middle aged rats, made by Joseph Altman in the 1960s (Altman and Das, 1965) it was recognized that certain areas of the adult brain, the subventricular zone and the subgranular zone of the hippocampus, continue to produce new neurons throughout life. Now, adult neurogenesis in these areas has been observed in all mammalian species examined including non-human primates and humans (however, see Amrein et al., 2007 for a possible exception in some bat species). Although adult neurogenesis has been seen in other areas of the brain (Gould et al., 1999b;Gould, 2007;Cameron and Dayer, 2008) it is still controversial and it occurs at a relatively low rate compared to neurogenesis in the hippocampus. This review will concentrate on adult neurogenesis in the hippocampus which is now widely accepted. We will use the term "mature neuron" to refer to granule cells from both developmental and adult origin that no longer possess the characteristics of immature neurons and "immature neuron" to refer to adult generated neurons that have not yet completed their developmental process. Immature neurons can be identified with a variety of labeling strategies (Figure 1). For example endogenous proteins such as doublecortin can be labeled using immunohistochemical techniques. Doublecortin is a protein expressed in immature neurons from the time of cell division until approximately 21 days of age (Brown et al., 2003;Couillard-Despres et al., 2005). 
Doublecortin expression gives a broad measure of the age of immature neurons, but a more precise age can be determined by administering the DNA synthesis marker Bromodeoxyuridine (BrdU). BrdU is incorporated into cells that are in S-phase but is only biologically active for approximately 2 h (Packard et al., 1973; Nowakowski et al., 1989), so it is incorporated into dividing cells only during that time window. Once labeled, BrdU remains incorporated in cells, and the number of surviving immature neurons of a particular age can be measured at different times after BrdU administration. The function of adult neurogenesis in the hippocampus remains a matter of debate. It is possible that adult neurogenesis is merely a developmental byproduct and serves no special function in the adult brain. According to this view, the adult generated neurons serve the same functions as developmentally generated neurons. Others believe that adult neurogenesis is an important mechanism of plasticity in the adult brain and may be related to learning and memory or even emotional or stress regulation (Jacobs et al., 2000). Various specific mnemonic functions have been proposed to fall within the special domain of adult generated neurons, including a mechanism for encoding time (Aimone et al., 2006) or pattern separation (Clelland et al., 2009). Still another theory has proposed that adult generated neurons resupply the active pool of neurons in the dentate gyrus while the more mature neurons no longer function (Lisman, 2011). Regardless of what the function may be, it is important to note that adult generated neurons are in fact functional (van Praag et al., 2002). Once mature, adult generated neurons exhibit electrophysiological and morphological properties that are practically indistinguishable from developmentally generated neurons. In mice, this maturation process is complete by 4 months of age but possibly as early as 7 weeks (van Praag et al., 2002; Laplagne et al., 2006), although it is important to note that the timing of maturation of immature neurons in the hippocampus is faster in rats than in mice (Snyder et al., 2009a) and is likely different in other species as well. Adult generated immature neurons do differ from mature neurons in terms of morphological and electrophysiological properties. Beginning as early as 4-10 days after cell division in rats and 10-11 days in mice, immature neurons extend axons into CA3 and dendrites into the molecular layer (Markakis and Gage, 1999; Zhao et al., 2006). In mice, the growth of these projections and the subsequent formation of synapses continue over a period of several weeks, culminating with adult generated neurons that have the same soma size as mature granule cells by 4 months (van Praag et al., 2002; Esposito et al., 2005; Zhao et al., 2006). Initially, for a period of 3-4 weeks in mice and rats, these immature neurons are highly excitable. The difference in excitability between immature and mature cells in the adult brain recapitulates a phenomenon that occurs during early brain development. During development, the inhibitory transmitter GABA does not exert inhibitory control (Wang et al., 2000). Immature adult generated neurons are also insensitive to inhibition by GABA.
In fact, there is evidence that GABA can depolarize immature neurons due to the presence of high levels of the chloride transporter NKCC1, which causes a high internal chloride concentration (Ben-Ari, 2002; Ge et al., 2006). As the cells mature there is a switch in expression from NKCC1 to the chloride exporter KCC2, which causes a decrease in internal chloride concentration, and the effect of GABA on the cell becomes hyperpolarizing. Thus, for a period of time immature neurons are highly excitable compared to mature neurons and as a result may confer a degree of excitability to a region that is otherwise relatively silent. Long-term potentiation (LTP) is a putative mechanism of associative learning. A specific type of this plasticity, described by Snyder and colleagues (2001), can be induced in hippocampal slices in the absence of GABAergic inhibition. They determined that the unique excitability of immature neurons was responsible for LTP induced without GABAergic inhibition because either blocking the NR2B subunit of the NMDA receptor (expressed highly during development) or using irradiation to block neurogenesis prevented the expression of LTP (Snyder et al., 2001). Thus, while mature neurons may not respond to weak stimulation, immature neurons in the dentate gyrus are not under the same type of inhibition and are more likely to be excited. There is further evidence that immature neurons may be preferentially recruited for the storage of hippocampus-dependent learning. A study using the immediate early gene products c-fos and Arc has demonstrated that immature neurons in male mice between 4 and 8 weeks of age are activated in response to spatial memory retrieval (Kee et al., 2007). Similarly, new neurons were activated when spatial training occurred 5 weeks following BrdU injection in male mice (Stone et al., 2011). However, Stone and colleagues (2011) found that if mice were trained 1 week after 5 days of BrdU injections and examined at 5 weeks, cells were less likely to be activated during memory retrieval, suggesting that one week old neurons are not preferentially incorporated into the spatial memory trace, and that, similar to cell survival, cell activation is also dependent on the age at which learning occurs. A recent study demonstrated that optogenetic silencing of 4 week old newborn neurons in female mice impaired spatial and contextual memory retrieval, suggesting that immature neurons of this age are involved in memory retention (Gu et al., 2012). In rats, immature neurons appear to become involved in spatial memory at an earlier time point, as early as 15-20 days of age (Epp et al., 2011a; Snyder et al., 2012). Together, these findings suggest that adult neurogenesis is an important component of hippocampus-dependent learning, with different timelines in rats and mice. Numerous studies have investigated the role of immature neurons in learning and memory by experimentally manipulating levels of neurogenesis. Neurogenesis can be ablated or increased by various techniques prior to learning or memory retrieval in order to examine the impact of adult generated neurons (van Praag et al., 1999; Malberg et al., 2000; Shors et al., 2002; Snyder et al., 2005; Kitamura et al., 2009; Singer et al., 2009). This methodology has been used extensively and has produced a great deal of evidence for the role of adult neurogenesis. We will not discuss these studies here, but they have been reviewed elsewhere (Wojtowicz, 2006; Wojtowicz et al., 2008; Deng et al., 2010).
Instead, here we will review the existing data on the regulation of neurogenesis in the dentate gyrus by hippocampus-dependent learning and the factors that are known to regulate this relationship.

SPATIAL LEARNING MODIFIES SURVIVAL OF IMMATURE NEURONS IN THE HIPPOCAMPUS

The first demonstration that neurogenesis responded to learning came from Elizabeth Gould and colleagues (Gould et al., 1999a). In this study, BrdU was given to label dividing cells one week prior to training. Male rats trained in the Morris water maze, a spatial learning task that depends on the hippocampus, had a greater number of BrdU-labeled cells in the dentate gyrus. In the young adult rat, the rate of cell proliferation is very high relative to the number of immature neurons that survive to maturity. Many of the immature cells die during the first 1-2 weeks (Cameron et al., 1993). However, as Gould and colleagues (1999a) demonstrated, hippocampus-dependent learning was able to rescue these cells and promote their long-term survival and incorporation into the dentate gyrus. This initial study provided compelling evidence of an interaction between learning and adult neurogenesis and supported the possibility of a functional role for adult generated neurons. This result has been supported by a number of studies that also investigated the effects of spatial learning on the survival of immature cells (Ambrogini et al., 2000; Hairston et al., 2005; Epp et al., 2007, 2010, 2011b). However, some studies have produced contradictory findings, either showing that spatial learning decreased the survival of immature neurons (Dobrossy et al., 2003; Ambrogini et al., 2004; Mohapel et al., 2006; Epp et al., 2011a), or that spatial learning had no effect on cell survival (Ehninger and Kempermann, 2006; Mohapel et al., 2006; Van der Borght et al., 2006). The lack of consistent outcomes from the studies described above strongly suggested that although spatial learning can positively influence cell survival, the effect is not a universal one. There must be certain conditions under which cell survival is enhanced and certain conditions under which cell survival is decreased or is not affected. An examination of these studies turns up a variety of methodological factors that could potentially explain the different outcomes, including the age of immature neurons at the time of exposure to spatial learning, species/strain differences, sex differences, and the strength/difficulty of the training protocol (Epp et al., 2007, 2011b; Chow et al., in press). Indeed, we now know that most of these factors influence the effect of spatial learning on cell survival, and these are reviewed here.

FACTORS THAT REGULATE THE EFFECTS OF SPATIAL LEARNING ON CELL SURVIVAL: AGE OF IMMATURE NEURONS ON EXPOSURE TO SPATIAL TRAINING

One of the key differences between many of the studies that showed different effects of learning on cell survival was the time course of the experiment or the age of the immature neurons being examined at the time of learning. Gould and colleagues trained their rats on days 7-10 following BrdU injection (day 0), and this scenario led to an increase in cell survival (Gould et al., 1999a). On the other hand, Ambrogini et al. (2004) trained their rats on days 10-14 after BrdU injection and found survival of this population of cells to be decreased.
Dupret and colleagues (2007) showed that spatial learning increased cell survival when training occurred 7-12 days after BrdU injection, but at the same time, decreased the survival of cells that were 3 days old at the start of training. Specifically, Dupret and colleagues show that it is the late phase of learning that induces death of 7-9 day old neurons, presumably those that have not received stimulation during training. Taken together, these studies show that the timing of spatial training relative to cell birth is important in determining cell survival. Furthermore, the effect of learning on adult neurogenesis is to selectively stabilize a group of neurons while removing and replacing unused new neurons. A possible interpretation of this is that a critical period exists during the development of immature neurons. This was demonstrated to be true in a study that systematically explored the effects of spatial learning on cell survival during three periods of immature neuron development in the rat (Epp et al., 2007). In this study, rats were trained in the Morris water maze on days 1-5, 6-10, or 11-15 following BrdU administration (day 0) and perfused on day 16. The results showed that cell survival was enhanced only when training occurred during days 6-10 following BrdU injection indicating that this intermediate time period appears to be a critical window during which spatial learning can modulate the survival of immature neurons. This time also corresponds, in rats, to the period when new axons have just reached and are beginning to form connections to area CA3 (Stanfield and Trice, 1988;Hastings and Gould, 1999;Markakis and Gage, 1999). Thus, it is plausible that in order for activity dependent enhancement of cell survival to occur the learning must occur around the time that immature neurons are connecting into the existing circuitry. Based on the idea of critical periods to influence survival of immature neurons after exposure to spatial learning and the results of the (Ambrogini et al., 2004) study, we predicted that training on days 11-15 should have resulted in a decrease in cell survival but no significant change in cell survival was observed in our study (Epp et al., 2007). A key difference between the Epp et al. (2007) and the Ambrogini et al. (2004) study was the timing of perfusion AFTER spatial training. Epp et al. (2007) perfused rats 24 h after training on days 11-15 after BrdU injection while Ambrogini et al. (2004) waited 3 days after training to perfuse the rats. In a follow up study we trained rats on days 11-15 and perfused them 5 days following training, or on day 20 following BrdU administration (Epp et al., 2011a). Confirming their results, we showed that immature neuron survival was decreased by spatial learning using this paradigm which more closely conformed to the original Ambrogini study (2004). These results suggest that late training occurring after the critical (6-10) window may decrease neuron survival possibly due to competitive integration of the 6-10 day old neurons. Although the population of cells being examined was approximately 11 days old at the start of training there was also an un-labeled population of cells that were 6-10 days old, the survival of which was likely increased by spatial learning. The 11-15 day old population may lose the competition because they fall outside the critical age and are therefore gradually removed. The loss of these older cells may not be evident immediately after training but may be detected a few days later. 
This hypothesis fits nicely with a study which demonstrated that survival of immature neurons is dependent on activation of the immature cells and that there is a competitive process that occurs among cells (Tashiro et al., 2006). Furthermore, they showed that the death of cells that do not receive NMDA-receptor activation occurred at about 18 days, similar to the spatial learning studies (Ambrogini et al., 2004; Epp et al., 2011a). These studies further suggest that there is a critical time window for spatial learning to increase cell survival 6-10 days after birth, but also show that there is another time window, 11-15 days after birth, during which spatial learning can decrease cell survival (Figure 2). The population of cells that are rescued by spatial learning is also activated later on by spatial memory retrieval, suggesting that these immature neurons are part of the memory trace (Figure 3). If rats were trained in the Morris water maze 11-15 days after BrdU injection and given a probe trial on day 20, there was a significant increase in the percentage of BrdU-labeled immature neurons that were co-labeled with c-fos. Furthermore, the co-expression of BrdU and c-fos correlated positively with the strength of the spatial memory. Several other studies have also shown that spatial learning increases the activation of immature neurons (Snyder et al., 2009b; Epp et al., 2011a; Chow et al., in press). There are regional differences in activation of immature neurons within the hippocampus. Immature neurons in the ventral dentate gyrus, specifically in the suprapyramidal blade, are activated more readily by spatial learning (Snyder et al., 2009b), although we have also shown that immature neurons in the dorsal dentate gyrus are more activated in response to spatial memory retrieval when using a different training paradigm (Chow et al., in press). Recently, it has also been demonstrated in rats that immature neurons in the septal pole of the dentate gyrus become activated by stimulation at a younger age than immature neurons in the temporal pole (Snyder et al., 2012). These studies demonstrate the importance of segmenting data across different regions of the dentate gyrus in order to observe more specific changes in cell survival and activation.

FIGURE 2 | Critical periods for spatial learning induced changes in immature cell survival in the dentate gyrus. Spatial learning does not impact the survival of immature neurons that are 1-5 days old at the time of learning. Survival of immature neurons that are 6-10 days old during training is selectively enhanced [although this can depend on task difficulty (Epp and Galea, 2009) and quality of learning (Epp et al., 2007; Sisti et al., 2007)]. Survival of immature neurons that are 15-20 days old at the time of learning is decreased. This effect cannot be detected if animals are perfused the day following training but can be observed if histological examination is delayed until day 20, following a probe trial 90 min before perfusion. Collected from findings from Epp et al. (2007 and 2011a). Described changes in neurogenesis are in comparison to rats trained on a cued version of the task.

FACTORS THAT REGULATE THE EFFECTS OF SPATIAL LEARNING ON CELL SURVIVAL: TASK DIFFERENCES/DIFFICULTY

In addition to spatial learning, training on other hippocampus-dependent tasks also enhances the survival of immature neurons.
A number of studies have shown that trace eyeblink conditioning can enhance cell survival, at least under certain conditions (Gould et al., 1999a; Shors et al., 2002; Leuner et al., 2006). Tracey Shors and colleagues have shown that the rate of acquisition of the trace eyeblink task is critical for enhancing survival. Faster acquisition was related to increased cell survival, while slower acquisition did not result in a significant increase in cell survival (Waddell and Shors, 2008).

FIGURE 3 | Time course of activation of immature neurons in response to spatial memory. Spatial learning occurred either 1-5, 6-10, or 11-15 days following BrdU administration. The rats were then tested with a probe trial 5 days later and were then perfused 2 h later. No activation was seen in 10-day-old neurons. Rats trained on the spatial version of the task on days 6-10 had a small percentage of 15 day old neurons activated, but no difference existed between rats that received spatial versus non-spatial training (Epp et al., 2011a). Rats trained on the spatial version of the task on days 6-10 (Chow et al., in press) or 11-15 showed enhanced activation of 20 day old neurons compared to rats that were trained on the non-spatial version of the task (Epp et al., 2011a). N/A, no activation; IEG, immediate early gene. Described changes in activation are in comparison to rats trained on a cued version of the task.

Importantly, the increase in cell survival following trace eyeblink conditioning appears to occur during the same critical period as during spatial learning. Anderson and colleagues treated rats with BrdU either 30 min, 1 week or 3 weeks before trace conditioning. An increase in cell survival was found only at the 1 week time point (Anderson et al., 2011). In contrast with our spatial learning studies (Epp et al., 2007), cell survival was decreased when BrdU was administered just prior to learning. Another hippocampus-dependent task, social transmission of food preference, also increases the survival of immature cells that are 1 week old at the time of learning (Olariu et al., 2005). However, survival was only enhanced after a single, but not multiple, training trials. In addition to the type of task used, variables that alter the difficulty of a given task can also change how learning influences neurogenesis. In rats trained in the Morris water maze within the 6-10 day time window, with four trials per session and ample distal cues in the environment, cell survival is increased. However, when the number of trials was reduced to 2 per day, cell survival was no longer enhanced. This procedure slowed learning due to an increase in task difficulty and/or changed the demands of the task such that it may have become dependent on other brain regions. Furthermore, when training took place in an environment with few distal cues, learning became more difficult to achieve and the survival of 6-10 day old cells was decreased. In addition, a more difficult spatial working memory task appears to decrease neurogenesis in comparison to the more standard reference memory version of the Morris water maze (Xu et al., 2011). Trace eyeblink conditioning also has different effects on cell survival as a result of different task demands. Tracey Shors and colleagues demonstrated that spaced trials produce stronger memory and a greater increase in cell survival compared to massed training for trace conditioning (Sisti et al., 2007).
This could be a result of task difficulty, type of training or a result of the quality of learning. In a subsequent study they also demonstrated that interfering with learning in order to slow the rate of acquisition is associated with a greater enhancement in cell survival but only in good and not poor learners in trace conditioning (Dalla et al., 2007;Curlik and Shors, 2011). Interestingly, we have observed that cell survival was increased in the Morris water maze in poor learners but not in good learners (Epp et al., 2007). Although there appears to be an interesting interaction between quality of learning and cell survival it is not yet clear how these factors interact, and further study is warranted as it appears that the type of task (trace conditioning or spatial learning) may interact with these factors. FACTORS THAT REGULATE THE EFFECTS OF SPATIAL LEARNING ON CELL SURVIVAL: SEX DIFFERENCES There are sex differences in cognition as well as adult hippocampal neurogenesis. For example, the most widely reported sex difference in both the human and animal literature is that males outperform females in spatial tasks (Williams et al., 1990;Galea and Kimura, 1993;Galea et al., 1996;Gron et al., 2000;Beiko et al., 2004;van Gerven et al., 2012). Optimal performance in spatial tasks, such as the Morris water maze, requires the integrity of the hippocampus (Morris et al., 1990). Interestingly, the hippocampus is activated in different extents in men and women during spatial tasks, and this sex difference is dependent on the menstrual cycle. Indeed, imaging studies show that in men, the hippocampus is more active during mental rotation (Butler et al., 2006) and spatial navigation tasks (Gron et al., 2000) compared to women. Furthermore, the menses phase alters both spatial ability and activation levels as measured using fMRI in women. During the menses phase (a period of reduced ovarian hormone levels), women performed better on the spatial rotation test, and their activation levels when performing spatial rotation tasks were more closely patterned to the male response compared to women in the midluteal phase (Hampson, 1990;Dietrich et al., 2001). Sex differences in neurogenesis levels in the hippocampus have also been reported (Galea and McEwen, 1999;Tanapat et al., 1999). Galea and McEwen found that there were sex differences in cell proliferation favoring males during the breeding season (when gonadal hormone levels are elevated), but not during the non-breeding season, in wild meadow voles, suggesting that gonadal hormones mediate the sex difference in cell proliferation. Tanapat and colleagues (1999) found that proestrous females, with elevated estradiol levels, showed greater levels of cell proliferation compared to males and non-proestrous females. Additionally, when females were injected with BrdU during proestrus, they showed significantly higher levels of cell survival compared to males and non-proestrous females for up to 14 days after injection. Interestingly, studies are more equivocal in mice, as one study did not find sex or estrous cycle differences on neurogenesis in the dentate gyrus in mice (Lagace et al., 2007) but other studies do (Ma et al., 2012;Roughton et al., 2012), perhaps due to strain differences. Therefore, gonadal hormone level, timing of BrdU injection and tissue examination, and the animal species or strain are important methodological considerations when examining sex differences in adult hippocampal neurogenesis. 
To our knowledge only two studies have directly examined how sex affects the relationship between hippocampus-dependent learning and neurogenesis, and the first study was conducted by Dalla and colleagues (2009). Using the trace eyeblink conditioning task, the authors showed that female rats learned the task faster and also showed a greater increase in cell survival compared to male rats. A second study used a task that favored learning in males, the Morris water maze, and produced the opposite pattern of results showing that male rats outperformed female rats during spatial training and subsequently showed increased cell survival compared to female rats (Chow et al., in press). In both of these studies (Dalla et al., 2009; Chow et al., in press), the sex differences in learning performance were only observed in the early phases of training. Therefore, in both of these studies, sex differences in performance during the initial acquisition stage corresponded to the direction of the sex difference in cell survival. Due to the similar levels of task mastery in both studies, as reflected by a lack of sex difference in performance toward the end of training, the relationship between learning and neurogenesis may be mediated by sex differences in learning strategy rather than learning ability. For instance, during spatial navigation tasks, males generally attend to geometric/spatial cues (e.g. relative distance between extramaze cues and the hidden platform), which is a strategy that engages the hippocampus. In contrast, females tend to focus on landmark cues, which is a more striatum-dependent strategy (Williams et al., 1990; Galea and Kimura, 1993; McDonald and White, 1994; Miranda et al., 2006). Therefore, the extent to which the hippocampus is activated during learning, as mediated via strategy choice, could influence neurogenesis. It is also possible that sex differences in sensitivity to certain aspects of a task could indirectly influence learning and neurogenesis. For instance, females, but not males, show elevated levels of the stress hormone, corticosterone, after one Morris water maze trial, an effect associated with poorer spatial performance relative to males (Beiko et al., 2004). This sex difference in performance, however, disappeared when animals were given the chance to acclimatize to the task apparatus prior to training. Therefore, it may be that alterations to task procedures that abolish the sex difference in learning performance may alter or even abolish the sex difference in neurogenesis, and would be an interesting point of investigation in future studies. Intriguingly, activation of immature 20-day old neurons (quantified by co-labeling BrdU with the IEG product zif268) in the dorsal dentate gyrus during spatial memory retrieval was positively correlated with spatial performance during training in females, but not males (Chow et al., in press). Additionally, McClure and colleagues showed that estradiol significantly increased activation of immature neurons in females relative to the control group (McClure et al., 2012). Thus it would be interesting to further investigate the sex differences in activation patterns of younger versus older neurons during spatial learning and how those differences relate to adult neurogenesis.
FACTORS THAT REGULATE THE EFFECTS OF SPATIAL LEARNING ON CELL SURVIVAL: STRATEGY DIFFERENCES

Studies in humans found that females chose to use spatial strategies at least as often as males, but were less adept in strategy execution (Galea and Kimura, 1993; van Gerven et al., 2012). Previous studies in our laboratory using a cue competition paradigm have shown that the same learning strategy can have sexually dimorphic effects on neurogenesis. In the cue competition task, rats are trained both to locate a hidden platform using spatial strategies and to locate a visible platform using cue strategies. During the probe trial, the platform is visible and moved to a new quadrant opposite the old platform location, and strategy preference is elucidated based on whether the rat swims to the new location (cue strategy preference) or to the old location (spatial strategy preference). In males, animals that favored the spatial strategy showed a reduction in cell proliferation compared to cue strategy users, while in females, the opposite was true (Epp and Galea, 2009; Rummel et al., 2010). Furthermore, studies in mice have shown that proteins that regulate neurogenesis, such as Cdk5 (Jessberger et al., 2008; Lagace et al., 2008) and the cAMP response element-binding (CREB) protein (Dworkin and Mantamadiotis, 2010), can differentially facilitate or impair the acquisition of hippocampus-dependent tasks such as the Morris water maze (Ris et al., 2005; Hebda-Bauer et al., 2007) and contextual fear conditioning (Kudo et al., 2003) in males and females. Therefore, the same type of learning paradigm may influence neurogenesis in the hippocampus through different mechanisms in males and females.

FACTORS THAT REGULATE THE EFFECTS OF SPATIAL LEARNING ON CELL SURVIVAL: STRAIN/SPECIES DIFFERENCES

The majority of the studies examining cell survival following spatial learning were conducted in rats (Gould et al., 1999a; Ambrogini et al., 2000, 2004; Epp et al., 2007, 2011a). Given the tremendous increase in the popularity of mice as a model system, it is important to consider whether this effect is similar in rats and mice. A notable exception to the common use of rats was a study conducted by Ehninger and Kempermann (2006) that used female C57Bl/6 mice. In this study, although spatial learning occurred during the 6-10 day time period, a critical window in rats, there was no change in cell survival. Therefore, it is possible either that spatial learning does not have the same effect on cell survival in mice that it does in rats, that the time period during which survival may be enhanced is different, or that, as we have shown in rats, females do not show the same increase in cell survival with spatial learning (Chow et al., in press). There is some supporting evidence that either of these theories may be true. Recently it has been demonstrated that, compared to rats, adult generated neurons in mice mature more slowly and do not appear to be as important to hippocampal function (Snyder et al., 2009a). Further, spatial training caused a greater increase in the proportion of immature neurons that expressed the immediate early gene product zif268 in rats compared to mice. Additionally, abolishing neurogenesis in the dentate gyrus with irradiation impaired fear conditioning in rats but not mice (Snyder et al., 2009a).
It should also be pointed out that exposure to a complex environment can cause a similar increase in survival of immature neurons, as shown by Tashiro and colleagues (Tashiro et al., 2007). However, the critical window in that study occurred between 2 and 3 weeks after cell division, slightly later than seen with spatial learning (1-2 weeks). It is possible that the later critical window is a result of the task used, or it may have been because mice were used. Within species, there are numerous strains of both rats and mice that are commonly used, and not all have similar neurogenic responses to spatial learning. For example, Long-Evans and Sprague-Dawley rats show similar increases in BrdU-labeled cells following spatial learning. However, when examining the maturation rate of immature neurons following spatial learning, Sprague-Dawley rats show an increased percentage of doublecortin labeled cells with a mature phenotype compared to Long-Evans rats (Epp et al., 2011b). This suggests that spatial learning had a strain-dependent effect on the rate of neuronal maturation in addition to a more generalized effect on cell survival. In addition, despite having equal levels of doublecortin-labeled neurons in untrained rats, Sprague-Dawley rats showed an increase in doublecortin following spatial learning while Long-Evans rats did not. In mice, baseline differences in neurogenesis do exist across various strains (Kempermann and Gage, 2002), and as a result it stands to reason that many strains may show different regulation of neurogenesis by learning. Although little else is known about strain differences in the response of neurogenesis to spatial learning, there is evidence that different strains respond differently to other treatments such as chronic mild stress. A recent study showed that Lewis rats, characterized by a hypoactive hypothalamus-pituitary-adrenal response, showed an increase in doublecortin labeling following chronic mild stress while Sprague-Dawley and Fischer 344 rats did not (Wu and Wang, 2010).

THE FUNCTION OF LEARNING-INDUCED ADULT NEUROGENESIS

The role of new neurons that are rescued by learning is still largely unexplored. Based on studies using immediate early genes as a marker for cell activation, new neurons that are approximately 4-10 weeks old in mice (Kee et al., 2007; Stone et al., 2011; Gu et al., 2012) and 16-20 days old in rats (Epp et al., 2011a) appear to be involved in memory retrieval, provided that learning occurred at the critical stage in cell development (at least 4 weeks of age in mice; 11-15 days in rats). This age-dependent incorporation of cells into the memory trace may be due to the fact that prior to the critical age, cells have not yet formed the appropriate connections necessary for processes related to memory consolidation, such as LTP. Indeed, Bruel-Jungerman and colleagues (2006) found that, in rats, LTP is not induced in cells that are less than 2 weeks of age, and in mice, new neurons do not begin to receive synaptic input until approximately 2 weeks of age (Esposito et al., 2005). Interestingly, neurons that are 1 week old at the time of learning have been found to remain in the circuitry for up to 60 days after training in rats (Leuner et al., 2004). Further studies to examine the electrophysiological properties of new neurons at various stages of maturity during learning may provide more definite answers.
It is important to keep in mind when comparing studies in mice and rats that Snyder and colleagues have demonstrated that new neurons are more likely to be involved with behavior in rats than in mice. Future studies examining the contributions of adult generated neurons to hippocampal as well as brain-wide network dynamics will be crucial to determine the precise functional contributions of adult neurogenesis.

CONCLUSIONS

Hippocampus-dependent learning can modify the survival of adult generated neurons, although this relationship is a complicated one, as several important factors and critical time windows must be considered. Perhaps the most critical factor to consider when examining the effects of spatial learning on neurogenesis in the hippocampus is the age of the immature neurons at the time of learning. In rats, learning that occurs approximately one week after the birth of the new neurons shows the greatest potential to increase cell survival. Within this time window, it also appears critical for the learning to proceed with a steep learning curve. Furthermore, there exists at least one other critical time period, 11-15 days after birth, during which learning decreases cell survival. The difficulty of the task, the quality of learning, and the species, strain and sex being tested must also be taken into consideration. Stronger relationships between behavior and neurogenesis exist in rats compared to mice and, for spatial learning, in male compared to female rats. However, when learning does increase the survival of immature neurons, these neurons can be activated by spatial memory retrieval, suggesting that they are an important part of the spatial memory trace. Future experiments aimed at understanding how and why spatial learning increases cell survival should attempt to discover a unified framework of the conditions that control this relationship.
2016-05-04T20:20:58.661Z
2013-02-06T00:00:00.000
{ "year": 2013, "sha1": "e813876bad09452ad17192044d65fbafa32de56c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fnins.2013.00057", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e813876bad09452ad17192044d65fbafa32de56c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
41815880
pes2o/s2orc
v3-fos-license
Common genetic variant rs3802842 in 11q23 contributes to colorectal cancer risk in Chinese population A genome-wide association study identified the common genetic variant rs3802842 at 11q23 as associated with CRC risk, with OR = 1.1 and P = 5.80E-10, in a European population. In the Chinese population, several genetic association studies have investigated the association between the rs3802842 variant and CRC risk. However, these studies reported both positive and negative association results. It is still necessary to evaluate a specific variant in a specific population, which would be informative for revealing the disease mechanism. Until recently, there has been no systematic study evaluating the potential association between rs3802842 and CRC risk in the Chinese population by a meta-analysis method. Here, we aim to evaluate this association in the Chinese population by a meta-analysis using 12077 samples, including 5816 CRC cases and 6261 controls. We identified the T allele of rs3802842 to be significantly related to an increased CRC risk (P = 2.22E-05, OR = 1.14, 95% CI 1.07-1.21) in the Chinese population. Evidence shows that allele frequencies, specific linkage disequilibrium structure, and special genetic and environmental backgrounds may cause the risk alleles to vary in their contribution to CRC risk in different populations [14]. Meanwhile, the incidence of CRC differs among populations [15][16][17]. In the Chinese population, several genetic association studies have investigated the association between the rs3802842 variant and CRC risk. However, these studies reported both positive [18][19][20] and negative [21][22][23] association results. It is still necessary to evaluate a specific variant in a specific population, which would be informative for revealing the disease mechanism [14]. Until recently, there has been no systematic study evaluating the potential association between rs3802842 and CRC risk in the Chinese population by a meta-analysis method. Here, we aim to evaluate this association in the Chinese population by a meta-analysis method.

Study characteristics

In the PubMed database, we retrieved 36 potential studies using the key words 'rs3802842' + 'colorectal cancer' (up to June 26, 2017). We screened the 36 potential article abstracts and excluded 20 articles. We further screened the remaining 16 potential full articles and excluded 11 articles. Meanwhile, we identified one additional article using the Google Scholar database. In the end, we selected six independent case-control association studies in the Chinese population [18][19][20][21][22][23]. All six studies evaluated the potential association between rs3802842 and CRC risk in the Chinese population, with a total of 11210 samples including 4794 CRC cases and 6416 controls. None of these studies departed from Hardy-Weinberg equilibrium. The main characteristics of the six studies are described in Table 1.

Heterogeneity test

Using the C vs. A model, we identified significant heterogeneity among all six selected studies, with Chi² = 15.03, df = 5 (P = 0.01); I² = 67%. Using the CC vs. CA+AA model, we did not identify significant heterogeneity in four of these six studies, with Chi² = 1.90, df = 3 (P = 0.59); I² = 0%. Using the CC+CA vs. AA model, we identified significant heterogeneity in four of these six studies, with Chi² = 10.43, df = 3 (P = 0.02); I² = 71%. The detailed information is described in Figure 1.

Meta-analysis

In the C vs.
A model, we applied the random-effect model to perform the meta-analysis, which indicated a significant association between the rs3802842 C allele and CRC risk, with P = 3.00E-04, OR (odds ratio) = 1.21, and 95% CI (confidence interval) [1.09, 1.35]. In the CC vs. CA+AA model, we applied the fixed-effect model to perform the meta-analysis, which indicated a significant association between the rs3802842 CC genotype and CRC risk, with P = 2.22E-07, OR = 1.39, and 95% CI [1.23, 1.57]. In the CC+CA vs. AA model, we applied the random-effect model to perform the meta-analysis, which indicated a significant association between the rs3802842 CC+CA genotype and CRC risk, with P = 9.00E-03, OR = 1.37, and 95% CI [1.08, 1.74]. The detailed information is described in Figure 1.

Publication bias analysis

The possible publication bias of the meta-analysis was evaluated by both a funnel plot and a regression-based statistical approach. Based on the shapes of the funnel plots, we did not observe any asymmetric signal in any of these three models, as described in Figure 2 (Figure 2 illustrates no publication bias for the association of rs3802842 with CRC risk). The regression method also did not display any evidence of obvious publication bias, with P = 0.81 for the C vs. A model.

Sensitivity analysis

A leave-one-out sensitivity analysis showed that the pooled ORs were not significantly changed when the studies were excluded one by one, which indicated that the meta-analysis results were robust and reliable (data not shown).

Subgroup analysis

In the Han Chinese subgroup, we did not identify significant heterogeneity in these four studies, with Chi² = 4.21, df = 3 (P = 0.24); I² = 29%. We applied the fixed-effect model to perform the meta-analysis, which indicated a significant association between the rs3802842 C allele and CRC risk, with P = 9.19E-15, OR = 1.31, and 95% CI [1.22, 1.40]. In the combined Hong Kong Chinese and Taiwan Chinese subgroup, we did not identify significant heterogeneity in these two studies, with Chi² = 0.00, df = 1 (P = 0.98); I² = 0%. We applied the fixed-effect model to perform the meta-analysis, which indicated no significant association between the rs3802842 C allele and CRC risk, with P = 0.08, OR = 1.08, and 95% CI [0.99, 1.19].

DISCUSSION

Tenesa et al. identified rs3802842 to be significantly associated with CRC risk [8]. In 2014, Closa et al. analyzed 144 samples and showed that CRC risk loci identified in large-scale GWAS may regulate the expression of nearby genes, which may be candidate targets for developing new strategies for prevention or therapy [24]. Interestingly, rs3802842 in 11q23.1 could significantly regulate the expression of C11orf53, COLCA1 (C11orf92) and COLCA2 (C11orf93) [24]. In 2014, Peltekova et al. analyzed 1,030 CRC cases and 1,061 controls [25]. They also reported COLCA1 and COLCA2 to be regulated by the rs3802842 variant [25]. Using tissue microarray analysis, they further showed that rs3802842 was significantly associated with levels of COLCA1 and COLCA2 in the lamina propria [25]. All these findings indicate that rs3802842 is associated with CRC risk and regulates the expression of the COLCA1 and COLCA2 genes, which may be involved in the pathogenesis of CRC. Until recently, six independent case-control association studies have been conducted to investigate the association between rs3802842 and CRC risk in the Chinese population.
Three studies reported positive association results [18][19][20], and the other three studies reported negative association results [21][22][23]. In this study, we evaluated this association by a meta-analysis using 11210 samples, including 4794 CRC cases and 6416 controls, and identified a significant association between rs3802842 and CRC in the Chinese population. In our study, we identified significant heterogeneity among these six genetic association studies. We think this may be caused by the substantial genetic variation within the Han Chinese population [26]. Chen et al. analyzed 350,000 genetic variants in over 6000 Han Chinese samples from ten provinces of China [26]. Their results showed a one-dimensional "north-south" population structure and a correlation between geography and the genetic structure of the Han Chinese [26]. Considering the significant heterogeneity, we further performed a subgroup analysis in the Han Chinese subgroup and in the combined Hong Kong and Taiwan Chinese subgroup. The results are consistent with previous findings. The heterogeneity in the Han Chinese subgroup (I² = 29%) is higher than that in the combined Hong Kong and Taiwan Chinese subgroup (I² = 0%). The meta-analysis further showed the rs3802842 variant to be significantly associated with CRC risk in the Han Chinese subgroup, but not in the combined Hong Kong and Taiwan Chinese subgroup. In 2012, Zou et al. performed a replication study and meta-analysis [19]. In their study, they selected only 4 independent studies in Asian populations, including 3 independent studies in the Chinese population [19]. Here, we selected 6 independent studies in the Chinese population to evaluate the association between the rs3802842 variant and CRC risk with a larger sample size compared with the previous study [19]. Our results are consistent with previous findings that there is obvious between-study heterogeneity [19].

Search strategy

Two reviewers independently selected the potential studies by systematically searching the PubMed database (https://www.ncbi.nlm.nih.gov/pubmed/) using the key words 'rs3802842' + 'colorectal cancer' (n=36, up to June 26, 2017). We also manually examined additional studies from the references cited in the original literature using the Google Scholar database (https://scholar.google.com/), especially all associated publications citing the original CRC GWAS [8]. Here, we limit the following analysis to the Chinese population, including natives or inhabitants of China and persons of Chinese ancestry. If any two case-control studies overlapped with each other, we selected the one with the largest sample size for the meta-analysis. More detailed information is described in Figure 3, which is a flow diagram of the process used to select eligible studies.

Study inclusion criteria

The potential genetic association studies should (1) be of case-control design in the Chinese population; (2) evaluate the association between rs3802842 and CRC risk; (3) provide the original genotype number, or allele number, or odds ratio (OR) with 95% confidence interval (CI) for one of the three genetic models; or (4) provide sufficient data to calculate the genotype number, or allele number, or OR and 95% CI for one of these three genetic models. We excluded those studies that did not meet the inclusion criteria from the following meta-analysis.
Statistical analysis

In brief, we used Review Manager 5.1 to investigate the potential heterogeneity among the selected studies with Cochran's Q test, to calculate the pooled OR with a fixed-effect or a random-effect model depending on the heterogeneity, and to determine the significance of the pooled OR with a Z test. When a study provided the control genotype numbers, we tested Hardy-Weinberg equilibrium with a chi-square test in R [45,46]; otherwise, we extracted the Hardy-Weinberg equilibrium information from the original study. Three genetic models were selected: C vs. A, CC vs. CA+AA, and CC+CA vs. AA. More detailed information has been widely described in previous studies using meta-analysis methods [27-44, 47, 48]. We investigated potential publication bias with a funnel-plot-based approach and the regression-based statistical approach proposed by Egger. We performed a sensitivity analysis with a leave-one-out method, evaluating the influence of each study on the pooled OR by omitting one study at a time [49]. All statistical analyses were performed using Review Manager 5.1 or R, and the significance level was set at 0.05.

Subgroup analysis

Using the C vs. A model, we performed a subgroup analysis in the Han Chinese subgroup (four studies) and in the combined Hong Kong Chinese and Taiwan Chinese subgroup (two studies).
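To make the pooling step described above concrete, the following is a minimal, illustrative Python sketch of inverse-variance pooling of odds ratios under the C vs. A model, together with Cochran's Q, I² and a DerSimonian-Laird random-effects estimate. It is not the Review Manager implementation, and the allele counts are hypothetical placeholders rather than data from the included studies.

```python
import math

# Hypothetical per-study allele counts for the C vs. A model, in the order
# (case C, case A, control C, control A). These values are illustrative only
# and are not taken from the studies analysed in this meta-analysis.
studies = [
    (520, 480, 450, 550),
    (310, 390, 280, 420),
    (660, 540, 600, 600),
]

# Per-study log odds ratio and its variance (Woolf method).
log_or = [math.log((a * d) / (b * c)) for a, b, c, d in studies]
var = [1 / a + 1 / b + 1 / c + 1 / d for a, b, c, d in studies]

# Fixed-effect (inverse-variance) pooling.
w = [1 / v for v in var]
pooled_fe = sum(wi * yi for wi, yi in zip(w, log_or)) / sum(w)
se_fe = math.sqrt(1 / sum(w))

# Cochran's Q and I^2 quantify between-study heterogeneity.
q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, log_or))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# DerSimonian-Laird tau^2 and random-effects pooling, used when the
# heterogeneity statistics suggest a random-effect model is more appropriate.
tau2 = max(0.0, (q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
w_re = [1 / (v + tau2) for v in var]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, log_or)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

def report(label, pooled, se):
    # Z test of the pooled log OR, with the 95% CI back-transformed to the OR scale.
    z = pooled / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
    print(f"{label}: OR={math.exp(pooled):.2f}, 95% CI [{lo:.2f}, {hi:.2f}], P={p:.2e}")

print(f"Cochran's Q={q:.2f}, df={df}, I2={i2:.0f}%")
report("Fixed-effect ", pooled_fe, se_fe)
report("Random-effect", pooled_re, se_re)
```

The same per-study summaries can be fed into a chi-square test of Hardy-Weinberg equilibrium in the controls or into Egger's regression for publication bias; the sketch only illustrates the mechanics of the pooling and heterogeneity calculations performed here in Review Manager 5.1 and R.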
Corporate Governance Codes and Their Role in Improving Corporate Governance Practice

Good corporate governance (CG) is primarily the responsibility of every company, and both hard law and soft law should provide a comprehensive corporate governance framework, thereby encouraging the introduction of high governance standards and best practices in companies' corporate governance systems. The aim of this contribution is to broaden understanding of the role of codes of good governance in improving corporate governance practice on the case of Slovenia. The findings of research studies and analyses of the content of the Slovenian CG Code and its adoption in Slovenian companies show that the code has been playing an important role in developing corporate governance practice in Slovenia. Additionally, such analyses provide important insights into the adoption of the CG Code in Slovenian companies by revealing improvements in governance practice and indicating those areas where changes are required. That is why such monitoring and analyses should be carried out on a regular basis, together with reporting on the monitoring results. This can considerably contribute to a better understanding of the code's recommendations among companies, promote debate and thus foster awareness of the underlying issues. Future analyses should address not only the statements on compliance but also how companies actually implement the code's recommendations.

Introduction

Numerous research studies in the corporate governance (CG) field are based on a universal model outlined by principal-agent theory, whose central premise is that shareholders and managers have different objectives and different access to firm-specific information. Self-interested managers, as agents of shareholders (principals), have the opportunity to take actions that benefit themselves, and shareholders are those that bear the costs of such actions (i.e. agency costs) [1,2]. In many countries, not only managers but also controlling shareholders can expropriate minority shareholders and creditors [3,4]. Several mechanisms are proposed to resolve principal-agent problems, such as monitoring by boards of directors or large outside shareholders, equity-based managerial incentives or the market for corporate control [1,2,5]. These different types of control and monitoring in companies are referred to as corporate governance [2,6]. Many cases of corporate fraud, accounting scandals and other organizational failures leading to lawsuits, resignations or even bankruptcy have made corporate governance an especially important and often discussed topic among professionals and scholars. The main feature of many of these cases is the assumption that the system of checks and balances designed to prevent potentially self-interested managers from engaging in activities detrimental to the welfare of shareholders and stakeholders failed [2]. Several formal regulations and informal guidelines, recommendations, codes and standards of corporate governance have been established or improved in order to determine good corporate governance. These efforts to improve corporate governance practices have raised an important dilemma within the corporate governance field: whether to develop hard law (i.e. mandatory requirements, hard regulations and a regulatory approach) or soft law (i.e. voluntary recommendations, soft regulations and a market-based approach) in order to improve corporate governance across countries [7,8].
In this contribution, we explore governance codes that are a form of soft regulations (i.e. soft law) presenting a set of voluntary best governance practices without the force of law [7,9,10]. The issue of the Cadbury Report and the Code of Best Practices in the UK importantly affects the diffusion of codes around the world after 1992 [7], and similar effects on new codes' creation or revision of the existing ones can be observed after 2008 due to the global financial crisis [10]. The number of research studies on codes of good governance has considerably expanded after 1992 and especially in the early 2000s [7,10]. Because of the voluntary nature of the majority of codes, there has been a considerable debate in the literature on whether the code recommendations affect the corporate governance quality [7,8]. Research studies demonstrate that the introduction of corporate governance standards in the form of a code has positive effects on the evolution of governance practices [10] and especially on transparency and disclosure [8,11]. The aim of this contribution is to broaden our understanding on the role of corporate governance codes in improving corporate governance practice. We explore how the introduction of corporate governance code influences the corporate governance quality in the case of Slovenia. We selected the case of Slovenia due to the lack of research that would address codes' evolution and their adoption in the transition economies [12]. Slovenia is one of the transition countries that present a large sub-category of emerging economies [13]. As a new European state, it was founded in 1991, and has been in last decades under several transition processes [12,14,15]. Even though some authors [16] claim that Slovenia is no more a transition countries since it joined the European Union (EU), several indicators show that economic transition from routine to innovative economy and society has not been finished yet in this country [17,18]. In the case of Slovenia, we limit our research on the corporate governance codes, which were created at the national level as the result of joint efforts of the Ljubljana Stock Exchange (LJSE), the Managers' Association of Slovenia and the Slovenian Directors' Association. We did not explore any other codes and their adoption in the governance practice of Slovenian companies that are relatively free in selecting their governance code. The paper is divided into several sections. Following the introduction section, the literature review on corporate governance codes is conducted followed by the study of the case of Slovenia. In order to provide a comprehensive insight into the introduction of corporate governance codes in Slovenia and their impact on the quality of corporate governance practice in Slovenian companies, we explored the corporate governance framework in Slovenia and conducted comparable analysis of data on the codes adoption in Slovenian companies. Concluding section highlights the most important findings, implications for research and practice, and future research directions. Theoretical background 2.1. Institutional environments and corporate governance systems A universal model outlined by principal-agent theory dominates the corporate governance research field. Its central premise is that shareholders and managers have different interests and objectives as well as different access to specific information of a company. 
In this way, self-interested managers have the opportunity to take actions that benefit themselves, and shareholders are those that bear the costs of these actions. Such costs are referred to as agency costs [1,2]. In many countries, it has been noted that not only managers but also controlling shareholders (both also referred to as insiders) can expropriate minority shareholders and creditors (also referred to as outsiders) [3,4]. Several scholars [1,5] criticize the closed-system approach within agency theory that implies a universal and direct linkage between corporate governance practices and performance and devotes little attention to the distinct contexts in which companies function. They claim that the structure of governance systems is influenced by several external forces, such as the efficiency of local capital markets, legal tradition, reliability of accounting standards, regulatory enforcement, and societal and cultural values [2,19,20]. Research studies show that there are substantial variations in institutional environments that shape the degree and nature of agency conflicts and the effectiveness of corporate governance mechanisms [21,22]. The historical path dependence among country- and firm-level mechanisms results in a variety of country- and organization-specific governance systems that are effective within the institutional environments in which they have been developed [23]. Therefore, we believe that understanding the attempts to distinguish and describe different institutional environments and corporate governance systems enables us to more appropriately assess the role of corporate governance codes in improving corporate governance practices. When distinguishing corporate governance systems, two perspectives should be considered, based on the role of companies in society [2,24]. Taking a shareholder perspective, where a company's primary obligation is to maximize shareholder value, effective corporate governance should protect shareholders from being expropriated by the management [2,24]. The system of corporate governance in the Anglo-Saxon countries is characterized as a shareholder-based system [2,24] and the law strongly protects shareholders [20]. Firms in Anglo-Saxon countries are relatively widely held (low ownership concentration). It is estimated that in the USA and the UK, the largest five shareholders hold on average 20-25% of the outstanding shares. As a result, on the one hand, shareholders have fewer mechanisms they can use effectively to influence managerial decision-making in a direct manner [24]; on the other hand, 'interdependence among institutions may lead to substitution among functionally equivalent corporate governance mechanisms' [5, p. 980]. Examples include takeover markets in the USA and the UK, where external governance in the form of the market for corporate control, with its takeover threat, presents a disciplining mechanism for managers [5]. In most European and Asian countries, stakeholder-based systems prevail [2,24]. From a stakeholder perspective, where a company has a societal obligation that goes beyond increasing shareholder value, effective governance should 'support policies that produce stable and safe employment, provide acceptable standard of living for workers, mitigate risk for debt holders, and improve the community and environment' [2, p. 9]. In the majority of these countries, ownership concentration is significantly higher than in Anglo-Saxon countries [25].
For example, in Germany the largest five shareholders hold on average 41% of the outstanding shares [24]. Concentrated ownership on one hand may reduce agency costs stemming from the separation of ownership and control, but on the other hand may induce new conflicts that arise between majority and minority shareholders. The primary agency problem in this institutional context is the possible expropriation of minority shareholders by the controlling shareholders such as related-party transactions [5]. Therefore, in countries where a vast majority of companies has a concentrated ownership and control structure, the function of corporate governance regulation is to minimize the extent of agency problems between majority and minority shareholders and that between shareholders and creditors [24]. As noted by Larcker and Tayan [2, p. 9] 'the governance system that maximizes shareholder value might not be the same as the one that maximizes stakeholder value'. In relation to the previously discussed perspectives, scholars often made division of corporate governance systems between the Anglo-American and the Continental European system. While short-term equity finance, dispersed ownership, stronger shareholder rights, active market for capital control and flexible labour market characterize the first one, the Continental European corporate governance system is characterized by long-term debt financing, concentrated block-holder ownership, weak shareholder rights, inactive markets for capital control and rigid labour markets [19]. Combination of the Continental European capitalism characterized by large controlling shareholders and elements of entrepreneurial or founders capitalism mostly associated with the USA is characteristic of new emerging capitalism not only in the transitions economies of Central and Eastern Europe but also in other parts of the world [26]. Many transition economies (i.e. former socialist countries) are characterized by a relatively high level of ownership concentration leading to the agency problems between majority and minority shareholders. Concentration of ownership in the hands of a few or even one block-holder assures a significant control and direct influence on the nomination and control of management team, which for this reason cannot be expected to be independent [26,27]. A legal system and tradition also has important implications for corporate governance system [2]. According to institutional theory, legal rules and norms are important component of national institutional systems [5]. In terms of legal origins, common-law and non-common-law (i.e. civil-law) countries are distinguished [2], even though this corporate governance research stream has been criticized due to its simplistic theoretical and empirical grounds [5]. Noncommon-law countries (such as Germany, Scandinavia and French countries) are countries with poorer investor protection, and have smaller and narrower capital (both equity and debt) markets and less widely held companies (more ownership concentration) than common-law countries (such as UK). Countries whose legal system is based on a tradition of common law afford more rights to shareholders and more protection to creditors than countries whose legal systems are based on civil law (or code law) [28]. 
In common-law countries, there are mainly information asymmetry and agency problems between managers and (majority) shareholders; in non-common-law countries, there are mainly information asymmetry and agency problems between minority and majority shareholders. The research findings of Bauwhede and Willekens [29] showed that the level of corporate governance disclosure was significantly lower in non-common-law countries than in common-law countries, due to the greater pressure that shareholders can put on managers in comparison to the pressure minority shareholders can put on majority shareholders.

Codes as a form of soft governance regulations

Several authors [7,8] identified an important dilemma within the corporate governance field: whether to develop hard law (i.e. mandatory requirements, hard regulations and a regulatory approach) or soft law (i.e. voluntary recommendations, soft regulations and a market-based approach) in order to improve corporate governance across countries. Hard laws are legal requirements regarding governance mechanisms in a country, issued by a government in order to improve governance practice and prevent conflicts of interest [2,30]. In the opinion of several researchers, one of the most important pieces of formal legislation is the Sarbanes-Oxley Act of 2002 (SOX), issued in the USA as a reaction to several cases of failure along many legal and ethical dimensions [2,7,30]. Regarding corporate governance legislation, Larcker and Tayan [2] identify an important issue concerning how such legislation is prepared: whether it has its origins in rigorous corporate governance theory and empirical research, or whether it is more the product of political expediency. Berglöf and Pajuste [26, p. 182] also address this issue by claiming that large controlling owners 'tend to get involved in politics influencing the legislative and regulatory processes as well as the enforcement of adopted laws and regulation'. Corporate governance codes are a form of soft regulations, or so-called soft law. They comprise a set of voluntary best governance practices and do not have the force of law [7,9,10]. Governance codes are established to 'address deficiencies in the corporate governance system by recommending a comprehensive set of norms on the role and composition of the board of directors, relationships with shareholders and top management, auditing and information disclosure, and the selection, remuneration, and dismissal of directors and top managers' [31, pp. 417-418]. Cromme [30, p. 364] claims that the governance codes' key function is transparency, as 'there can be no better form of control than transparency, for open explanation of management decisions is a major plus point for company credibility'. High-quality disclosure on companies' corporate governance arrangements and increased transparency to the market provide information to investors, facilitate their investment decisions and bring 'reputational benefits for companies, and more legitimacy in the eyes of stakeholders and society as a whole' [32, Article 4]. Research studies show that the introduction of a code positively influences the evolution of companies' governance practices [10]. Even though there are several reasons behind companies' decisions to comply with the codes' recommendations, two reasons stand out: to increase companies' legitimation among investors and to improve the effectiveness of companies' governance practices.
Corporate governance codes 'do encourage companies to implement stronger corporate governance structures and release more information in a timelier manner to market participants' [8, p. 475]. According to Nowland [8,p. 477], the success of codes 'relies on market mechanisms enticing or pressuring companies to improve their governance and disclosure practices'. Corporate governance codes can be developed at the national level, the level of an individual company and at the international level [10]. Individually or jointly, governments, stock exchanges, employer associations and director associations can issue governance codes in order to address corporate governance specifics in a particular country and to improve the national corporate governance system, especially in the case when other governance mechanisms fail to do that [31]. In authors' opinion, the issuer's ability to enforce changes in governance of companies importantly affects the codes' role in improving governance practice. Governance codes that are issued by governments and stock exchanges have stronger enforceability since they present a norm of operation and therefore might have a stronger impact on improving governance practice. Codes that are developed by professional associations, directors or management associations have lower enforceability due to their voluntary nature and therefore have lower impact on the promotion of good governance practice. When a firm introduces its own code, the main objective of such a code is 'to establish, and to communicate to investors and other stakeholders, the governance principles adopted by the firm' [10, p. 223]. In this case, a code applies only to that company. Transnational institutions such as the World Bank, the Organization for Economic Cooperation and Development (OECD) and the International Corporate Governance Network (ICGN) have also created governance codes. The introduction of such codes highlights their importance for prosperity of national economies and specific geographic regions. They are usually more general than a governance code established at the national or firm level [7,10]. The issue of this type of codes started at the end of the 1990s (first such code was issued in 1996), and there were 14 transnational institutions that issued 21 corporate governance codes by the end of 2014 [10]. A majority of codes issued by transnational institutions are developed for listed companies. However, an increasing number of codes have been issued for non-listed companies, for special types of companies (e.g. state-owned and family ones) or for different types of financial institutions [10,33]. Governance codes issued by transnational organizations are important for two reasons according to Aguilera and Cuervo-Cazurra [7]. Firstly, they emphasize the importance of good corporate governance and provide best governance practices across several countries. Secondly, they can provide basis for the creation of national governance codes. There is evidence that the creation of national governance codes usually accelerates after the issue of influential transnational codes and the occurrence of corporate scandals and frauds [10]. National governance codes 'tend to be adapted to the country's economic environment and address the country's most salient governance problems' [31, p. 436]. The so-called domestic forces influencing the development of governance codes refer to demands from investors who prefer better protection of their interests. 
Codes are then introduced to improve governance practice and to close the perceived gap in the domestic national governance system and improve its efficiency. That is often in the cases when other governance mechanisms (e.g. takeover markets and legal environment) fail to protect adequately shareholders' rights [31]. In some countries, corporate collapses and scandals triggered the issues or revisions of corporate governance codes. For example, in Cyprus, the Cyprus Stock Exchanges introduced the Cypriot Corporate Governance Code in 2002 as a response to the major stock exchange collapse [34]. The role of codes in convergence of governance practices Some evidence [7,30,35] demonstrate that governance codes can be viewed as mechanisms facilitating governance convergence across countries. Such convergence is the result of several external forces among which the most powerful are globalization, market liberalization and influential foreign investors [7,30]. Namely, globalization, the internalization of markets and deregulation have led to rapid changes in traditionally grounded models of corporate governance [19]. These external forces 'lead to pressure on national governments, institutions and companies, to conform to internationally accepted best practices of corporate governance at the international level' [12, p. 54], thereby influencing the attractiveness of countries and companies for foreign investors. Countries that are more exposed to other national economic systems experience greater pressure to change governance practice not only to improve efficiency of domestic companies but also 'to harmonize the national corporate governance system with international best practices' [9, p. 4]. Several research findings on corporate governance codes revealed the governance convergence towards the Anglo-Saxon model (i.e. shareholder model) [30,34,35]. Governance codes, which are more in line with the Anglo-Saxon model, can be found not only in the established European economies [30] but also in emerging economies [34]. The explanation for this convergence may lay in the efforts of transnational organizations (e.g. the World Bank and the OECD) to promote those global standards of corporate governance that are more in line with Anglo-Saxon model [7]. The European Commission (EC) also encourages the convergence of governance practice in European countries by issuing recommendations in the area of corporate governance [7,30,32]. According to Cromme [30], the governance guidelines at the European level are highly aligned with the country codes. This can be due to the fact that certain governance issues (e.g. stakeholders rights and responsibilities) have been taken more seriously in countries of Continental Europe since 'their former weak capital markets are strengthened and institutional investors become more assertive in promoting more effective governance measures such as higher accountability and better disclosure' [7, p. 381]. Berglöf and Pajuste [26] claim that the introduction and the contents of governance codes of the Eastern European countries were the result of external pressure in terms of the EC corporate governance recommendations. The codes in these countries were largely determined by the demands that resulted from the EU accession process; many contents of the codes were also more or less copied from the UK and the USA codes. However, based on the research results on the comparison of the codes contents of the Eastern European countries, which are the EU member states, Hermes et al. 
[12] claim that domestic forces (e.g. the extent of enterprise restructuring, large-scale privatization and stock market development) in some of the analysed countries played an important role in shaping the codes' content. Several scholars [1,2,20,25,36] raised doubt about 'one size fits all' corporate governance regulations. It is highly unlikely that a single set of best practices exist for all companies since corporate governance is a very complex and dynamic system and not all mechanisms may work well in all governance contexts [2]. The corporate governance practices and regulations should reflect particularities of companies' ownership and control structures that differ across countries and industries and determine the type and severity of agency costs [36]. 'Comply or explain' approach The codes' voluntary nature is realized by the 'comply or explain' approach [7,10] that is 'an enforcement mechanism that allows companies to deviate from the code norms, but at the same time requires them to disclose these deviations' [37, p. 255]. The basic idea of this approach is that a company has to disclose the compliance with recommendations of a particular code adopted by a company, or in the case of non-compliance, a company must explain the reasons for it [8]. The 'comply or explain' approach enables a company to adapt its governance practices to its particular circumstances [36], its size and shareholding structure [32,Article 7], to consider sectoral specifics [37], thereby allowing flexibility in choosing 'which corporate governance structure to adopt to better pursue their objectives' [10, p. 223]. Departing from the codes recommendations enables companies to govern themselves more effectively by adapting their corporate governance practice to their particularities [32, Article 7]. Differences exist among countries regarding the implementation of this approach. There are two ways of implementing the 'comply or explain' approach and that are mandatory and voluntary one [10]. The mandatory disclosure on the adaptation of code's recommendation or explanation of deviations is required by listing authorities (e.g. in Australia, Canada, Estonia, Luxemburg, Malta, Malaysia, Russia, Singapore and the UK) or by law (e.g. in several EU countries). The voluntary disclosure is present in some emerging economies (e.g. Algeria, Lebanon, Tunisia and Yemen). However, such lack of disclosure may decrease the effectiveness of governance codes since investors cannot understand 'if the company does not adapt the best practices or adopts the best practices, but does not disclose their adoption' [10, p. 224]. In the recent World Bank analysis [33] of corporate governance codes, 112 codes were found. Of the 112 codes, some 27 were purely voluntary with no link to regulatory frameworks, eight were mandatory and seven countries appeared to have some level of mandatory provisions. All other codes were variations of the 'comply or explain' approach. The 'comply or explain' mandatory disclosure requirement is implemented by most stock exchanges. Companies listed on the stock exchange must explain the reasons for non-compliance with the (country) governance code in their annual report [30,31,36]. By realizing mandatory 'comply or explain' approach, the code 'helps companies exercise greater selfresponsibility in their dealings with the capital market' [30, p. 364] and 'promotes culture of accountability, encouraging companies to reflect more on corporate governance arrangements' [32, Article 7]. 
Luo and Salterio [36, p. 460] claim that the disciplining power of this approach 'is the required public disclosure of governance practices that allows market participants to evaluate the effectiveness of the firm's governance system and to make informed assessments of whether noncompliance is justified in particular circumstances'. Appropriate disclosure of non-compliance with the code recommendations and of the reasons for these is very important for ensuring that stakeholders can make informed decisions about companies and for reducing information asymmetry between companies' management and shareholders, thus decreasing the monitoring costs [32, Article 17]. Several research findings demonstrate that listed companies tend to comply with codes recommendations [25,36] which might be due 'to the market forces and pressures to comply with legitimating practices or "doing the right thing"' [31, p. 419]. Since the best governance practices are generally recognized as value enhancing, listed companies try to make clear explanation on why they do not comply with particular codes' recommendations [25]. Empirical evidences revealed some other factors that influence the rate of compliance with the codes' recommendations-see Ref. [10]. One of them is the firm size-larger companies require more sophisticated governance practices, their ownership structure is more dispersed and they are more under the control from the external environment (i.e. their greater visibility) [37]. Important factors are also the overall institutional environment, especially the legal norms and cultural values, and the development of national economy-the level of compliance with codes' recommendations is higher in developed than in developing countries that lack a tradition of sound corporate governance-see Ref. [10]. Even though analysis indicate gradual improvement in the way companies in the EU member states apply corporate governance codes, shortcomings were identified in the application of the 'comply or explain' approach. There are critiques of this approach as being ineffective due to 'the poor quality of explanations and because it provides a rather soft option, which proved in the financial crisis that it could not be trusted' [33, p. 70]. There are also interesting observations and empirical evidences regarding the explanations for deviations from codes' recommendations. In some European countries (e.g. UK, Netherlands and Germany), companies often use standard explanations for deviations, and often firms complying with the same recommendations use similar explanations for non-compliance. As the level of compliance increases over time, the quality of explanations for non-compliance remains very low showing only marginal improvements-see Ref. [10]. The diffusion of codes of good governance around the world The first code came into being in the late 1970s in the USA. That was a period of 'transition from the conglomerate merger movement of the 1960s … to the empire-building behaviour by management through hostile takeovers … and to the shareholder rights movement of the late 1980s and early 1990s' [31, p. 418]. The year 1992 presents an important landmark in the development of governance codes around the world. That year, the Cadbury Report and the Code of Best Practices were issued in the UK, and since then the number of countries issuing governance codes has been increasing [7]. The Cadbury Report was a result of several financial scandals in the UK in the 1980s and early 1990s. 
This was the first corporate governance code adopted by the London Stock Exchange. The Cadbury Report is recognized in the literature and in governance practice as one of the most influential codes, and several dimensions of that code were introduced into corporate governance systems not only in the UK but also around the world, including the USA and Germany [11]. After the issue of the Cadbury Report, the diffusion of codes was rather slow and accelerated after the issue of both the OECD Principles of Corporate Governance and the ICGN Statement on Global Corporate Governance Principles in 1999. Only nine countries had issued a code by 1997, while a further 34 countries issued their first code by 2002 [10]. Another important landmark in the diffusion of codes around the world is the recent financial crisis (beginning in 2007-2008) and the accompanying scandals that brought attention to the importance of introducing adequate governance mechanisms. The number of corporate governance codes increased especially between 2009 and 2010 [10]. Recent analysis revealed that since the financial crisis, codes have been and are being revised more often than before the crisis. For example, the website of the European Corporate Governance Institute (ECGI) reported 14 code revisions since 1 January 2015 [33]. The research findings show that the first countries to issue governance codes, that is, the USA first, followed by Hong Kong, Ireland, the UK and Canada, were countries with a common-law, or English-based, legal system [7]. This is a more flexible legal system than the civil-law system, since judicial precedent shapes the interpretation of laws and their application. In the civil-law system, the judiciary must base its decisions on strict interpretation of the laws that are issued by legislative bodies [2,28]. Three types of the civil-law system exist, namely French, Scandinavian and Germanic. The research findings of Aguilera and Cuervo-Cazurra [31] indicate that codes are more likely to be issued in countries with a common-law system. In the authors' opinion, there are two explanations for their research findings. Firstly, common-law countries, where strong shareholder rights are embedded in the legal system, are more likely to continuously emphasize good governance practice introduced by codes. Secondly, the characteristics of the common-law legal system facilitate the enforceability of the codes. Even though in common-law countries good governance practices 'tend to reach the level of enforceability in courts, in civil-law system such practices do not have enforceability through the courts unless they become codified into law' [31, p. 434]. This finding is confirmed by the research results of Zattoni and Cuomo [9], which show that countries with a civil-law system issue codes later than common-law countries and create fewer codes that often comprise ambiguous recommendations. Their research results suggest that 'the issuance of codes in civil-law countries is prompted more by legitimation reasons than by determination to dramatically improve the governance practices of national companies' [9, p. 12]. Aguilera and Cuervo-Cazurra [31] identified three exogenous pressures on the development of codes. The first is the economic integration of a country into the world economy, which positively influences the adoption of governance codes.
The second pressure that is positively related to the code's adoption is the processes of government liberalization in a particular country. The withdrawal of government presence in the economy creates a need to establish new governance system in the newly privatized companies. The third pressure refers to the presence of foreign institutional investors that positively influence the code's adoption. Institutional investors search for companies with good governance practice since they need assurance for their investment to be protected. Important research findings on codes' diffusion refer to the relationship between the development of capital markets and the number of governance codes. Countries with larger and deeper capital markets have more governance codes since 'the need for good governance increases as the number of public firms grows because agency problems between disperse owners and managers, or between majority and minority shareholders emerge' [7, p. 379]. Research findings show that developed countries issued more codes than developing countries that are more reluctant to revise their first code. Recent data show that 91 countries issued 345 codes by the end of 2014, of which 91 were first codes and 254 codes were revisions of previous codes. Developed European countries issued more than half of codes issued by all countries (174 out of 345), thereby playing a significant role in the diffusion of codes [10]. A majority of national codes are designed for listed companies. Recently, an increasing number of codes have been issued for specific types of companies (e.g. state-owned or familyowned), for different types of financial institutions (e.g. commercial banks, institutional investors and mutual funds) and for voluntary and charitable organizations [10]. The total number of codes issued in European countries increased after the publication of two important reports and that are The European Union Action Plan on 'Modernizing Company Law and Enhancing Corporate Governance in the EU' published in 2003 and the report by the High-Level Group on Financial Supervision in the EU published in 2009. The aims of both reports were encouraging the convergence of company law and corporate governance practices within the EU [10]. In the EU countries, governance codes are recognized to have a significant role in establishing principles of good corporate governance. Especially listed companies are required to include a corporate governance statement (CG Statement in their management report. In this statement, a company should disclose its corporate governance arrangement [Article 4(1) (14) of Directive 2004/39/EC of the European Parliament and of the Council of 21 April 2004]. Since the 'comply or explain' approach is the key principle in the EU governance system, a company is required to explain in its corporate governance statement the deviations from the code's recommendations and the reasons for doing so [32,Article 4,6]. A company is required to describe the alternative measure taken 'to ensure that the company action remains consistent with the objectives of the recommendation, and of the code' [32,Article 17]. In this respect, the EC emphasizes the required high quality of explanations on non-compliance reported by companies [32,Article 8,11]. The EC recommendation on the quality of corporate governance reporting predominantly addresses listed companies. However, it suggests that other companies might also benefit by following the EC recommendation [32,Article 14]. 
In Germany, a corporate governance code was considered unnecessary until 2002, when the German Corporate Governance Code (GCGC) was adopted [30]. This code contains standards of good governance that represent internationally and nationally recognized best practice. German companies are not required to follow these standards, with the exception of listed companies, which have to disclose their (non-)compliance with the GCGC recommendations [37]. The research study by Talaulicar and Werder [37] showed high degrees of compliance with the GCGC, especially among German listed companies. The authors were also able to identify some recommendations (i.e. 24 recommendations) that many companies do not comply with. However, in the authors' opinion, low rates of acceptance of these recommendations do not necessarily imply that they need to be changed. On the contrary, such a situation may reflect 'that firms take advantage of the flexibility the code grants and disregard certain code norms in order to address company-specific peculiarities' [37, p. 268]. Hermes et al. [12] conducted research on code adoption in seven Eastern European countries, or so-called transition economies (i.e. Czech Republic, Hungary, Lithuania, Poland, Romania, Slovak Republic and Slovenia), that were already EU member states at the time of the research. Romania was one of the first countries that issued a code (in 2006); some of these countries published new versions of their codes in the following years [12]. Hermes et al. [12] analysed the contents of the governance codes of the seven transition countries, which are based on the 'comply or explain' principle. They focused their research on three areas, namely disclosure rules, strengthening shareholder rights and modernizing boards. Since in many cases these codes were adopted as listing requirements of stock exchanges, this gave the codes a formal and compulsory character. The research results show that the codes of the Eastern European countries on average cover only around half of the EC recommendations. For some of the countries included in the research (especially Romania, Hungary and Poland), domestic forces related to country-specific characteristics of the corporate governance system considerably influenced the contents of the corporate governance codes. The codes of the other countries covered a majority or almost all of the EC recommendations on governance principles [12]. Several research findings show that the adoption of corporate governance codes considerably affects the level of disclosure. Sheridan et al. [11] found this in the case of the UK, where the introduction of governance standards in terms of reports concerning best practice and codes of good corporate governance was accompanied by a significant increase in the number of news announcements. Research in eight East Asian countries (i.e. Hong Kong, Indonesia, Malaysia, the Philippines, Singapore, South Korea, Taiwan and Thailand) indicates that voluntary national codes had both direct and indirect effects on companies' disclosure improvements. That is especially the case in those countries where codes have special sections dedicated to disclosure [8].

Development of corporate governance codes

Slovenia is a new European state that was founded in 1991 and is nowadays one of the EU member states. Since the early 1990s, Slovenia, like other Eastern European countries, has made considerable efforts in the transition to a market economy [12,26].
After its foundation, Slovenia has been faced with a three-way transition process [14]: (1) the transition to an independent state, (2) the reorientation from the former Yugoslav to Western-developed markets and (3) the transition to the market economy. These include several developments such as privatization of companies, trade liberalization, development of domestic financial markets and their integration to global capital markets, and development of the institutional framework in terms of regulations and law systems. All these developments have triggered the need to regulate the governance of companies in order to mitigate agency problems [12]. The transformation of companies' ownership from social into private one was realized based on the law on ownership transformation that came into force in 1992. The first Companies Act was adopted in 1993. Since then, corporate governance has been regulated by a number of acts that have been amended and supplemented as response to changes in legislation, market conditions and cases of good governance practice [15]. Corporate governance codes present an important element of corporate governance regulations in Slovenia. The first governance code was introduced for public joint stock companies. The Slovenian corporate governance code for public companies (in continuation: the Slovenian CG Code) was the result of joint efforts of the Ljubljana Stock Exchange, the Managers' Association of Slovenia and the Slovenian Directors' Association in creating recommendations on the best governance practices. The Slovenian CG Code came into force in 2004. Since then, the code has been revised several times: in 2005, 2007, 2009 and the last revised version of the code came into force in January 2017 [38][39][40][41]. Not only listed companies can apply the Slovenian CG Code's recommendations but also all those companies that would like to establish a transparent and understandable governance system [42]. First versions of the code included besides recommendations of the best governance practices also legal provisions on corporate governance. The Slovenian CG Code, which came into force in 2009, comprised only recommendations that are not legally binding (i.e. soft law). It is based on Slovenian legislation and incorporates 'the guidelines and recommendations of the European Union, principles of business ethics, internal bylaws of the three institutions (authors comment: the Ljubljana Stock Exchange, the Managers' Association of Slovenia, and the Slovenian Directors' Association) and the internationally recommended standards of diligent and sound corporate governance' [42, p. 2]. Companies, which are listed on the Slovenian-regulated market, must disclose to which code they adhere, any deviations from the code and reasons for them in their corporate governance statement, thus realizing the 'comply or explain' principle [43]. Shareholders have a right to demand additional explanations from a management board regarding the content of the statement at the shareholders meeting [40,46]. According to the LJSE Rules [44] and LJSE Guidelines [45], prime and standard market companies are requested to disclose (non)compliance with the code in the Statement on Compliance with the code that is the part of the CG Statement. The CG Statement was introduced by the Slovenian Companies Act [46] in 2009 and must be published as a part of annual reports. It is recommended that listed companies published it separately on their website [45]. 
This is in line with Article 20(1) of Directive 2013/34/EU that requires listed companies to provide information of their corporate governance arrangements as well as how they applied the relevant corporate governance code recommendations. It is believed that such requests would improve transparency for shareholders, investors and other stakeholders [32]. From 2015, the CG Statement is obligatory not only for listed companies but also for those companies that are bound to auditing. Companies in Slovenia are relatively free in choosing a governance code to which they adhere. However, it is expected that companies listed on the prime and standard market of the Ljubljana Stock Exchange will largely follow the Slovenian CG Code [42,47]. Companies can also create their own code, which might be a reasonable approach in the cases of adopting more codes. The selection of a code can also be influenced by the expectations or preferences of the company's shareholders [40]. However, other codes are not the subject of this contribution. Since several research studies discussed in the next section explored the adoption of the CG Code from 2009 as well as this code has been adopted in the practice of Slovenian companies longer than any other Slovenian code did, we explain in more detail the content of this code. The recommendations of the CG Code from 2009 [42] cover several broad areas of corporate governance and that are corporate governance framework, relations with shareholders, supervisory board, management board, independence and loyalty of members of supervisory board and management board, audit and system of internal controls, and transparency of operation. Recommendations in the area of the corporate governance framework: • the management board together with supervisory board creates and adopts a Corporate Governance Policy (CG Policy); • with the CG Policy they lay down major corporate governance outlines that should be compliant with the long-term goals of a company [42]. Recommendations in the area of the relations with shareholders: • a company should ensure such a corporate governance system that treats equally all shareholders as well as encourage a responsible enforcement of shareholder rights; • shareholders should be informed about the convening and progress of general meetings in a timely and accurate manner; • a company should provide shareholders with reliable data that enables them to make informed assessments of the items on the general meeting's agenda [42]. Recommendations in the area of the supervisory board: • the composition of the supervisory board should ensure responsible supervision and decision-making that are in the best interest of a company (i.e. re-members of the board should have professional expertise, experiences and skills); • the selection procedure of the board members should be transparent and well defined in advanced; • the board monitors a company, evaluates the work of the management board and actively cooperates in creating CG Policy; • members of the supervisory board sign a statement in which they disclose whether they meet the criteria of independence and the possession of relevant professional training and know-how required to act as the supervisory board member; • the president of the board is elected by simple majority; • members of the board should be adequately paid for their work; • the supervisory board sets us special committees dealing with special issues; • the supervisory board assesses its work and work of its committees once a year [42]. 
Recommendations in the area of the management board: • a company is managed by the management board that should ensure long-term performance by defining values and strategies; • the composition of the management board should ensure decision-making in the best interest of a company and functioning in compliance with high ethical standards considering the interests of diverse groups of stakeholders; • a remuneration system should enable composing of the managements board that best suits the needs of a company and ensures the alignment of the board's and the company's longterm interests [42]. Recommendations in the area of the independence and loyalty of members of supervisory board and management board: • members of both boards make independent decisions taking into consideration the goals of a company [42]. Recommendations in the area of the audit and system of internal controls: • an auditor is appointed in order to ensure an independent and impartial audit of the company's financial statements; • an efficient system of internal controls is set up that also ensures a quality-risk management; together with its auditing committee, a company ensures periodical and impartial professional surveillance of the system of internal controls [42]. Recommendations in the area of the transparency of operation: • a corporate communication strategy should be defined as a part of the CG Policy dictating high-quality standards in preparing and publishing accounting, financial and non-financial information; • informing both shareholders and public should be set up in a manner providing equal, timely and economical access to information related to all aspects of a company; • the company's governance practice is presented in the CG Statement taking into consideration the Companies Act; • the Statement is a part of the annual report published as an independent document on the company's website [42]. At the beginning of 2017, a new version of the CG Code was issued where the purpose remains the same. That is to provide corporate governance recommendations for joint stock companies that are listed on the Ljubljana Stock Exchange. Other companies may also follow the CG Code's recommendations, thereby establishing transparent governance system in order to increase companies' legitimation among different groups of stakeholders (i.e. domestic and international investors, employees, banks, public, etc.). There are three main reasons for renewal of the previous version of the CG Code [48]: • The regulatory framework has changed in the last 7 years. Several changes in legislation, especially in the area of corporate governance, reporting and public disclosure on governance system of a company came into force. • Several changes in international and domestic recommended governance practice also importantly influenced a decision to renew the CG Code from 2009. In 2015, the OECD adopted new principles of corporate governance. Consequently, several countries have issued new codes (e.g. Austria, Finland, Germany, Denmark, Sweden, UK, Romania and Baltic countries). At the same time, advanced recommended governance practice has been developed in Slovenia (e.g. corporate governance codes for non-public companies in 2016, recommendation for auditing committee in 2016, practical advices for quality explanations in Statement on Compliance in 2015, etc.). 
The EC recommendations on the quality reporting on governance issued in 2014, which propose the EU members to monitor the codes' compliances, also importantly influence the development of the new CG Code in Slovenia. • The results of the latest analysis of disclosures of compliances with the Slovenian CG Code from 2009 for the 2011-2014 periods (which are in more detail discussed in the next section) were also one of the reasons for introducing changes in the corporate governance recommendations of the CG Code. The analysis revealed those recommendations that the majority of companies complied with and which recommendations were among those that companies reported on non-compliances. The analysis and the issuers of the CG Code tried to improve those recommendations that were recognized as being described not clear enough, and therefore their introduction in the company's governance practice caused unnecessary problems. Therefore, several changes were introduced in the new CG Code. We present the major changes by organizing them according to the major areas of the Slovenian CG Code from 2017 [48,49]: • Corporate governance framework: additional recommendation regarding diversity of the boards' membership and representation of both sex (i.e. women and men) in the boards and committees. The recommendations on the CG Statement now include additional explanation on how to prepare this statement. Previous analyses (LJE Analysis 2012 and 2015 that are discussed in the next section) revealed that several companies still did not understand the 'comply or explain' principle. The new CG Code also introduces the recommendation on external monitoring of the CG Statements, thus following the EC recommendation on the quality of corporate governance from 2014. • Relations with shareholders: recommendations on equal treatment of shareholders were supplemented. Previous analyses indicate that recommendations in this respect were not comprehensive and clearly stated in the previous version of the code. • Supervisory board: recommendations on self-evaluation of the supervisory board were updated. Recommendations on the audit committee were updated as well taking into consideration the new provisions of EC directives and the Slovenian Companies Act. Recommendations on additional training of the supervisory board members were added. These changes should positively influence the work of the supervisory board. • Management board: recommendations on planning a succession in the management board were updated. Additionally, the tasks of the management board regarding the management system are recommended; the system should be transparent in terms of jurisdiction, connected with the risk management system and should encourage ethical and responsible behaviour of key stakeholders in a company. • Independence and loyalty of members of supervisory board and management board: the definition of the independence is updated in order to make a clearer distinction between independence and conflict of interests. Criteria for conflict of interests are updated and more clearly presented. The recommendation on independence of the supervisory board members in this new CG Code extends to all members of the board (in previous version, this recommendation refers to only half of the board's members). • Transparency of operation: several changes were made in this important area of corporate governance by taking into consideration the changes in legislation and the rules of the Ljubljana Stock Exchange. 
These updated recommendations also enable better comparisons among companies and transparency for all stakeholders. The role of codes in improving corporate governance practice in Slovenia Even though research studies on the adoption of governance codes in the corporate governance practice in Slovenia are scarce, the existing research results provide an important insight into the development of governance practice in Slovenia and the role of the corporate governance codes in this process. In this section, we analyse the findings of the previous research studies on governance codes in Slovenia. The structured content analysis was done chronologically, starting with the research that explored the codes' content (Table 1).

Table 1. Scope (subject) of analysis and major findings of the research on corporate governance codes in Slovenia.
• Hermes et al. [12]. Main findings: codes' content of some countries differ from the best governance practices; domestic forces related to specifics of national corporate governance systems shape the codes' content. Findings with respect to the Slovenian CG Code: recommendations on disclosure rules (five out of seven), recommendations on strengthening shareholder rights (two out of three), recommendations on modernizing the board of directors (four out of six).
• Ljubljana Stock Exchange (with the support of the Slovenian Director's Association). Half of the most frequent deviations were deviations from the principles on the transparency.
• The SEECGAN Index [38,39]. The adoption of codes in the listed companies in Slovenia was explored by the following questions as a part of the SEECGAN Index methodology: Has the company developed and publicly disclosed its own Corporate Governance (CG) Code? Has the company adopted some official CG Code (CG Code of the Chamber of Commerce, CG Code of the Stock Exchange or similar)? Does the company disclose the extent to which it is complying with its Corporate Governance Code (does it explain possible deviations from the Code)? Sample: all prime and standard market companies that were listed on the Ljubljana Stock Exchange in June 2014; in total 22 companies. Codes covered: any corporate governance code. Findings: more than three-quarters of prime and standard market companies disclose a code; companies refer to one of the official codes; 88.9% of prime and all standard market companies disclose the compliance with the code and explain the deviations from it.

The Slovenian CG Code that came into force in 2005 (i.e. the revised version of the first code) was included in the comparative analysis of the codes' contents of seven Eastern European countries (i.e. Czech Republic, Hungary, Lithuania, Poland, Romania, Slovak Republic and Slovenia) that was conducted by Hermes et al. [12]. The research covered three areas of recommendations: disclosure rules, strengthening shareholder rights and modernizing boards. The main findings of the research were explained in one of the previous sections. In this section, we focus on the research findings for Slovenia. The comparison of the Slovenian CG Code with respect to the EU recommendations on enhancing corporate governance disclosure showed that the Slovenian CG Code included five out of nine analysed recommendations. The recommendations that were not included in the code were those not addressed in the codes of the majority of the other six analysed countries as well. Research findings showed that 'openness from shareholders in general and from institutional investors in particular, with respect to their holdings and policies as major owners of companies' [12, p. 65] was not recommended by the analysed codes. The results can be explained by the corporate governance systems in these countries, where important features are controlling shareholders and block-holdings. According to Berglöf and Pajuste [26], companies with controlling shareholders are less prone to disclose information. The comparison of the Slovenian CG Code with respect to the EU recommendations on strengthening shareholders' rights shows that the Slovenian CG Code included two out of three analysed recommendations. Those were the recommendation dealing with providing shareholders information for evaluation of a company's performance and operations (not included only in the Romanian code), and the recommendation on shareholder democracy (i.e. the one share-one vote democracy). The last recommendation was found only in Slovenia and Lithuania [12]. Regarding the EU recommendations on modernizing the boards of directors, the Slovenian CG Code included four out of six analysed recommendations. Recommendations not included in the code were the recommendation on disclosure of the remuneration policy (included in three analysed codes) and the recommendation on prior approval by the shareholder meeting of share and share option schemes in which directors participate (included in four analysed codes). The Slovenian CG Code was the only code of the analysed ones that included the recommendation on the recognition in the annual accounts of the costs of share and share option schemes of directors [12]. These findings indicate that the Slovenian CG Code from 2005 already included recommendations not only on disclosure of fixed and variable remuneration of individual directors but also on disclosure of more sensitive information on how option and share schemes are constructed and how much that costs a company. These results indicate that the Slovenian CG Code introduced at that time many of the best governance practices, at least in listed joint stock companies in Slovenia. In 2012, the Ljubljana Stock Exchange initiated the analysis of the companies' disclosures on the compliance with the Slovenian CG Code from 2009. The analysis was conducted with the support of the Slovenian Director's Association. Ten companies listed on the prime market of the Ljubljana Stock Exchange were included in the analysis since these companies were expected to adhere to high governance standards. Companies were obliged to disclose any deviation from the code's recommendations together with the explanations in their annual reports. The main goal of the analysis was to explore the level of credibility and quality of explanations of deviations in accordance with the 'comply or explain' principle in order to identify those factors that could improve the quality of information on the corporate governance system for investors [40]. The analysis and its results directly addressed the agency problem since active monitoring of the governance mechanisms could present additional pressure on the companies' management to consider the interests and goals of a company and not their personal interests and goals. The disclosures on compliance for 2010 and 2011, published in the annual reports in 2011 and 2012, were the subject of the analysis. 
The major findings of the analysis were that both the level of compliance with the best corporate governance practices and the quality of explanations of deviations from the CG Code's recommendations have increased in the observed period [40]. The results of the analysis indicated that two companies in 2011 and one company in 2010 complied with all of the code's recommendations. The rest of the analysed companies disclosed on average a compliance with 81% of the code's recommendations. More than half of the total of 112 code recommendations were identified as those that all analysed companies complied with. The comparisons for the observed 2-year period show that the level of compliance with the CG Code from 2009 has been improving [40]. Such results suggested that companies have been trying to consider the CG Code's recommendations as much as possible, thereby also raising the quality of their governance practice. The analysed listed companies disclosed deviations especially from the following six recommendations of the Slovenian CG Code from 2009 [40]: • definition of goals in the company's statute, • use of information technology to inform and conduct sessions of a supervisory board in a safe way, • self-evaluation of the supervisory board composition, functioning, conflict of interests, cooperation with the management board and its committees once per year, • the principle regarding payments of the supervisory board members that are determined at the shareholder meeting, • appointment of an audit committee and a remuneration committee and a nomination committee as soon as possible after the constitutive meeting of a supervisory board, • disclosure of payments of the members of a management board and a supervisory board. Effectiveness of the CG Code in practice depends also on the transparency of deviations, which should be reliable and complete. The detailed analysis of explanations of deviations shows that only 27% in 2010 and 44% in 2011 could be classified as specific, high-quality explanations of deviations describing, besides the deviations, also the alternative governance practice and the reasons for its adoption by a company [40]. Even though the quality of explanations of deviations has increased in the observed period, the comparisons of the reported disclosures and the actual governance practices raised an important question of the quality, completeness and credibility of these disclosures. The research results revealed that companies did not disclose all deviations mainly for two reasons. Firstly, companies did not interpret a particular recommendation correctly, and secondly, companies did not find a particular recommendation as relevant [40]. The analysis of disclosures was repeated in 2015 for the 2011-2014 period [50]. Some of the analysed companies • neither disclosed the code in their annual report nor published the CG Statement, • referred to legislative and other regulations, internal rules and/or their Corporate Governance Policy, • developed their own governance practice without adopting any official code, • stated that the companies' shares were not traded at the market, or • referred to an invalid CG Code (e.g. from 2007), and so on. A smaller share of the code's recommendations (…6%) was found to be complied with by all analysed companies in the observed period [50]. The percentage is lower than in the 2012 analysis, but we should take into consideration that in the 2012 analysis only prime market companies were explored, and these are expected to adhere to the majority of the code's recommendations. As stated in the report of the analysis [50], the number of companies complying with all of the code's recommendations is still low. 
However, this does not necessarily indicate lower quality of companies' governance practice. The main purpose of the analysis of compliance with the code's recommendations is not to ensure compliance with all recommendations if that is not an optimal solution for a company. If deviations are explained and alternative solutions are presented, such non-compliance indicates that a company has developed an alternative practice that best suits its specifics. Therefore, an in-depth analysis of explanations was conducted. The findings demonstrate that the share of specific, high-quality explanations of deviations that described both deviations and alternative solutions has increased. The share of such high-quality explanations was 23.5% in 2011 and 27.8% in 2014 [50]. Even though the results suggest that companies have become aware of the importance of good disclosure practices, the quality of disclosures on deviations still needs to be improved. Contrary to the 2012 analysis, this analysis did not include an investigation of whether companies really disclosed all deviations. The results of the analysis also provide a comprehensive insight into those recommendations that companies did not comply with. The list of the most frequently disclosed deviations is organized according to the main areas of the CG Code, accompanied with the data on the share of companies that disclosed deviations from a particular recommendation [50]. A closer look at the results shows that the share of companies disclosing deviations from particular recommendations has increased in the observed period. Half of the statistically most frequent deviations were those from the recommendations on the transparency. Another research study on corporate governance codes in Slovenia was conducted as a part of the research on measuring corporate governance quality by applying the SEECGAN Index of Corporate Governance [38,39]. All companies of the prime and standard market, in total 22 companies, that were listed on the Ljubljana Stock Exchange in June 2014 were explored. The main source of data was annual reports for the year 2013. Additionally, reports and documents published on the companies' websites were analysed. Research results revealed that more than three-quarters of the prime and the standard market companies disclosed a code. The majority of companies referred to one of the official codes. All standard market companies and 88.9% of the prime market companies disclosed the compliance with the corporate governance code and explained the deviations from it. Even though the disclosure of compliance with the chosen code is obligatory for the prime and the standard market companies in Slovenia [45,46], one of the prime market companies did not disclose compliance with the code [39]. Conclusions Good corporate governance is primarily the responsibility of every company, and regulations at the national level, taking into consideration the specifics of the national economy and the latest developments of governance practices and regulations at the European or even global level, should ensure that certain governance standards are respected. Therefore, it is important that both hard law and soft law (i.e. especially corporate governance codes) provide a comprehensive corporate governance framework, thereby encouraging the introduction of high governance standards and best practices in the companies' corporate governance system. This is of key importance for the performance, growth and long-term sustainability of companies. 
The findings of research studies and analyses discussed in this contribution show that the Slovenian CG Code has been playing an important role in developing corporate governance practice in Slovenia. Especially this is true for Slovenian-listed companies supporting cognitions by Aguilera and Cuervo-Cazurra [31] about the issuer's ability to enforce changes in the companies' governance system. Codes that are developed by the stock markets have strongest enforceability, since they are designed as the norm of operation, and thus having a greater impact on the promotion of good governance. The CG Code itself as well as mandatory disclosure of compliance with the code's recommendations serves as a guideline to different groups of stakeholders by clearly describing the particularities of the Slovene business world. Disclosures in the CG Statement based on 'comply or explain' approach should be specific and of high quality so that shareholders, investors and other stakeholders get a transparent and a reliable picture of the company's governance system. The LJSE analyses from 2012 and 2015 of disclosures of compliances with the Slovenian CG Code [40,50] show that the number of specific, high-quality explanations of deviations describing besides deviations also alternative solutions has increased. Even though these results indicate that companies have become aware of the importance of good disclosure practices, their share is still relatively low and therefore improvements are needed in this respect. Such situation is not specific for Slovenia, but has been noticed in other European countries where companies often use standard explanations of deviations, see [10]. The 'comply or explain' approach is effective only when high-quality explanations of deviations are disclosed. That is a way we find of crucial importance to raise awareness of the companies' key stakeholders on the main features of the high-quality explanations. According to the EC Recommendations, the high-quality explanations of deviations mean [32, Article 18] • avoiding the use of the standardized language, • focusing on the specific company context explaining the departure from a recommendation and • the explanations should be structured and presented in such a way that they can be easily understood and used. EC recommends companies [32; Section III, paragraph 8, 33; Section III, paragraph 8] to clearly state which specific recommendations they do not comply with and for each deviation, they should • explain in what manner the company has departed from a recommendation; • describe the reasons for the departure; • describe how the decision to depart from the recommendation was taken within the company; • where the departure is limited in time, explain when the company envisages complying with a particular recommendation; • where applicable, describe the measure taken instead of compliance and explain how that measure achieves the underlying objective of the specific recommendation or of the code as a whole, or clarify how it contributes to good corporate governance of the company. The research findings show that the level of compliance with the codes' recommendations has been increasing in Slovenia. However, as stated in both reports [40,50], we cannot make a firm conclusion on the actual level of compliance with the CG Code's recommendations. Companies may not disclose all deviations or may find them as unimportant in their attempt to disclose compliance with as many recommendations as possible. 
That is why companies should be made aware of the main purpose of the corporate governance code and the accompanying 'comply or explain' approach, since 'departing from a provision in the code could in some cases allow a company to govern itself more effectively' [32, p. 44]. A non-compliance with the 'best practice' which is accompanied by an explanation of how the alternative approach achieves the goal of the non-adopted recommendation can present a significant benefit when creating the governance system that best suits the company's specific circumstances, see [36]. Companies should be aware of the flexibility enabled by the 'comply or explain' approach, and develop a governance system that in the best possible way addresses the company's specifics. The practice of not disclosing all deviations could be a very dangerous one since it can raise doubt about the implementation of the rest of the recommendations for which a company discloses compliance, see [40]. Both analyses on disclosures of compliances with the CG Code [40,50] provide important cognitions on the adoption of the CG Code in Slovenian companies. Findings of such analyses reveal improvements in the governance practice and indicate those areas where changes are required. That is why such monitoring and analysis should be done on a regular basis. Since we can observe highly concentrated ownership in Slovenia [50] and companies with controlling shareholders are found to be less prone to disclose information [26], a regular monitoring of disclosures is of great importance. The EC recommends that public or specialized bodies should regularly monitor corporate governance statements published by companies in order to make the 'comply or explain' approach effective [32, Article 19]. Shareholders should also perform effective monitoring in order to encourage good-quality explanations [32, Article 20]. Shareholders should play an active role in improving the quality of explanations in Slovenia as well. A dialogue between shareholders, a management board and a supervisory board is of great importance in the process of creating a suitable governance system. External institutions, as professionals in monitoring the quality of disclosures [40], cannot replace this dialogue. However, such professionals can play an important role in the process of monitoring due to the knowledge and expertise they possess. Reporting on the monitoring results can considerably contribute to a better understanding of the code's recommendations among companies, promote debate and thus foster awareness of the underlying issues, see [26]. Regular monitoring of the codes' adoption can provide legislators, policy makers and stock exchanges with an important insight into the effectiveness and efficiency of the codes, thus providing a basis for developing and updating the recommendations 'in order to address the potential failures of corporate governance mechanisms' [10, p. 222]. Such monitoring can be an opportunity for regulators to 'make the rules less ambiguous' [26, p. 196], as is the case with the last revision of the Slovenian CG Code, which considered the findings of the analysis of disclosures of compliances with the Slovenian CG Code from 2009 for the 2011-2014 periods. The research studies and analyses, not only in Slovenia but also in other surroundings, deal mainly with the disclosures of compliance with the codes' recommendations. 
However, we believe that future research should address not only the statements on (non)-compliances but also how companies implement and practice the code's recommendations. Effective governance is demonstrated by the implementation of the regulations and recommendations in practice, and 'whether a code really contributes to improving practices depends on the extent to which companies actually comply with the recommendations in the code and to what extent compliances leads to changes in corporate behaviour' [12, p. 63]. We believe that a more in-depth analysis of the declared and implemented governance arrangements and their consequences is needed. An important contribution in this direction can be the SEECGAN Index, which enables measuring how the codes' recommendations and national regulations contribute to the quality of corporate governance practice. The SEECGAN Index enables comparisons of governance practices among South Eastern European countries, thereby creating a platform for debate about the best governance practices considering the specifics of national economies in that part of Europe. Future research should also address the relationship between the code's compliance and the company's performance, in general as well as in Slovenia. None of the research studies and analyses conducted in Slovenia has addressed this question yet. We find this issue to be of great importance, especially since mixed results about the impact of the level of compliance with the code's recommendations on companies' performance can be found in the literature, see [10]. Some research studies even showed that financial performance could justify non-compliance [7]. Diverse and mixed results can be explained by the cognitions of several authors that corporate governance is a complex construct influenced by many factors, see [37]. Both the research and the practice regulated by different forms of hard and soft laws should adequately address the complexity of the corporate governance construct. We hope that the findings presented in this paper contribute to a better understanding of how the codes of good governance as a form of soft law address this complexity and where improvements are required.
The Complete Mitogenomes of Three Grasshopper Species with Special Notes on the Phylogenetic Positions of Some Related Genera Simple Summary The complete mitogenomes of three grasshopper species were sequenced and annotated. The phylogenetic positions of the genera Emeiacris and Choroedocus are clarified based on both complete mitogenome and morphological evidences. The results show that Emeiacris consistently has the closest relationship with the genus Paratonkinacris of the subfamily Melanoplinae, and Choroedocus has the closest relationship with the genus Shirakiacris of the subfamily Eyprepocnemidinae, respectively. In addition, the genera Conophymacris and Xiangelilacris, as well as Ranacris and Menglacris, are two pairs of the closest relatives, but their phylogenetic positions need further study to clarify. Abstract Clarifying phylogenetic position and reconstructing robust phylogeny of groups using various evidences are an eternal theme for taxonomy and systematics. In this study, the complete mitogenomes of Longzhouacris mirabilis, Ranacris albicornis, and Conophyma zhaosuensis were sequenced using next-generation sequencing (NGS), and the characteristics of the mitogenomes are presented briefly. The mitogenomes of the three species are all circular molecules with total lengths of 16,164 bp, 15,720 bp, and 16,190 bp, respectively. The gene structures and orders, as well as the characteristics of the mitogenomes, are similar to those of other published mitogenomes in Caelifera. The phylogeny of the main subfamilies of Acrididae with prosternal process was reconstructed using a selected dataset of mitogenome sequences under maximum likelihood (ML) and Bayesian inference (BI) frameworks. The results showed that the genus Emeiacris consistently fell into the subfamily Melanoplinae rather than Oxyinae, and the genus Choroedocus had the closest relationship with Shirackiacris of the subfamily Eyprepocnemidinae in both phylogenetic trees deduced from mitogenome protein coding genes (PCGs). This finding is entirely consistent with the morphological characters, which indicate that Emeiacris belongs to Melanoplinae and Choroedocus belongs to Eyprepocnemidinae. In addition, the genera Conophymacris and Xiangelilacris, as well as Ranacris and Menglacris, are two pairs of the closest relatives, but their phylogenetic positions need further study to clarify. The genus Longzhouacris was first erected based on the type species Longzhouacris rufipennis You and Bi, 1983 [31],and currently includes 11 species distributed across tropical to subtropical areas [32]. Originally, Longzhouacris was placed under the family Catantopidae and not assigned to a definite subfamily. Later, it was considered to be a member of the subfamily Habrocneminae [32][33][34]. The subfamily Habrocneminae contains four genera in Li and Xia's monograph [33]: Habrocnemis, Longzhouacris, Menglacris, and Promesosternus, but the genus Promesosternus is assigned now in the subfamily Oedipodinae in the Orthoptera Species File (OSF) [32]. The genus Ranacris was established with Ranacris albicornis You and Lin, 1983, as the type species [35] and consists of three known species so far [32,34,36]. Like the genus Longzhouacris, Ranacris was also placed originally under the family Catantopidae without a definite subfamily assignment [35]. However, later, the subfamily Ranacridinae was proposed to contain the single genus Ranacris [37]. This was followed by some later scholars [33,34,38]. 
Storozhenko [39] then synonymized Ranacridinae with Mesambriini, and the genus was transferred to the tribe Mesambriini of Catantopinae. Although the genera Ranacris and Menglacris are placed in two different subfamilies at present, they exhibit high superficial similarity except for the difference in the presence or absence of the tegmen, i.e., the genus Ranacris is apterous, but the genus Menglacris is micropterous with narrow scaly tegmina. In addition, the genus Habrocnemis is also very similar to Ranacris and Menglacris. The genus Conophyma is the largest one of the subfamily Conophyminae, with 102 known species distributed mainly in the mountainous and plateau areas from Central Asia to the Himalayas [32]. Most Conophyma species inhabit areas above an altitude of 2000 m. Conophyminae had been placed under the family Acrididae before Otte [40], but Eades [41] transferred it to the family Dericorythidae based on the comparison of the male genitalia structure. Dericorythidae lacks the arch sclerite characteristic of Acrididae. Instead, it has the distinctive pseudoarch that is easily mistaken for the true arch if dissection is not completed by opening up the spermatophore sac. However, Eades' [41] study sampled only one species of the subfamily Conophyminae, Plotnikovia lanigera, and did not examine any materials of the genus Conophyma, the largest group of Conophyminae. Although the three genera mentioned above have a debatable phylogenetic position and a complicated relationship with other genera, there are no complete mitogenome data available for phylogeny inference. Similar problems are also explicit in the genera Emeiacris, Choroedocus, Conophymacris, and Xiangelilacris. Emeiacris is recognized as a member of the subfamily Oxyinae in OSF [32], but was placed in the subfamily Melanoplinae by Li and Xia [33], and Mao et al. [34], and this has always been supported by molecular evidence [24,25,27]. Choroedocus is currently placed in the subfamily Catantopinae in OSF [32], but was obviously regarded as a member of the subfamily Eyprepocnemidinae in nearly all published literature. Conophymacris is presently in the subfamily Conophyminae of the family Dericorythidae in OSF [32], but belongs to the subfamily Podisminae in some monographs [33,34] and always has the closest relationship with Xiangelilacris in previous molecular studies [24,25,27]. In this study, the complete mitogenomes of Longzhouacris mirabilis, Ranacris albicornis, and Conophyma zhaosuensis were sequenced and annotated. Additionally, the phylogeny of the grasshoppers in Acrididae with prosternal process was reconstructed using a selected dataset of mitogenome sequences of 90 species, including the three newly sequenced mitogenomes and those of 87 related species downloaded from GenBank (https://www.ncbi.nlm.nih.gov/ (accessed on 12 March 2022)). The phylogenetic positions of some related genera are discussed in combination with morphological characters. Taxon Sampling Three species, i.e., Longzhouacris mirabilis, Ranacris albicornis, and Conophyma zhaosuensis, were selected as representatives of the genera Longzhouacris, Ranacris, and Conophyma, respectively. The materials used for generating the mitogenomes were identified following Li and Xia's [33] and Mao et al.'s [34] monographs. They were preserved in 100% anhydrous ethanol and stored in a refrigerator (Thermo Fisher Scientific, Waltham, MA, USA) at −80 °C in the Insect Collection of the Central South University of Forestry and Technology. 
All materials were collected under appropriate collection permits and approved ethics guidelines. The morphological terminology followed that of Uvarov [42] and Storozhenko et al. [43]. The terminology of male genitalia followed that of Woller and Song [44]. All photographs were taken using a Nikon D600 digital camera (Nikon Corp., Minato-ku, Tokyo, Japan) or a Leica DFC 5500 system (Leica Microsystems Inc., Wetzlar, Germany), the stacking images were combined using Helicon Focus (ver. 6.0, Helicon Soft, Kharkiv, Ukraine), and the plates were edited in Photoshop CS (Adobe Inc., San Jose, CA, USA). To clarify the phylogenetic positions of as many taxa as possible, we included in this analysis nearly all presently available mitogenome data of species with a prosternal process in Acrididae and Dericorythidae (Table 1), representing 2 families, 12 subfamilies, 50 genera and 88 species/subspecies in total. Coryphistes ruricola (MG993389, MG993390, MG993403, MG993406) in Catantopinae and Kosciuscola tristis (MG993402, MG993408, MG993414) in Oxyinae were not included in this analysis because they have only partial mitogenome sequences available. Gesonula punctifrons (MN046214), with a complete mitogenome, was also excluded from this analysis due to the inaccuracy of the sequence, possibly derived from the inadequate sequencing data (only 2 Gb of data was generated through next-generation sequencing (NGS)) [24]. Sequencing, Assembly and Annotation A hind femur of each sample was sent to Berry Genomics (Beijing, China) for genomic sequencing using NGS, and the remainder of the specimen was deposited as a voucher specimen at the Central South University of Forestry and Technology. Whole genomic DNA was extracted from muscle tissue of the hind femur using a modified routine phenol and chloroform method. Separate 400 bp insert libraries were created from the whole genome DNA and sequenced using the Illumina HiSeq X Ten sequencing platform. A total of 20 Gb of 150 bp paired-end (PE) reads was generated for each sample. Raw reads were filtered to remove reads containing adaptor contamination (>15 bp matching to the adaptor sequence), poly-Ns (>5 bp Ns), or >1% error rate (>10 bp bases with quality score < 20). The mitogenome sequence was assembled from clean reads in MITObim (ver. 1.9.1, see https://github.com/chrishah/MITObim (accessed on 8 May 2022)) [67]. Two runs were implemented independently using the same reference with different starting points (one point is trnI and the other is COX1) to improve the sequence quality of the control region. The assembled raw mitogenome sequences were initially annotated online using the MITOS Web Server (http://mitos.bioinf.uni-leipzig.de/index.py; accessed on 20 May 2022) [68] and then checked and corrected in Geneious (ver. 8.04, see https://www.geneious.com (accessed on 5 June 2022)) [69]. The secondary structures of the RNA encoding genes predicted in MITOS were visualised and checked manually using VARNA (ver. 3.93, see http://varna.lri.fr (accessed on 13 June 2022)) [70]. The three newly sequenced mitogenomes have been deposited in GenBank under accession numbers ON943039 for Ranacris albicornis, ON931612 for Longzhouacris mirabilis, and ON943040 for Conophyma zhaosuensis, respectively (Table 1). Base composition, A−T- and G−C-skews, and codon usage were calculated in MEGA X [71]. The formulas used to calculate the skews of the composition were (A − T)/(A + T) for the A−T-skew and (G − C)/(G + C) for the G−C-skew. 
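The base composition and skew statistics described above are straightforward to recompute outside MEGA X. The following is a minimal sketch (not the authors' actual script; the FASTA file name is a placeholder) that reads an assembled mitogenome and reports its A + T content, A−T-skew and G−C-skew using the formulas given above.

```python
# Minimal sketch (not the published pipeline): base composition and strand skews,
# computed as (A - T)/(A + T) and (G - C)/(G + C) as defined in the Methods.

def read_fasta(path):
    """Read the first sequence from a FASTA file (header lines start with '>')."""
    seq_lines = []
    with open(path) as handle:
        for line in handle:
            if line.startswith(">"):
                if seq_lines:          # stop after the first record
                    break
                continue
            seq_lines.append(line.strip().upper())
    return "".join(seq_lines)

def composition_and_skews(seq):
    """Return sequence length, A+T content (%), A-T-skew and G-C-skew."""
    counts = {base: seq.count(base) for base in "ATGC"}
    total = sum(counts.values())
    at_skew = (counts["A"] - counts["T"]) / (counts["A"] + counts["T"])
    gc_skew = (counts["G"] - counts["C"]) / (counts["G"] + counts["C"])
    return {
        "length": total,
        "A+T %": 100.0 * (counts["A"] + counts["T"]) / total,
        "AT-skew": at_skew,
        "GC-skew": gc_skew,
    }

if __name__ == "__main__":
    # "mitogenome.fasta" is a placeholder name, e.g. an assembled sequence
    # exported from Geneious after annotation checking.
    stats = composition_and_skews(read_fasta("mitogenome.fasta"))
    for key, value in stats.items():
        print(f"{key}: {value:.3f}" if isinstance(value, float) else f"{key}: {value}")
```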
Phylogenetic Analyses To explore the phylogenetic position of the genera Longzhouacris, Ranacris, Conophyma and some related taxa, 96 complete mitogenome sequences in total, representing 85 species in Acrididae and 3 species in Dericorythidae, were selected as ingroups, and 2 species in Pamphagidae served as outgroups. The complete mitogenome dataset consists of the 13 protein coding genes (PCGs). The two rRNA genes were not used for the phylogeny inference due to their limited resolution above the genus level [27,29]. In order to involve more species of the genus Conophyma in our analysis, partial mitochondrial COX1 sequences of 6 Conophyma species were downloaded from NCBI (Supplementary Materials Table S1) and the corresponding fragment was extracted from the complete mitogenome to generate a new dataset of the partial COX1 fragment. The PCGs dataset was divided into 39 data blocks (13 PCGs divided into individual codon positions). Best-fit models of nucleotide evolution and best-fit partitioning schemes were selected using ModelFinder (see http://www.iqtree.org/ModelFinder/ (accessed on 9 July 2022)) [74]. The best-fitting models used for the phylogenetic analyses of the mitochondrial PCGs and partial COX1 datasets are shown in Supplementary Materials Table S2 and Supplementary Materials Table S3, respectively. The phylogenies were reconstructed in maximum likelihood (ML) and Bayesian inference (BI) frameworks. The ML phylogenies were reconstructed using IQ-TREE (ver. 1.6.12, see http://www.iqtree.org (accessed on 13 July 2022)) [75]. The approximately unbiased branch support values were calculated using UFBoot2 [76]. The analysis was performed in W-IQ-TREE (see http://iqtree.cibiv.univie.ac.at (accessed on 13 July 2022)) [77] using the default settings. Nodes with a bootstrap percentage of at least 70% were considered well supported in the ML analyses [78]. BI analyses were accomplished in MrBayes (ver. 3.2.1, see http://morphbank.Ebc.uu.SE/mrbayes/ (accessed on 15 July 2022)) [79], with two independent runs, each with four Markov Chain Monte Carlo (MCMC) chains. The analysis was run for 1 × 10^7 generations, sampling every 100 generations, and the first 25% of generations were discarded as burn-in, whereas the remaining samples were used to summarize the Bayesian posterior probabilities. All of the above analyses were implemented in Phylosuite (see http://phylosuite.jushengwu.com/ (accessed on 15 July 2022)) [73]. For the phylogenetic trees reconstructed from the partial COX1 dataset, the cutoffs of 0.95 posterior probability and 90 bootstrap were used to collapse the nodes below these cutoffs to a polytomy. To overcome, at least partially, some of the issues in mtDNA, such as the generally high saturation and the among-lineages and/or among-sites compositional bias, the mitochondrial PCGs were translated into amino acids, and then the amino acids were used to run an ML and a Bayesian analysis via MtOrt [24], the taxa-specific amino acid substitution model for Orthoptera, and MtRev [74], the best-fit model chosen according to the Bayesian Information Criterion (BIC), respectively. 
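To illustrate how the 39 codon-position data blocks can be declared for ModelFinder and IQ-TREE, the sketch below writes a NEXUS "sets" block from a list of gene lengths describing a concatenated PCG alignment. This is a generic illustration rather than the partition file used in this study, and the gene lengths shown are placeholders.

```python
# Sketch only: build a 39-block (13 PCGs x 3 codon positions) partition file in the
# NEXUS "sets" format accepted by IQ-TREE/ModelFinder. Gene lengths are placeholders;
# real values come from the concatenated alignment.

PCG_LENGTHS = [          # hypothetical aligned lengths (multiples of 3), in alignment order
    ("ND2", 1023), ("COX1", 1536), ("COX2", 684), ("ATP8", 159), ("ATP6", 675),
    ("COX3", 786), ("ND3", 354), ("ND5", 1719), ("ND4", 1341), ("ND4L", 288),
    ("ND6", 522), ("CYTB", 1137), ("ND1", 939),
]

def write_codon_partitions(genes, out_path="partitions.nex"):
    """Write one charset per gene and codon position using start-end\\3 notation."""
    lines = ["#nexus", "begin sets;"]
    start = 1
    for name, length in genes:
        end = start + length - 1
        for pos in (1, 2, 3):
            lines.append(f"    charset {name}_pos{pos} = {start + pos - 1}-{end}\\3;")
        start = end + 1
    lines.append("end;")
    with open(out_path, "w") as handle:
        handle.write("\n".join(lines) + "\n")

write_codon_partitions(PCG_LENGTHS)
# The resulting file can then be supplied to IQ-TREE together with the alignment,
# letting ModelFinder choose a best-fit substitution model for each of the 39 blocks.
```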
Characteristics of the Newly Sequenced Mitogenomes The mitogenomes of Longzhouacris mirabilis, Ranacris albicornis, and Conophyma zhaosuensis are circular molecules with total lengths of 16,164 bp, 15,720 bp, and 16,190 bp, respectively, and their gene structures and orders are similar to those of other published mitogenomes in Caelifera. Most PCGs have a typical initiation codon of ATN (Table 2). However, COX1 in Longzhouacris mirabilis and Ranacris albicornis initiates from a non-standard initiation codon of ACC, COX1 in Conophyma zhaosuensis initiates from CAA, and ATP6 in Longzhouacris mirabilis initiates from GTG. Seven PCGs (ND2, COX2, COX3, ND4, ND4L, ND6, and CYTB) initiated from ATG. The initiation codon ATT has the second highest frequency of usage, followed by ACC and ATA. With respect to termination codons, the majority of PCGs have a typical termination codon of TAA in most species (Table 2). The complete termination codon TAG occurs in ND1 in all of the three species. The incomplete termination codon TA occurs only in CYTB of Longzhouacris mirabilis and ND6 of Conophyma zhaosuensis. COX1 in all of the three species, ND4 in Longzhouacris mirabilis, and ND4 and ND5 in Conophyma zhaosuensis are terminated by T.

Table 2. Initiation and termination codons of protein coding genes (PCGs) of the newly sequenced complete mitogenomes.
Gene    Initiation codons (mt1825, mt1826, mt1938)    Termination codons (mt1825, mt1826, mt1938)
ND2     ATG, ATG, ATG       TAA, TAA, T
COX1    ACC, ACC, CAA       T, T, T
COX2    ATG, ATG, ATG       TAA, TAA, TAA
ATP8    ATT, ATT, ATC       TAA, TAA, TAA
ATP6    GTG, ATG, ATG       TAA, TAA, TAA
COX3    ATG, ATG, ATG       TAA, TAA, TAA
ND3     ATG, ATT, ATT       TAA, TAA, TAA
ND5     ATT, ATT, ATT       TAA, TAA, T
ND4     ATG, ATG, ATG       T, TAA, T
ND4L    ATG, ATG, ATG       TAA, TAA, TAA
ND6     ATG, ATG, ATG       TAA, TAA, TA
CYTB    ATG, ATG, ATG       TA, TAA, TAA
ND1     ATA, ATA, ATG       TAG, TAG, TAG
Note: mt1825: Longzhouacris mirabilis; mt1826: Ranacris albicornis; mt1938: Conophyma zhaosuensis.

The PCGs of the mitogenomes have an extremely similar codon usage pattern to other grasshoppers (Supplementary Materials Tables S7-S9). Among all codons of the PCGs, the most preferred codon with the highest average relative synonymous codon usage (RSCU) is UUA, which codes for Leucine and has an RSCU value of 3.98%. The next common codons are UCA (Serine) and CGA (Arginine), followed by UCU (Serine) and ACA (Threonine), with average RSCU values of 2.463%, 2.467%, 2.09%, and 1.993%, respectively, indicating a distinct codon usage bias in grasshoppers [29]. The sizes of the 22 tRNAs vary over a very small range in all the three newly sequenced mitogenomes (Supplementary Materials Table S10). Except for tRNASer-AGN lacking the DHU arm, all of the other 21 tRNAs can be folded into a typical cloverleaf structure (Supplementary Materials Figure S1). 
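The RSCU values discussed above can be recomputed from the concatenated coding sequences. The sketch below is a generic illustration (not the MEGA X workflow used here): it counts codons under the invertebrate mitochondrial genetic code via Biopython and derives RSCU as the observed count of a codon divided by the mean count of its synonymous codons; the example sequence is invented.

```python
# Sketch: RSCU (relative synonymous codon usage) from in-frame coding sequences,
# using Biopython's invertebrate mitochondrial codon table (NCBI table 5).
from collections import Counter, defaultdict
from Bio.Data import CodonTable

def rscu(cds_sequences):
    """cds_sequences: iterable of in-frame nucleotide strings (stop codons ignored)."""
    table = CodonTable.unambiguous_dna_by_name["Invertebrate Mitochondrial"]
    codon_to_aa = table.forward_table            # codon -> amino acid (stops excluded)

    counts = Counter()
    for seq in cds_sequences:
        seq = seq.upper().replace("U", "T")
        for i in range(0, len(seq) - len(seq) % 3, 3):
            codon = seq[i:i + 3]
            if codon in codon_to_aa:
                counts[codon] += 1

    # group synonymous codons by the amino acid they encode
    synonyms = defaultdict(list)
    for codon, aa in codon_to_aa.items():
        synonyms[aa].append(codon)

    values = {}
    for aa, codons in synonyms.items():
        total = sum(counts[c] for c in codons)
        if total == 0:
            continue
        mean = total / len(codons)
        for c in codons:
            values[c] = counts[c] / mean         # RSCU definition
    return values

# Example with an invented toy sequence (TTA TTA TCA CGA ACA):
print(rscu(["TTATTATCACGAACA"]))
```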
The numbers of base mismatches in the tRNAs vary drastically among the different mismatch types in all species, but all species have similar distribution patterns of base mismatches (Table 3). The mismatch of G-U represents the majority of the total mismatches. A-A occurs only once in trnD. U-U occurs in trnQ and some other tRNAs. A-G occurs only in trnW. For the most frequent mismatch of G-U, it does not occur in trnM, trnD, trnN, and trnW for all three species, and the maximum mismatch number in one tRNA is five (Table 4). The lrRNA and srRNA are located between the trnL1 and trnV, and trnV and A + T-rich regions, respectively. Their lengths vary between 1365 and 1389 bp (lrRNA) and between 792 and 806 bp (srRNA). The control region is located between rrnS and trnI, and contains the highest proportion of A + T content, ranging from 78.6 to 80.9%. The lengths of the control region vary between 860 and 1437 bp (Supplementary Materials Tables S4-S6). Table 3. Total numbers of different types of base mismatches in tRNAs of the three newly sequenced mitogenomes. In addition to the control region, there are also some gene intervals or base overlaps between some genes, and the maximum overlap area is between trnL1 and rrnL (Supplementary Materials Table S10). There are nine, seven, and seven tightly aligned gene pairs without overlap or interval in the mitogenomes of the three species, respectively. Phylogeny The phylogenetic trees inferred from the dataset of the 13 mitochondrial PCGs using maximum likelihood and Bayesian inference methods have an extremely consistent topology above the genus level (Figure 2, Supplementary Materials Figure S2). At the family level, the monophylies of both Acrididae and Dericorythidae are not supported. The three species of Dericorythidae form three individual clades. Conophymacris viridis completely falls into Acrididae, having the closest relationship with Xiangelilacris zhongdianensis. Conophyma zhaosuensis and Dericorys annulata are located near the base of the trees, but do not form a single clade. Conophyma zhaosuensis forms a small clade with Leptacris sp. and Dericorys annulata forms an individual clade itself, located at the outermost position of the ingroup. At the subfamily level, the monophylies of six subfamilies (Spathosterninae, Oxyinae, Cyrtacanthacridinae, Eyprepocnemidinae, Calliptaminae, and Coptacrinae) are usually retrieved with strong nodal support. However, the remaining subfamilies are not recovered as monophyletic. For the subfamily Melanoplinae, nearly all species cluster into an independent clade except for Xiangelilacris zhongdianensis, which forms a small clade with Conophymacris viridis of Dericorythidae, and has close relationships with Coptacrinae and Longzhouacris mirabilis of Habrocneminae. The two species of Habrocneminae sampled in this study, Longzhouacris mirabilis and Menglacris maculata, do not form a single clade, but fall into two distantly separated clades, one of which is Menglacris maculata + Ranacris albicornis, and the other is Longzhouacris mirabilis + Conophymacris viridis + Xiangelilacris zhongdianensis + Coptacrinae, with a bootstrap value of 100% or posterior probability of 0.96. The members of Hemiacridinae are divided into two distantly separated clades, with Hieroglyphus species having a close relationship with Spathosterninae, but Leptacris sp. forming a small clade with Conophyma zhaosuensis. 
The members of Catantopinae are divided into three clades: Ranacris albicornis, the genus Traulia and the typical Catantopini species. Ranacris albicornis is consistently most related to Menglacris maculata, with a bootstrap value of 100% or a posterior probability of 1. The genus Traulia has the closest relationship with the clade including Coptacrinae species. The clade of the typical Catantopini species forms the sister group of Cyrtacanthacridinae. Although the phylogeny deduced from the mitochondrial PCGs is robust, the phylogenetic trees reconstructed from the dataset of partial COX1 fragment sequences exhibit great differences from the former (Supplementary Materials Figures S3 and S4). The monophylies of the subfamilies Calliptaminae, Coptacrinae, Eyprepocnemidinae, and Oxyinae are no longer supported in both or at least one tree from the COX1 dataset. The relationships among the key groups are also very different between the ML and BI trees, and the relationships among most clades are unsolved, forming a large polytomy at the base of the trees. Even the two Hieroglyphus species are also split into two distantly separated clades. This result indicates the extreme instability of the phylogeny reconstructed using the COX1 sequence. Despite the great difference in the topology between the trees from the mitochondrial PCGs and COX1 datasets, and the instability of the COX1 trees, the small clades of Ranacris albicornis + Menglacris maculata and Conophymacris viridis + Xiangelilacris zhongdianensis are always robust in all trees (Figure 2, Supplementary Materials Figures S2-S4). The clade of the genus Conophyma is also robust in the COX1 trees, but the relationship of this clade with other groups varies (Supplementary Materials Figures S3 and S4). The position of Longzhouacris mirabilis also varies in the COX1 trees (Supplementary Materials Figures S3 and S4). It falls into the clade of the subfamily Eyprepocnemidinae in the ML tree of the COX1 dataset (Figure S3), but forms a polytomy clade with some species of the subfamilies Hemiacridinae and Oxyinae, as well as other clades, in the BI tree of the COX1 dataset (Supplementary Materials Figure S4). In addition, it is noticeable that the genus Emeiacris always falls into the clade of the subfamily Melanoplinae and has an extremely stable close relationship with Paratonkinacris in all trees (Figure 2, Supplementary Materials Figures S2-S4). A similar case also occurs in the genus Choroedocus, which always forms a stable clade with Shirakiacris species of the subfamily Eyprepocnemidinae and has a closer relationship with the subfamily Calliptaminae than Catantopinae in the trees from the dataset of mitochondrial PCGs (Figure 2, Supplementary Materials Figure S2). 
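The topological disagreement between the PCG-based and COX1-based trees described above can also be quantified. A simple option (not part of the published analysis) is the unweighted Robinson-Foulds distance between two Newick trees sharing the same taxon set, sketched here with DendroPy; the tree file names are placeholders.

```python
# Sketch (not part of the published analysis): quantify topological differences
# between two trees, e.g. the ML tree from the 13 PCGs vs. the ML tree from COX1,
# with the unweighted Robinson-Foulds (symmetric difference) distance.
import dendropy
from dendropy.calculate import treecompare

# Both trees must share one TaxonNamespace and the same taxa for the comparison
# to be meaningful.
taxa = dendropy.TaxonNamespace()
tree_pcg = dendropy.Tree.get(path="ml_13pcg.treefile", schema="newick", taxon_namespace=taxa)
tree_cox1 = dendropy.Tree.get(path="ml_cox1.treefile", schema="newick", taxon_namespace=taxa)

tree_pcg.encode_bipartitions()
tree_cox1.encode_bipartitions()

rf = treecompare.symmetric_difference(tree_pcg, tree_cox1)
print(f"Unweighted Robinson-Foulds distance: {rf}")
```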
With a further look at the trees reconstructed from the amino acid dataset, we find that the ML tree reconstructed using the MtOrt model (Figure 3) has an extremely high similarity in the main topology with the trees deduced from the mitochondrial PCGs (Figure 2, Supplementary Materials Figure S2), including the non-monophyly of Dericorythidae, the monophylies of the six subfamilies (Spathosterninae, Oxyinae, Cyrtacanthacridinae, Eyprepocnemidinae, Calliptaminae, and Coptacrinae), the positions of the genera Emeiacris and Choroedocus, the relationship of Menglacris with Ranacris, and that of Conophymacris with Xiangelilacris, and so on. The most important difference is that Dericorys annulata and Conophyma zhaosuensis form an independent clade only in this tree (Figure 3). The BI tree deduced from the amino acid dataset using the MtRev model (Supplementary Materials Figure S5) is similar to the ML tree, but the monophyly of the subfamily Calliptaminae is no longer supported, with Peripolus nepalensis escaping from the clade of the genus Calliptamus. Phylogenetic Position of the Genus Emeiacris The genus Emeiacris was established with Emeiacris maculata Zheng, 1981 as the type species [80] and contains three known species so far [32]. According to the original description, Emeiacris is most similar to the genus Oxyacris of the subfamily Oxyinae, and is mainly characterized by the rounded apex of the lower knee-lobes of the hind femora (Figure 4e), the widely separated metasternal lobes (Figure 4f), and the distinct process at the lateral margin of the supra-anal plate. When Emeiacris was erected, it was not definitely assigned to any subfamily. Therefore, it is possible that the authors of OSF placed Emeiacris in the subfamily Oxyinae according to the original reference, where the closest relative of Emeiacris was Oxyacris of the subfamily Oxyinae [32]. Subsequently, Emeiacris was definitely placed in the subfamily Podisminae [81], and then in Melanoplinae [33]. Morphologically, the species of Emeiacris are extremely similar to the species of the genera Ognevia and Fruhstorferiola (Figure 4a-n). The rounded apex of the lower knee-lobes of the hind femora and the widely separated metasternal lobes are typical distinguishing characters of Melanoplinae (Figure 4e,f), but not those of Oxyinae (Figure 4o-q). In addition, the absence of the ectoapical spine in the hind tibia (Figure 4g,h), and the epiphallus not divided into two separated symmetric parts (Figure 4i-k), also disagree with the diagnostic characters of Oxyinae, but match those of Melanoplinae. In the molecular study, Emeiacris consistently falls into the clade of Melanoplinae and has a robust closest relationship with Paratonkinacris in all trees (Figures 2 and 3, Supplementary Materials Figures S2-S5). Therefore, the genus Emeiacris should be considered as a member of the subfamily Melanoplinae rather than Oxyinae. 
No matter Demodocus, Heteracris, or Choroedocus, they were always definitely assigned to the group Euprepocnemes [82,86], or the tribe Eyprepocnemini [87,88], or the subfamily Eyprepocnemidinae [33,34,38,85]. There are indeed a few works where Choroedocus is placed in the subfamily Catantopinae [81,89,90], but Catantopinae in this sense contains actually all Eyprepocnemidinae taxa. In other words, all Eyprepocnemidinae taxa are members of the subfamily Catantopinae, and there is no category of the subfamily Eyprepocnemidinae in that classification scheme. We do not know why the authors of OSF finally placed the genus Choroedocus in the subfamily Catantopinae. The most probable reason may be that the genus Choroedocus was once placed by Liu [91] in the family Catantopidae without assignment of the subfamily position. However, this is not strong evidence for the decision because the truth is that Liu [91] merely listed no subfamily category in his work. After examining materials of the genus Choroedocus, we found they highly agree with the distinguishing characters of the subfamily Eyprepocnemidinae: the pronotum with distinct lateral carina and a large, black, velvety maculation, the hind tibiae with many more spines on the external margins (Figure 5a-d), and the male genitalia structure, especially the epiphallus (Figure 5e-j), which is very similar to that of Shirakiacris shirakii (Figure 5k-t). Based on molecular analysis, Choroedocus has a robust relationship with Shirakiacris of the subfamily Eyprepocnemidinae (Figures 2 and 3, Supplementary Materials Figures S2 and S5). Therefore, it is more reasonable to consider Choroedocus as a member of the subfamily Eyprepocnemidinae. The family name Dericorythidae was first proposed by Eades [41] according to the distinctive pseudoarch in the phallic complex. The pseudoarch found in Dericorythidae is a paired structure not connected across the midline (Figure 6j,s-t). In contrast, the arch of aedeagus rises from the median, dorsobasal region of the dorsal valves of aedeagus [44] ( Figure 4m). Eades [41] thought that the presence of a well-developed arch sclerite should be treated as a crucial character in defining the family Acrididae. However, the representatives of Dericorythidae examined by Eades [41] were extremely limited, with only two species in the subfamily Dericorythinae, and one species in the subfamilies Conophyminae and Iranellinae, each, leading to a possibility that the morphological diversity was not fully represented by the limited taxon sampling. In this study, the 3 sampled species of Dericorythidae did not cluster into a single clade in all phylogenetic trees (Figures 2 and 3, Supplementary Materials Figures S2-S5). Conophymacris szechwanensis first clusters into a clade with Xiangelilacris zhongdianensis, and then with Longzhouacris mirabilis and two species of Coptacrinae in the trees from the mitogenome dataset, showing an extremely distant relationship with the two other species of Dericorythidae (Figures 2 and 3, Supplementary Materials Figures S2 and S5). Furthermore, Conophymacris szechwanensis has not only a true arch rather than a pseudoarch (Figure 7j), but also an extremely different external morphology (Figure 7a-d) and geographical distribution. 
Dericorys annulata and Conophyma zhaosuensis are both located at the base of the trees, but do not form a single clade in most phylogenetic trees (Figure 2, Supplementary Materials Figures S2 and S5), except in the ML tree deduced from amino acid dataset with MtOrt model, where Dericorys annulata and Conophyma zhaosuensis forms an independent clade ( Figure 3). Although both Dericorys annulata and Conophyma zhaosuensis have pseudoarches in the phallic complex (Figure 6j,s,u) and similar geographical distribution region, they exhibit distinct differences in external morphology, including the general appearance and the male genitalia structure, especially the epiphallus (Figure 6e-i,p-r). Therefore, the family Dericorythidae is certainly not a monophyletic group, and the relationship among the family needs to be clarified by denser taxon and character sampling and more nuclear molecular markers, including both genome and transcriptome data [5]. After all, morphology may be heavily influenced by many abiotic and biotic factors [92]. Some morphological characters may have evolved independently multiple times [2,28] and may not be reliable for recognizing monophyletic groups within some higher categories [28,93]. Phylogenetic relationships between or within some groups may be clouded by many factors, such as gene tree discordance, introgression, and the gene tree anomaly zone [92]. Denser taxon sampling will reveal a wide range of variation and more efficiently improve an artificial classification [94][95][96]. Phylogenetic Relationship between the Genera Conophymacris and Xiangelilacris The genus Conophymacris was erected by Willemse [97] with Conophymacris chinensis Willemse, 1933 as the type species, and placed originally in the subfamily Catantopinae. Later, it was respectively placed in the tribe Conophymatini of Catantopinae [43,98], the subfamily Conophyminae [40], and Podisminae [33,34,81], but not assigned to a definite tribal position within the subfamily Conophyminae in OSF [32]. The genus Xiangelilacris was established by Zheng et al. [99] with the type species, Xiangelilacris zhongdianensis, as the only known species so far. This genus is most similar to the genera Indopodisma and Pedopodisma, according to the original description. Therefore, it was undoubtedly recognized as a member of the tribe Podismini of the subfamily Melanoplinae by later acridologists [34]. However, this opinion has not been supported by mitogenome evidence, and it always has a very robust close relationship with Conophymacris in all phylogenetic trees [27] (Figures 2 and 3, Supplementary Materials Figures S2-S5). The clade of Conophymacris + Xiangelilacris fell into neither the subfamily Melanoplinae nor the clades of other species of the family Dericorythidae. We examined some materials of Conophymacris szechwanensis (Figure 7a-k) as well as the types of Xiangelilacris zhongdianensis (Figure 7l-s), and found that the types of Xiangelilacris zhongdianensis are nymphs rather than adults, with the bud of hind wings distinctly covering on that of tegmina (Figure 7r,s), and most similar to Conophymacris species. In addition, Conophymacris szechwanensis has a true arch sclerite in the phallic complex (Figure 7j), indicating its possible membership in the family Acrididae. Both Conophymacris szechwanensis and Xiangelilacris zhongdianensis have distinct lateral carinae on the pronotum and ectoapical spines in the hind tibiae, which are absent in Melanoplinae. 
Therefore, there is no doubt that the genera Conophymacris and Xiangelilacris are closely and robustly related to each other, and that neither of them belongs to the subfamily Melanoplinae, the family Dericorythidae, or even the Conophyminae. Their exact position needs further comprehensive research. Phylogenetic Relationship between the Genera Menglacris, Ranacris and Longzhouacris The genus Menglacris was established by Jiang and Zheng [100], with Menglacris maculata Jiang and Zheng, 1994 as the type species. Although it was not assigned to a definite subfamily position originally, its membership in Habrocneminae was recognized by Li et al. [33] and followed by Mao et al. [34]. The taxonomic history and subfamily position of the genus Ranacris have been mentioned in the introduction section. Although these two genera are placed in different subfamilies in OSF, they display a very robust, close relationship with each other in all phylogenetic trees (Figures 2 and 3, Supplementary Materials Figures S2-S5) and an extremely high similarity in external morphology and male genital structure (Figure 8). However, they are always far away from the genus Longzhouacris of the subfamily Habrocneminae in all phylogenetic trees (Figures 2 and 3, Supplementary Materials Figures S2-S5), and show distinct differences in morphology and male genital structure (Figures 8 and 9). Therefore, we conclude that the genera Menglacris and Ranacris are close relatives, but their relationship with Longzhouacris and other related groups can be resolved only in a broader analysis that includes all members of the subfamily Habrocneminae and the tribe Mesambriini, or even more related groups. Performance of the Mitochondrial COX1 Gene in Reconstructing Phylogeny A previous study has suggested that the COX1 barcode region may perform much better in phylogenetic reconstruction at the genus and species ranks than at higher ranks [101]. In this study, the COX1 barcode region extracted from the mitogenome, plus that of six additional Conophyma species [102], was used again to test: (1) the accuracy of the mitogenome sequence of Conophyma zhaosuensis; (2) the phylogenetic position of the genus Conophyma using more sampled species; and (3) its performance in reconstructing phylogeny at higher categories under the phylogenetic framework derived from the complete mitogenome dataset. The result showed that Conophyma zhaosuensis always fell within the same clade together with other Conophyma species (Supplementary Materials Figures S3 and S4) [102], indicating the reliability and accuracy of the newly sequenced mitogenome of Conophyma zhaosuensis.
Although the Conophyma species always formed a single clade in both the ML and BI trees (Supplementary Materials Figures S3 and S4), they did not cluster into a single clade with Dericorys annulata and Conophymacris szechwanensis, just as in the trees deduced from the complete mitogenome dataset (Figure 2, Supplementary Materials Figure S2), indicating the remote relationship among these three genera. As for the performance of the COX1 gene in resolving the phylogeny at higher categories, the monophyly of most well-characterized subfamilies was not supported, except for Spathosterninae and Cyrtacanthacridinae. Sometimes even congeneric species, such as the Hieroglyphus species, were split into different clades. Therefore, the mitochondrial COX1 gene alone is not suitable for resolving the phylogeny of higher categories, at least in Acridoidea, but is indeed powerful in reconstructing phylogenetic relationships among closely related species [103], despite occasionally high error rates in individual lineages [104].
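A minimal way to carry out the first of these checks, namely whether a newly sequenced COX1 barcode clusters with its congeners, is sketched below. This is only an illustrative distance-based check in Python with Biopython, not the maximum likelihood or Bayesian analyses used here, and the input file name is a hypothetical placeholder for a pre-aligned COX1 FASTA alignment.

```python
# Illustrative sanity check only (not the ML/BI analyses): build a quick neighbour-joining
# tree from an existing COX1 alignment and inspect whether the newly sequenced taxon
# groups with its congeners. The file name is a hypothetical placeholder.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("cox1_barcodes_aligned.fasta", "fasta")  # pre-aligned COX1 sequences

calculator = DistanceCalculator("identity")  # simple p-distance; a model-based distance would be preferable
distance_matrix = calculator.get_distance(alignment)

tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbour-joining tree
Phylo.draw_ascii(tree)  # visually confirm, e.g., that all Conophyma sequences form one clade
```

Such a rough check cannot replace model-based tree inference, but it is a quick way to flag a misassembled or contaminated barcode before running the full analyses.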
2023-01-21T06:16:41.168Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "52fc4ce6fc5198851b0f1118f426ce55cb988b69", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4450/14/1/85/pdf?version=1673612002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a7602848e86cc4635c296f1d7c250f19c2e944c4", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
246478102
pes2o/s2orc
v3-fos-license
Exploring Key Factors for Contractors in Opening Prefabrication Factories: A Chinese Case Study Adoption of prefabrication is essential for improving the urban built environment. However, the existing prefabrication market in China is far from mature. As the stakeholder who conducts construction activities, the contractor faces the dilemma of lacking a steady supply of prefabricated components. In this circumstance, a potential solution is for contractors to open their own prefabrication factories to guarantee a stable component supply. The aim of this research is to explore the key factors for contractors in opening prefabrication factories. Firstly, a total of 28 influencing factors were identified from the literature. Then, the identified factors were divided into four categories: policy environment, market environment, technological environment, and enterprise internal environment. Through interviews with experienced professionals, a total of 19 factors were selected for further analysis. Based on the 19 factors, a questionnaire was designed and distributed to experts to rate the degree of mutual influence among the factors. The collected data were analyzed using Ucinet 6.0 software, and the adjacency matrix and the visual models were established. Finally, through the analysis of node centrality, betweenness centrality, and closeness centrality, four key influencing factors were determined: mandatory implementation policy, the price of precast concrete components, market demand, and the contractor's strategic objectives. The results of this study could assist contractors in making decisions about opening their own prefabrication factories toward a more sustainable environment. INTRODUCTION A prefabricated building refers to a building in which all or parts of the building are prefabricated in a factory. These are then transported to construction sites for assembly, connection, and partial cast-in-situ construction (1)(2)(3). Compared with traditional construction methods, prefabrication has many advantages (4). First, using prefabricated components is a way to industrialize construction, which greatly improves the duration, quality, and sustainability of the project. It also reduces waste, noise, dust, operation cost, labor demand, and resource depletion at the construction site. In addition, the main components of a prefabricated building are produced in factories, while the assembly of components can be carried out at the same time as other in-situ construction processes (5). Bottlenecks and construction delays are common problems due to inefficient construction site production. Prefabricated components can reduce bottlenecks, improve production rates, and shorten production times (6). In addition, weather does not have significant negative impacts on prefabricated construction assembly, and this effectively ensures construction efficiency (7). Furthermore, relevant research results show that prefabricated components generate savings ranging from 4 to 14% of total life-cycle energy consumption. The surrounding urban built environment also benefits from waste reduction and quality control (8). Prefabricated components can be used to control pollution, since the pollution resulting from site work has become a major threat to the urban environment. Prefabricated components are assembled on-site by lifting and splicing and thus reduce the workload for in-situ construction, improve the quality control process, and ensure the health and safety of workers (9).
Due to the high degree of standardization of prefabricated components, it is possible to adopt digital technologies and information management in the design, production, transportation, and assembly process. Development of prefabricated construction in China has been ongoing for over 70 years (10)(11)(12). However, the development of prefabrication progressed at a slow pace due to some challenges such as quality problems, higher prefabricated component cost (13), and rapid development of in-situ construction (14). Prefabrication is linked in several ways to factory production, assembly construction, information management, and intelligent applications (15). In recent years, with a declining labor force, an increase in employment costs and national sustainable development requirements (16), prefabricated construction has gained more attention. It plays a critical role in realizing the industrialization and modernization of the construction industry (17). Thus, prefabrication has been widely implemented and promoted in China over the past few years. To promote prefabrication development, the national government has put forward a development goal of having prefabricated buildings make up 30% of all newly built buildings in about 10 years (18). In response to this development goal, more than 30 cities including Shanghai, Chongqing, and Beijing have issued a series of policies to promote prefabrication. In addition, standards and norms have been introduced to promote the implementation of prefabricated construction. They have been shown to play an important role in safeguarding the development of prefabrication in China (19). According to officially released data, a total of 630 million square meters of prefabricated buildings were built in 2020, which accounted for 20.5% of all newly built buildings (20). Along with the rapid promotion of prefabricated buildings, industries related to prefabrication have developed quickly as well. According to government statistics, a total of 328 national-level prefabricated construction industrial bases and 908 provincial-level industrial bases have been established across the country, which indicates that the market demand for prefabricated components is huge (21). To deal with such a huge demand, prefabrication factories have been opened by different stakeholders including developers, design companies, contractors, and other investors (22). As the stakeholder who is directly involved in the construction process, the contractor has the willingness to open prefabrication factories to preferentially fulfill demand (23). However, as opening a prefabrication factory requires a huge amount of resources (e.g., site, technology, investment), the contractors need to carefully consider their potential decisions of opening prefabrication factories from many aspects. In existing literature, there is a knowledge gap of identifying what are the key factors for contractors to make decisions. Thus, this paper aims to explore the key factors contractors take into consideration when deciding to open prefabrication factories. Identification of Potential Factors Potential influencing factors were extracted from previously published relevant literature. After an initial screening, a total of 28 potential influencing factors were identified. Five experienced professionals were then invited to filter the identified potential influencing factors and their opinions on these factors were solicited. 
Based on their suggestions and comments, nine potential influencing factors were excluded and the remaining 19 factors were classified into four categories according to their attributes as shown in Table 1. Explanation of the Influencing Factors The influencing factors can be explained as follows. Policy Environment To develop industrialized construction in China, a mandatory implementation policy has been promulgated. According to the 2015 Construction Industry Modernization Development Outline, prefabricated buildings will account for more than 30% of new buildings by 2025 (24). The mandatory implementation policy was promulgated to upgrade existing industrial structures to satisfy green development goals. In essence, the aim of the mandatory implementation policy was to change the traditional construction mode to the prefabrication method, which is an important measure for promoting supply-side structural reform (25). In short, the policy is an essential driving force for promoting the application of prefabrication construction in the construction industry (26). Economic incentives for manufacturing such as tax reductions have been continuously emphasized. Liang (27) stated that the total value of tax reduction was approximately 2.36 trillion yuan in 2019, which was essential to promote the high-quality development of the manufacturing industry. As equipment accounts for a huge proportion of total costs, a tax reduction policy could help to decrease the total cost and promote highquality development. The availability of priority land supply refers to giving priority to providing land for prefabrication factory sites. On the basis Market Environment Relevant research shows that opening prefabrication factories is aligned with current market demand in China. This is because the annual market size for the nationwide industrialization of construction is expected to reach 5 trillion yuan in 2025, which would account for about 50% of the total output value of the entire construction industry (30). Market demand refers to the need for prefabricated components which promote technological innovation. The next 10 years will be a golden period of rapid development in the industrialization of construction. The level of industrial chain maturity directly affects the benefits of contractors, which is based on the relationship between market supply and demand. There are two key nodes in the industry chain, namely the prefabricated components in the production stage and the sales stage (31). Industrial chain maturity in this article refers to a new prefabricated industry chain which is a key part of the construction process. The competitive pressure for contractors in the construction market is sustained growth, since the construction industry is subject to strong internal competition. The increasingly competitive environment has put various contractors under great pressure. Opening prefabrication factories is the right way to provide developers with better products and services. Contractors can also acquire core competitive prefabricated technology to outshine market competition (32). To a significant extent, the demand for prefabricated buildings depends on the willingness of developers to accept them. They are the upstream enterprises supplying contractors in the industry chain (33). The willingness of developers to accept prefabricated buildings has led to prefabrication methods that have gone beyond traditional construction constraints. 
With increasing acceptance of prefabricated technology, contractors will have many new opportunities and challenges in the construction market. The price of precast concrete components is a vital factor in the cost of any building. This refers to the cost management of prefabricated projects (34). Jiang et al. (35) stated that the selection of prefabricated component suppliers is one of the important links in the housing industry chain and the price of prefabricated components is one of the key factors. The choice on whether to open prefabrication factories or not depends on the profit earned from prefabrication sales. Industrial support refers to the relevant industrial conditions for prefabricated production, such as the supply of raw materials, transportation conditions, and industrial workers (36). The completeness of industrial support conditions in the region affects the investment, construction, and production costs of prefabricated component factories. Insufficient industrial support capacity will lead to a lack of relevant enterprises in the regional industrial chain and increase the operating costs of enterprises. Technological Environment The professional competence of the contractor refers to its technical competence in the prefabricated construction field. It is critical to acquire various key construction techniques to keep up with cutting-edge technology, including systems, specialization, integration, prefabrication, assembly, and information technology (37). Opening prefabrication factories can help improve the professional level of contractors in the prefabricated construction field. Furthermore, the professional competence of a contractor includes its human resources department, as "having the right people" is crucial for success. The financial and quality outcome of a project is highly dependent on the competence of the individual chosen as the site manager (38). It is crucial to make perfect standard specifications which are related to the design, production, and installation of prefabricated buildings. Vakili (39) stated that maturity of standards was conducive to the production, operation, and sales of prefabricated component factories. Moreover, standards establish requirements that stipulate prefabricated construction processes and products. Related design and technical standards can provide professional expertise to guide contractors in the implementation of prefabricated construction (40). Enterprise Internal Environment The internal environment of the company refers to the economic and technological level of the enterprise which may sway its decision to open prefabrication factories. There is no obvious gap in policy awareness among various sectors, so large companies do not have a better understanding of market-based instruments than their small and medium-sized counterparts (41). Li et al. (42) stated that firm size has a significant effect on the willingness of construction enterprises to accept the policy. Small enterprises are restricted by many factors such as capital, technology, and talent. The strategic goals of a contractor guide the direction of development of an enterprise. Building prefabrication was strongly promoted by local governments, which relies on the in-situ manufacturing and work-site assembly of prefabricated components (17). The strategic objectives of contractors should be aligned with local policies, so that the green benefits of prefabricated building initiatives can be reaped in China. 
The establishment of prefabrication factories can expand the business scope of the enterprise and enhance the level of specialization in prefabricated buildings. This is conducive to the development of EPC (Engineering Procurement Construction) and other general contracting methods (43). Prefabricated buildings have the potential for improved quality, productivity, efficiency, safety, and sustainability (44). The perception of contractors has been further enhanced under the mandatory implementation policies. Yu et al. (45) stated that a perception of greater purchasing behavior led to more actual purchasing behavior to a limited extent, which is analogous to the building behavior of contractors who work with prefabrication factories. The innovative potential of a contractor refers to the innovation present in the construction technology or assembly technology used by the contractor. The level of innovation of Chinese construction enterprises is not ideal, and the innovative potential of construction enterprises must be urgently strengthened at this stage (46). Building prefabrication factories and promoting contractor transformation can effectively enhance the innovative potential of contractors (47). According to the findings of field interviews, research and applications, prefabricated technologies are deeply affected by the market environment, especially company awareness of prefabrication buildings and prefabrication factories. The willingness of contractors to open prefabrication factories depends on market demand. The competitiveness of traditional buildings is still higher than that of prefabricated buildings in China (48). Huang et al. (49) stated that it was influenced by many factors. The more willing the contractors, the greater the possibility that they would establish prefabrication factories. The production of prefabricated components is the link between design and construction. It is essential for prefabrication factories to achieve precision manufacturing and quality excellence. The realization of precision manufacturing can effectively avoid or even eliminate waste and uncertainty in the production process, which can improve product quality and production efficiency (26,50). Contractors have technical and management advantages when establishing prefabrication factories that can help to realize in-plant precision manufacturing and lean construction assembly of prefabricated components at work sites. Most contractors often pursue the economic benefits derived from developers because the ultimate goal of contractors is to earn a profit. The cost of prefabricated construction is higher than traditional construction due to the uneven distribution of prefabrication factories and high operation costs in China (51). Prefabrication and industrialized building systems confer advantages including shorter project duration, cost savings, enhanced site protection, better product quality, and reduced waste (52). For contractors, opening prefabrication factories can enhance their own industrial capacity. This proposal takes into consideration the life cycle cost and the increased benefits of the final product. The aim of promoting the sustainable development of prefabricated buildings is to satisfy current and future needs of the construction industry. It also ensures faster progress, cost-effectiveness and construction quality, as well as worker safety (53). 
Therefore, contractors will pay more attention to construction quality, as producing standardized and high-precision prefabricated components can effectively avoid errors caused by manual labor in traditional construction. Furthermore, prefabricated construction technology can effectively optimize the construction structure and improve construction quality (54,55). RESEARCH METHODOLOGY Data Collection In this study, 11 experienced experts from different stakeholders, such as government (1), research institutes (2), contractors (4), and prefabrication factories (4), were invited to give scores for the degree of influence of various factors by completing questionnaires. All of the invited experts had more than 5 years of professional experience and represented different stakeholders' viewpoints on opening a prefabrication factory. The first row and the first column of the matrix are the influencing factors, and the values in the matrix represent the degree of influence that each element in the columns has on the elements in the rows. In the questionnaire, the experts were asked to give a score ranging from 0 to 4 to evaluate the degree of influence. A score of 0 means that the element in the column does not influence the element in the row; a score of 1 means the degree of influence is small; a score of 2 means the degree of influence is average; a score of 3 means the degree of influence is large; and a score of 4 means a very high influence. After collecting the questionnaire results from the experts, Ucinet 6.0 software was used to conduct a consistency analysis to check the expert scoring results. Ucinet 6.0 software was selected because it can handle the original data in matrix format and then provide visible relationships between different influencing factors. Social Network Analysis According to social network theory, actors are resource-competitive and form social relations and social network structures through resource flows. Therefore, the characteristics of the network structure and the relationships in the network have important impacts on the actors (56). The relationship structure of social networks can generally be represented by graphs or matrices. There are three main methods of representation used in this article, namely degree centrality, betweenness centrality, and closeness centrality. Among them, degree centrality refers to the degree to which a node is directly related to other nodes. The greater the number of connected nodes, the greater the power of the node in the social network. Betweenness centrality refers to the ability of a node to control other nodes. The greater the centrality, the larger the number of nodes that need to pass information between each other via this node. Closeness centrality is the reciprocal of the sum of the distances from the node to all other nodes. The larger the value, the easier it is for the node to communicate with other nodes to transfer resources. Social network analysis addresses the complex relationships of interest between actors or organizations in real life by quantitatively analyzing the social relationships between actors. Therefore, it is widely used in various research fields, such as economics (57), sociology (58), and management (59). In the field of construction engineering, the decision-making of contractors is influenced by many actors, such as governments, developers, and component manufacturers.
This study used social network analysis to identify the power and reputation of actors in the decision-making mechanism for contractors. It provided a basis for the selection of the main body of the social network organization structure. Meanwhile, it can effectively identify key influencing factors in the behavioral decision-making mechanism, analyze the degree of mutual influence, and analyze the decision-making mechanism for contractors when deciding to open prefabrication factories (60). Network Density Analysis Network density can reflect the degree of connection between nodes in a network. If the overall social network graph is an undirected relational network with n influencing factors, the theoretical value of the total number of associations is n(n - 1)/2. If the actual number of associations (which can be considered as the number of connections) contained in the social network is M, the social network density is then equal to M/(n(n - 1)/2) = 2M/(n(n - 1)). If the whole social network graph is a directed relational network, the theoretical value of the total number of associations is n(n - 1) and the network density is equal to M/(n(n - 1)) (61). In particular, the formula for network density in the directed relational network is: Density = M/(n(n - 1)). Degree Centrality Analysis In the social network diagram, degree centrality refers to the number and intensity of direct or indirect connections (adjacent connections) with a certain node. If the node is located in the center of the social network diagram, the value for degree centrality of this node increases, and more nodes are connected with that point. Degree centrality aims at finding centrality based on the notion that important nodes have many connections, as expressed in Equation (2) (61): C_D(p_k) = Σ_{i=1}^{n} a(p_i, p_k), where n is the number of nodes in the network and a(p_i, p_k) is a distance function; a(p_i, p_k) = 1 if and only if node p_i and node p_k are connected, and a(p_i, p_k) = 0 otherwise. Betweenness Centrality Analysis Betweenness centrality aims at finding centrality based on the assumption that nodes that connect other nodes are important nodes. Betweenness centrality measures the ability of an influencing factor to control other factors, i.e., other factors can only be related through this factor, and nodes with high betweenness centrality are in the center of the network. The larger the shape of the node, the greater the betweenness centrality of the influencing factor, i.e., the closer to the center of the network the influencing factors are, the stronger the degree of influence on other nodes. The formula used to express betweenness centrality is shown in Equation (3) (61): C_B(p_k) = Σ_{i<j} g_ij(p_k)/g_ij, where g_ij is the number of geodesics (shortest paths) linking node p_i and node p_j and g_ij(p_k) is the number of geodesics linking node p_i and node p_j that contain node p_k (62). Closeness Centrality Analysis Closeness centrality reflects the extent to which an influencing factor is not controlled by other factors. It is derived by calculating the sum of the shortest distances between a certain node and all other nodes in the entire network. When the distance between a node and the other nodes in the network model is short, the node is highly capable of obtaining and conveying information rather than being easily affected by other nodes. The value can be normalized by using the maximum possible distance between any two nodes in a network of n nodes. This value is n - 1. More precisely, the normalized closeness of node p_k is given by (61): C_C(p_k) = (n - 1)/Σ_{i=1}^{n} d(p_i, p_k), where d(p_i, p_k) is the length of the shortest path between node p_i and node p_k.
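For readers who wish to reproduce these measures outside Ucinet 6.0, the quantities defined above can also be computed with open-source tools. The following sketch is illustrative only: the factor names and 0-4 scores are hypothetical placeholders rather than data from this study, and the matrix is dichotomised so that the measures follow the unweighted definitions given above.

```python
# Illustrative sketch (not the Ucinet 6.0 workflow used in this study): compute network
# density and the three centrality measures from an expert-scored influence matrix.
# Factor names and scores below are hypothetical placeholders.
import numpy as np
import networkx as nx

factors = ["mandatory_policy", "market_demand", "component_price", "strategic_objectives"]
scores = np.array([            # entry [i][j] = influence of factor i on factor j (0-4 scale)
    [0, 3, 2, 4],
    [0, 0, 3, 3],
    [0, 2, 0, 3],
    [0, 0, 1, 0],
])

# Dichotomise: any non-zero score counts as a directed tie, matching a(p_i, p_k) = 1
# if and only if the two nodes are connected.
G = nx.from_numpy_array((scores > 0).astype(int), create_using=nx.DiGraph)
G = nx.relabel_nodes(G, dict(enumerate(factors)))

density = nx.density(G)                              # M / (n * (n - 1)) for a directed network
out_degree = nx.out_degree_centrality(G)             # how many factors a factor influences
in_degree = nx.in_degree_centrality(G)               # how many factors influence it
betweenness = nx.betweenness_centrality(G)           # bridging role between other factors
in_closeness = nx.closeness_centrality(G)            # in-closeness: how easily the node is reached
out_closeness = nx.closeness_centrality(G.reverse()) # out-closeness: how easily it reaches others

print(f"network density = {density:.3f}")
for f in factors:
    print(f"{f}: out-degree={out_degree[f]:.2f}, in-degree={in_degree[f]:.2f}, "
          f"betweenness={betweenness[f]:.2f}, in-closeness={in_closeness[f]:.2f}, "
          f"out-closeness={out_closeness[f]:.2f}")
```

Because the matrix here is dichotomised and hypothetical, the numerical values will differ from the weighted Ucinet 6.0 output reported below; the sketch only illustrates how the measures are obtained.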
RESULTS AND DISCUSSION Based on the results of the consistency analysis, this study optimizes the scoring data of the experts and obtains a social network analysis matrix (adjacency matrix) of the factors affecting the decision-making mechanism of contractors when it comes to opening a prefabrication factory. Network Density Analysis Results Based on the adjacency matrix of influencing factors, Ucinet 6.0 was used to visualize the network association of influencing factors as shown in Figure 1. In Figure 1, there are directional correlations between the influencing factors. The relationships among the influencing factors are indicated by arrows, and the number on a specific arrow represents the degree of influence for each node (63). According to the formula, the network density of the model is 1.4620, indicating that the network graph of influencing factors for the decision-making mechanism regarding the opening of prefabrication factories by contractors has a high density (generally, 0.50 is used as the average standard for measuring density). In other words, the network graph of influencing factors changes as a whole with a change in any two factors. Therefore, through the analysis of network density, the change in the relationship network caused by the change of each influencing factor can be identified. We can then modify the decision-making mechanism for contractors when deciding whether to open a prefabrication factory. Degree Centrality Analysis Results The visualization result for degree centrality analysis is shown in Figure 2. The larger the node in Figure 2, the higher the degree centrality of the node. Degree centrality is measured by node in-degree and node out-degree. Node in-degree indicates the number and correlation of other influencing factors directly affecting this factor, and node out-degree indicates the number and correlation of other factors that are directly affected by this influencing factor. Based on the degree centrality analysis results of the influencing factors, the 19 influencing factors are sorted in descending order based on their level of out-degree as shown in Table 2. They can be divided into four categories according to their levels of out-degree and in-degree: a) The out-degree of an influencing factor is large and its in-degree is small. This means that the influencing factor can easily affect other factors, but is not easily affected by other factors (64). It is a spontaneous influencing factor and is the source of influence in the relationship network. b) The out-degree of an influencing factor is small and its in-degree is large. This means that the influencing factor does not easily affect other factors, but is easily affected by other factors. c) The out-degree of an influencing factor is large and its in-degree is large. This means that the influencing factor easily affects other factors and is also easily affected by other factors. d) The out-degree of an influencing factor is small and its in-degree is small. This means that the influencing factor neither easily affects other factors nor is easily affected by other factors. The out-degree and in-degree of such influencing factors are relatively small. This means that such factors do not easily affect other factors and are not easily affected by other factors, meaning that they are independent control factors.
Such factors include the innovative potential of a contractor, construction quality, availability of priority land supply, and economic incentives for manufacturing. To improve the role of these factors, it is necessary to start from the factors themselves, such as promoting the development of innovation capabilities and improving innovation through incentives such as efficiency measures. Influencing Factors With a Higher Out-degree and Lower In-degree According to the research findings, the influencing factors with higher out-degree and lower in-degree include mandatory implementation policy, market demand, perception of contractors, and professional competence of contractors. Among them, the mandatory implementation policy has the largest out-degree and the smallest in-degree, indicating that this factor is only subject to government management and regulation as an external influence. It can have an impact on most of the factors related to the decision-making mechanism of contractors when opening prefabrication factories. Generally, market demand is only affected by policy environment factors, which will affect the decision-making of the contractor when it comes to factory construction. The perception of the contractor is generally affected by the mandatory policy environment, so its in-degree is relatively small. However, it will have a limited impact on the internal environmental factors of the contractor enterprise, such as the strategic goals of the contractor and the willingness of the contractor to build a factory, so the out-degree is relatively large. The professional competence of the contractor is an attribute of the contractor and is generally only affected by the internal environment of the enterprise, but it will have an impact on the developer and the market environment. For example, the stronger the level of technical competence when it comes to prefabricated construction for the contractor, the more it can promote the acceptance by developers of prefabricated buildings and market recognition of prefabricated buildings. Many cities in China are facing challenges associated with low-carbon transformation (65). As a significant contributor of carbon emissions, the construction industry has been subject to a series of policies promulgated by local governments which encourage them to be more industrialized (66). Influencing Factors With a Lower Out-degree and Higher In-degree According to the research findings, the factors associated with small out-degree and large in-degree include the strategic objectives of the contractor, expected economic benefits for the contractor, precision manufacturing, size of contractor enterprise, industrial support, and willingness of the contractor. Such influencing factors do not typically affect other influencing factors, but they are easily affected by other factors. For contractors, their strategic objectives determine their development route, but this is easily affected by changes in mandatory policy and market demand. On the other hand, market demand and mandatory policy do not change with the strategic objectives of the contractor. One of the purposes behind the willingness of a contractor to construct a factory is to reduce the cost of the prefabricated project and obtain more economic benefits. The economic benefits are also affected by the market, technology and policy environment. 
Some internal environmental factors such as the strategic objectives of the contractor, the expected economic benefits of the contractor, precision manufacturing, size of the contractor enterprise, and the willingness of the contractor to build a factory are easily influenced by policy and market factors. However, they are unlikely to change policy and market factors. Industrial support refers to the upstream enterprises related to the prefabricated component industry within the region. The strength of industrial support is affected by many factors, such as raw material supply conditions of prefabricated components, transportation conditions of prefabricated components, and surrounding production environment conditions. Policy and market factors directly affect related industrial support and the expected economic benefits of contractors after building factories. Precision manufacturing of the contractors is affected by the size of the contractor enterprise and professional competence of the contractor. Influencing Factors With a Higher Out-degree and Higher In-degree According to the research findings, the factors associated with large out-degree and large in-degree include developer acceptance of prefabricated buildings, price of the precast concrete components, industrial chain maturity, competitive pressure, and maturity of standards. Developer acceptance of prefabricated buildings directly affects the behavioral decisions of contractors in deciding whether to open fabrication factories. It is also affected by mandatory implementation policy, market demand, and the professional capabilities of contractors. For the contractor, the establishment of prefabrication factories is conducive to improving industrial chain maturity, so as to meet market demand and increase its market competitiveness. It also conforms to national policy guidance for the promotion of prefabricated construction. Developer acceptance of prefabricated buildings not only affects the size of the contractor enterprise and strategic objectives of the contractor, but is also affected by policy and market environment factors. Competitive pressure mainly affects the enterprise's internal environment, such as the strategic objectives of the contractor and the innovative potential of the contractor. It is also affected by the policy environment and the market environment. The price of precast concrete components affects the expected economic benefits of the contractor and the willingness of the contractor to open prefabricated factories. It is closely related to the market and policy environment. With the development of prefabricated buildings, relevant standards, and specifications are constantly being improved. With standards and specifications, a market environment and technological environment can be developed. The formulation of the corresponding rules is also beneficial to contractors when establishing prefabrication factories. Influencing Factors With a Lower Out-degree and Lower In-degree According to the research findings, the factors associated with small out-degree and small in-degree include innovative potential of the contractor, construction quality, availability of priority land supply, and economic incentives for manufacturing. To improve the role of these factors, it is necessary to start from the factors themselves, such as promoting the development of contractor innovation capabilities and improving innovation and efficiency incentives for contractors. 
It is also important to strengthen the implementation of preferential land supply and economic incentive policies for the manufacturing industry. Honorary incentive policies for the quality construction of high-quality projects should also be encouraged. Betweenness Centrality Analysis Results The visualization results of the betweenness centrality analysis of influencing factors are shown in Figure 3. Based on the analysis of the betweenness centrality of the influencing factors, the 19 influencing factors are sorted according to their level of betweenness centrality as shown in Table 3. The average value of the betweenness centrality is 4.789. Overall, the betweenness centrality scores of the influencing factors are highly polarized. Among them, the strategic objectives of contractors, industrial support, developer acceptance of prefabricated buildings, expected economic benefits of contractors, and competitive pressure are higher than average. This indicates that these factors have strong control capabilities and serve as a bridge for other influencing factors to generate multiple correlations. The betweenness centrality of maturity of standards, professional ability of contractors, construction quality, contractor innovation ability, market demand, contractor awareness of building factories, contractor willingness, manufacturing economic incentives, availability of priority land supply, and mandatory implementation policies is lower than the average value. This indicates that the conduction effect of these factors in the network relationship is weak. Among them, the betweenness centrality of policy factors is 0 or almost 0, which means that almost no relationship between any two influencing factors is transmitted through these factors, so they are at the edge of the network and are hardly connected to other factors. Closeness Centrality Analysis Results The visualized results of the closeness centrality for influencing factors are shown in Figure 4. According to Table 4, the closeness centrality of the influencing factors can be divided based on two indicators: in-closeness centrality and out-closeness centrality. In-closeness centrality means how easy it is for other nodes to reach the node, while out-closeness centrality refers to how easy it is for the node to reach other nodes. These two representations are the reciprocal of the sum of the shortest distances. In general, low in-closeness centrality and high out-closeness centrality indicate that it is not easy for other nodes to reach this node. However, it is easier for this node to reach other nodes. Therefore, the independence of resource output will be high at the edge of the network if the influencing factors have low in-closeness centrality and high out-closeness centrality. High in-closeness centrality and low out-closeness centrality mean that it is easier for other nodes to reach this node. It is more difficult for this node to reach other nodes. Therefore, influencing factors with high in-closeness centrality and low out-closeness centrality mainly depend on the resource input of other subjects in the network and lie in the central part of the network. Based on the analysis results of influencing factors regarding closeness centrality, the in-closeness centrality and out-closeness centrality of the 19 influencing factors are summarized in Table 4. For mandatory implementation policies, the conclusion is that in-closeness centrality is low and out-closeness centrality is high.
This shows that this influencing factor has high independence from the output of resources at the edge of networks. The strategic objectives of the contractor, industrial support facilities, expected economic benefits for the contractor, precision manufacturing, size of contractor enterprise, the price of precast components, and competitive pressure have relatively high in-closeness centrality and low out-closeness centrality. This means that they mainly depend on the input of resources from other subjects in the center of the network. There are 11 factors including contractor willingness to build prefabrication factories, construction quality, contractor acceptance of prefabricated buildings, maturity of standards, contractor innovation ability, industrial chain perfection, contractor professional competence, contractor perception of prefabrication mode, market demand, availability of land supply, and economic incentive measures that have relatively low in-closeness centrality and out-closeness centrality. It has been demonstrated that these influencing factors are relatively independent in the transmission of resources and are not easily controlled by other influencing factors and therefore they are at the edge of the network. DISCUSSION Based on the analysis results, four key influencing factors can be obtained through a comprehensive analysis of node centrality, betweenness centrality, and closeness centrality. More details are explained below. It can be seen from the results that the mandatory implementation policy has the largest out-degree node as well as larger out-closeness centrality, which shows that this influencing factor tends to influence other factors instead of being easily controlled by other factors. Furthermore, it has a strong ability to dominate other influencing factors. Therefore, mandatory implementation policy can be considered as a key influencing factor in social networks as well as a source which is related to general influencing capability in the network. When node in-degree for mandatory implementation policy is 0, it means that the factor is not influenced by other factors. Mandatory implementation policies can only be decided by functional departments within national and governmental organizations. For example, in 2017, "Suggestions for the implementation on vigorously developing prefabricated buildings" aims to focus on promoting prefabricated buildings and developing new methods of construction. The price of precast concrete components has the largest node centrality and large in-closeness centrality, which indicates that there are many factors closely connected with this factor. It takes the shortest distance for other influencing factors to reach this influencing factor in the social network. Therefore, the price of precast concrete is a key influencing factor in a social network. Meanwhile, price changes in precast concrete components will have a direct impact on capacity as well as market demands of prefabricated components companies. When starting construction, companies will consider the market price of precast concrete components and forecast expected economic benefits after the construction as references for planning construction project sizes. Moreover, the price of precast concrete components is also greatly correlated with transportation costs (67). Relevant research has shown that transportation costs in the suburbs are higher than those in urban areas (68)(69)(70). 
Therefore, the location where prefabricated components are constructed has a direct link with the economic benefits after construction. Market demand has large betweenness centrality with low node centrality and closeness centrality. Therefore, it easily influences other factors and also has a strong ability to control other influencing factors instead of being easily influenced by them. This makes it a key influencing factor relating to networks. In the context of mandatory implementation policies, developers have started to develop more prefabricated building projects which lead to an increase in demand for prefabricated components. Construction market demand led by developers can have a positive and direct impact on opening prefabrication factories for contractors. The higher the developer acceptance of prefabricated buildings, the greater the demand for prefabricated components in the market. With the pursuit of economic benefits (63) as the aim, contractors will manufacture more prefabricated components based on market demand so as to meet the needs of developers. Market demand also has a positive impact on industrial chains. Relevant studies have demonstrated that if a company functions in a complex and uncertain business environment, the market demand of suppliers will have a direct and positive impact on the company industrial chains and enhance company performance (71,72). For contractors, establishing factories for prefabricated components refers to a key link in the improvement of their own industrial chains for prefabricated buildings. After completing the factories, two important points which refer to both production and sales will be established in the industrial chains for prefabricated buildings, thus contributing to the development of companies. The method of lean management by contractors can be applied to production as well as sales to help effectively promote the development of enterprises. Contractor strategic objectives have maximum betweenness centrality as well as in-closeness centrality with higher node centrality at the same time. This indicates that the factor tends to be influenced by other factors and easily influences other factors as well, thus having a "mesomeric effect" several times and becoming the "bridges" in social networks. Therefore, it is a key influencing factor in social networks. In the context of advancing economic globalization, the selection of development strategies becomes difficult for many companies. However, diversification and specialization are the main business strategies at present. When it comes to development routes of prefabricated buildings for contractors, establishing factories for prefabricated parts, expanding the business scope for companies, and improving specialization in prefabricated buildings will be helpful. In doing so, contractors can progress along the development path for projects involving general contracting as well as EPC which can integrate R&D, design, manufacturing, purchasing, and construction. CONCLUSIONS Prefabricated construction has many advantages compared with traditional construction methods. However, the contractor may encounter difficulties in obtaining a steady supply of prefabricated components. Establishing self-owned prefabrication factories is a potential solution to solve this problem. The aim of this paper is to explore the key factors for contractors to open prefabrication factories. 
Relevant data were collected from questionnaires and further analyzed using Ucinet 6.0 to obtain the adjacency matrix and visual models of influencing factors. Using the social network analysis method, degree centrality analysis, betweenness centrality analysis, and closeness centrality analysis were carried out on the influencing factors. The analysis results revealed that mandatory implementation policy, the price of precast concrete components, market demand, and contractor strategic objectives were the key factors that influence the establishment of prefabrication factories by contractors. The results of this study contribute to revealing the potential mechanism by which contractors open prefabrication factories, thus helping to reduce carbon emissions and promote sustainable development. However, this study also has a few limitations. For example, the research data were collected only from Guangdong province. Future research can be carried out across the country. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s. AUTHOR CONTRIBUTIONS PX and JiasZ: conceptualization. JiaZ: methodology. JianZ: validation. PX, ZW, MA-A, and JiasZ: writing, reviewing, and editing-original preparation. All authors have read and agreed to the published version of the manuscript.
2022-02-03T14:34:47.058Z
2022-02-03T00:00:00.000
{ "year": 2022, "sha1": "f411077ce1e95f05466b38bb0f0149d8f464bcbd", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2022.837350/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "f411077ce1e95f05466b38bb0f0149d8f464bcbd", "s2fieldsofstudy": [ "Engineering", "Business" ], "extfieldsofstudy": [ "Medicine" ] }
8484320
pes2o/s2orc
v3-fos-license
The Comet Assay and its applications in the field of ecotoxicology: a mature tool that continues to expand its perspectives Since Singh and colleagues, in 1988, launched to the scientific community the alkaline Single Cell Gel Electrophoresis (SCGE) protocol, or Comet Assay, its uses and applications have been increasing. The thematic areas of its current employment in the evaluation of genetic toxicity are vast, either in vitro or in vivo, both in the laboratory and in the environment, terrestrial or aquatic. It has been applied to a wide range of experimental models: bacteria, fungi, cell cultures, arthropods, fishes, amphibians, reptiles, mammals, and humans. This document is intended to be a comprehensive review of what has been published to date in the field of ecotoxicology, aiming at the following main aspects: (i) to show the most relevant experimental models used as bioindicators both in the laboratory and in the field. Fishes are clearly the most adopted group, reflecting their popularity as bioindicator models, as well as a primary concern over the health of the aquatic environment. Amphibians are among the most sensitive organisms to environmental changes, mainly due to an early aquatic-dependent development stage and a highly permeable skin. Moreover, in the terrestrial approach, earthworms, plants, or mammals are excellent organisms to be used as experimental models for the genotoxic evaluation of pollutants, complex mixtures of pollutants, and chemicals, in both the laboratory and the natural environment. (ii) To review the development and modifications of the protocols used and the cell types (or tissues) used. The most recent developments concern the adoption of the enzyme-linked assay (digestion with lesion-specific repair endonucleases) and the prediction of the ability to repair oxidative DNA damage, which is becoming a widespread approach, albeit challenging. For practical/technical reasons, blood is the most common choice, but tissues/cells like gills, sperm cells, early larval stages, coelomocytes, liver or kidney have also been used. (iii) To highlight correlations with other biomarkers. (iv) To offer constructive criticism and summarize the need for protocol improvements for future test applications within the field of ecotoxicology. The Comet Assay is still developing and its potential is yet underexploited in experimental models, mesocosms or natural ecosystems. Introduction The extraordinary growth in the chemical industry during the second half of the twentieth century has led to the appearance in nature of thousands of new products every year, a large percentage of which have significant biological effects. The presence in the environment of xenobiotics that are biologically active and difficult to break down represents a degree of stress that is frequently unacceptable for living organisms and that is also expressed at the ecosystem level. Both direct and indirect toxic activity can, in certain circumstances, be an important risk factor for the human population as well. The usual way to approach ecotoxicity testing, according to relevant EPA and OECD guidelines for the testing of chemicals (for example, in the context of the REACH regulation) or of veterinary drugs, is the use of well-defined tests, in which an array of selected species, representing the main trophic levels, are exposed to a single pollutant under controlled laboratory conditions.
Such a standardized approach is necessary to acquire information in a relatively short time, to gather data easy to compare and to interpret and, of course, for regulatory purposes. However, extrapolation to real world is challenging if at all feasible. Models to study environmental toxicity are a necessary compromise between the control of experimental parameters (through the use of lab-reared substitute species and the setting of a thoroughly controlled exposure scenario) and realism (field or semi-field studies). An entirely different approach is based on the use of native species, which essentially considers pollution as a complex situation and therefore implies a more holistic interpretation of the real conditions of exposure in the field. This kind of study includes the capture of animals and/or the collection of plants, water or soil samples on the field. This approach allows considering interactions among pollutants and also homeostasis. Life-term exposure occurs in a natural context, allowing the action of such modulating factors as discontinuous pattern of pollution, reduction of the animal activity or sheltering. Interpretation of the results, on the other hand, may be particularly difficult in face of the many constraints and confounding factors of the natural environment . The term mutagen refers to a substance that induces transmissible changes in DNA structure (Maurici et al., 2005), involving a single gene or a group of genes. Genotoxins are a broader category of substances which induce changes to the structure or number of genes via chemical interaction with DNA and/or non-DNA targets (Maurici et al., 2005). The term genotoxicity is generally used unless a specific assay for mutations is being discussed. A large number of assay systems have been established for the measurement of genetic toxicity of chemical and physical agents. The Comet Assay, or Single Cell Gel Electrophoresis (SCGE), is a standard method for determining in vivo/in vitro genotoxicity. It offers a simple way of evaluating the damage caused by a clastogenic agent by measuring breaks in the DNA chain of animal and plant cells. One of the most striking features of the Comet Assay is the versatility, which allows its application to a wide array of different cell types and matrices. This characteristic, as well as its sensitivity, makes it especially well-suited for ecotoxicological studies, both in the terrestrial and the aquatic compartment. Although, for different reasons, water has been a privileged scenario for the pioneering studies on environmental genotoxicity, soil remains the primary way of entry into the environment for a number of pollutants, going from agricultural pesticides to veterinary drugs. As a consequence, testing species representative of the trophic chain in both compartments is relevant and necessary to thoroughly assess the genotoxic effects of environmental pollutants. In either case, it is clear that in the last decades the Comet Assay has been applied to a wide range of scenarios, species and ecogenotoxicity assessment approaches. As such, the present paper primarily aims to critically reviewing the application and technical developments of this versatile protocol in the context of ecotoxicology. Experimental Models Amphibians Amphibians are among the most sensitive organisms to environmental changes, mainly due to an early aquaticdependent development stage and a highly permeable skin. 
As such, they have been proposed as bioindicators of environmental contamination (Gonzalez-Mille et al., 2013). Environmental contaminants are pointed out as the primary cause of the decline of amphibian populations, hence the importance of evaluating exposure and sublethal effects in amphibian monitoring programs (Gonzalez-Mille et al., 2013). Nonetheless, the application of the Comet Assay in ecotoxicological studies involving these organisms is relatively new. The first work reported dates from 1996 (Ralph et al., 1996). Since then, a number of studies have been conducted that apply the Comet Assay to amphibian cells in adult and larval stages of several species, mainly Lithobates clamitans and Xenopus laevis. These studies focused mainly on the determination of the effects of exposure to several contaminants, such as, for instance: herbicides (Clements et al., 1997; Liu et al., 2006, 2011; Yin et al., 2008; Meza-Joya et al., 2013), pesticides (Feng et al., 2004; Yin et al., 2009; Ismail et al., 2014) and other xenobiotics such as methyl methanesulfonate (Ralph et al., 1996; Ralph and Petras, 1998b; Mouchet et al., 2005a). Reports on the effects of exposure to fungicides (Mouchet et al., 2006a), metals (Wang and Jia, 2009; Zhang et al., 2012), petrochemical contaminants (Huang et al., 2007), Persistent Organic Pollutants (POPs) (Gonzalez-Mille et al., 2013), ethyl methanesulfonate (Mouchet et al., 2005a), benzo(a)pyrene (Mouchet et al., 2005a), sulfur dyes (Rajaguru et al., 2001), antibiotics (Banner et al., 2007; Valencia et al., 2011), and dimethyl sulfoxide (DMSO) (Valencia et al., 2011) may also be found. Additionally, the biomonitoring of contaminated sites using the Comet Assay in amphibians has also been performed, namely of chemically polluted lakes (Erismis et al., 2013), coal mines (Zocche et al., 2013), waste dumping sites (Maselli et al., 2010), dredged sediments (Mouchet et al., 2005b), polluted water bodies (Ralph and Petras, 1997, 1998a) and residues from municipal solid waste incineration (Mouchet et al., 2006b). Studies have also been reported in which sperm cells (Shishova et al., 2013) and the effects of exposure to electromagnetic fields (Chemeris et al., 2004) were assessed by the Comet Assay. Generally, studies are conducted in vivo and erythrocytes are the cell type most commonly used.
Piscine Models
Historically, fish are closely linked with the transposition of the Comet Assay to the field of environmental toxicology, since they are among the first animal models to which the technique was adapted as a biomonitoring tool to assess the genotoxicity of contaminants on wildlife. A pioneering application was carried out by Pandrangi et al. (1995). This study examined the effects of toxic wastes accumulated in the sediment of the Great Lakes (Canada), and the sentinel species selected were the brown bullhead (Ameiurus nebulosus) and the common carp (Cyprinus carpio). The alkaline procedure developed and reported by Singh et al. (1988) was successfully adapted to fish erythrocytes, albeit with the introduction of a few modifications. The authors concluded that the assay "is extremely sensitive and should be useful in detecting DNA damage caused by environmental contaminants." Since 1995, this premonitory statement has been recurrent and increasingly reinforced by an array of scientific publications, exploring a wide diversity of approaches, viz.
in vitro (Kienzler et al., 2012), ex vivo, in vivo (Palanikumar et al., 2013), and in situ (Srut et al., 2010) exposures, as well as surveying wild native specimens (Laroche et al., 2013). To date, more than 300 articles have been published addressing DNA integrity in fish cells through the Comet Assay, making fish by far the most adopted animal group in the framework of environmental health assessment. Furthermore, in recent years we have witnessed an even greater profusion of publications. In 2013, for instance, 43 scientific articles were published (according to a literature search on PubMed) evaluating DNA damage by Comet Assay in piscine models (including fish cell lines) exposed to various potentially genotoxic agents. This vast utilization of fish should also be regarded as reflecting a primary concern of genetic ecotoxicologists over the health status of aquatic ecosystems. As further evidence of the Comet Assay's popularity as a tool for detecting DNA strand breaks in fish (along with other aquatic animals), it should be underlined that this subject has been periodically reviewed, in 1998 (Mitchelmore and Chipman, 1998), 2003 (Lee and Steinert, 2003), and 2009 (Frenzilli et al., 2009). It is well established that the Comet Assay is applicable to virtually all species. A clear demonstration of this polyvalence is the finding that, since 1995, this assay has been successfully adapted to more than 90 fish species. This wide range of species includes mostly bony fish (Class Osteichthyes), both ray-finned fishes (Subclass Actinopterygii), the overwhelming majority of cases, and lobe-finned fishes (Subclass Sarcopterygii) like Arapaima gigas (Groff et al., 2010). The jawless fish (Class Agnatha) are represented by an interesting study with the sea lamprey (Petromyzon marinus) describing the relationship between sperm DNA damage and fertilizing ability (Ciereszko et al., 2005), while cartilaginous fish (Class Chondrichthyes) are completely unexplored. Bearing in mind that the Comet protocol requires very small cell samples, the technique has proved suitable for a broad variety of fish sizes, from very small fish (e.g., the mosquitofish Gambusia holbrooki; Ternjej et al., 2010), and even fingerlings (e.g., the milkfish Chanos chanos; Palanikumar et al., 2013), up to bigger species like the conger (Conger conger; Della Torre et al., 2010).
Bivalves and Other Molluscs
In recent years, the application of the Comet Assay in molluscs has been springing up. These organisms have long been regarded as prime subjects in biomonitoring programmes worldwide, especially, albeit not exclusively, in aquatic ecosystems. Bivalves, in particular, receive special attention both as sentinel and toxicity-testing subjects, and a large array of literature has been published in the last few years. Among these, mussels (Mytilus spp.) have become one of the most important targets when researching marine genotoxicants using the Comet Assay (in large part owing to their worldwide distribution and known sensitivity to pollutants), from substance testing to the monitoring of sediments and waters in situ and ex situ and even recovery assessment following oil spills (Thomas et al., 2007; Almeida et al., 2011; Fernández-Tajes et al., 2011; Pereira et al., 2011; Martins et al., 2012, 2013; Dallas et al., 2013). Research on the genotoxic effects of emerging pollutants, including nanomaterials, is also arising (Gomes et al., 2013).
Other bivalves, of more local relevance, have been shown to be good candidates, such as the clam Ruditapes decussatus in SW Europe (Martins et al., 2013) and the cockle Cerastoderma edule. In freshwater environments, the green-lipped mussel (Perna spp.), the zebra mussel Dreissena polymorpha and the Asian clam Corbicula fluminea are the most common bivalves in genotoxicity assessment through the Comet Assay (Michel and Vincent-Hubert, 2012; Parolini and Binelli, 2012; Chandurvelan et al., 2013; Michel et al., 2013; dos Santos and Martinez, 2014). Gastropods take the place of bivalves in terrestrial environments, and the use of snails (like Helix spp.) as effective sentinels for genotoxicants has been demonstrated in situ (Angeletti et al., 2013).
Terrestrial Organisms
The fate and effects of pollutants on living organisms may differ in the two compartments. Soils are complex associations with a high binding capacity for both inorganic and organic molecules, which, together with certain modifications over time (e.g., aging and weathering), may modulate the biological effects of contamination. For these reasons, toxicity to terrestrial species cannot be directly extrapolated from aquatic species, meaning that specific approaches and models are needed to assess the impact of soil pollutants on terrestrial biota (Vasseur and Bonnard, 2014). The role that filtering organisms, like mussels, play in water is covered in soil by earthworms, which, in addition, are able to move around and prospect their surroundings, giving information both on the temporal (accumulation) and the spatial axis. Plants, in turn, are sessile, but expand their roots both laterally and in depth, absorbing pollutants from successive strata. The application of the Comet Assay to earthworms, and consequently the use of such extraordinary prospectors as sentinels for the presence of genotoxicants in soil, started in the nineties of the last century (Singh et al., 1988; Verschaeve and Gilles, 1995; Salagovic et al., 1996), and since then has been extensively reviewed (Cotelle and Férard, 1999; Espinosa-Reyes et al., 2010; Liu et al., 2010; Atli Şekeroglu et al., 2011; Lionetto et al., 2012; Andem et al., 2013; Vernile et al., 2013; Fujita et al., 2014; Vasseur and Bonnard, 2014; Zhang et al., 2014). Several comparative earthworm studies have been performed (Vasseur and Bonnard, 2014). Eisenia fetida and Aporrectodea caliginosa showed an equivalent sensitivity, as assessed by the Comet Assay (Klobučar et al., 2011). Fourie et al. (2007) compared the sensitivity of five earthworm species (Amynthas diffringens, A. caliginosa, E. fetida, Dendrodrilus rubidus and Microchaetus benhami) to Cd genotoxicity after a 48-h exposure. E. fetida presented the highest percentage of DNA in tail and was the second most sensitive species after D. rubidus, which showed the highest increase in DNA breaks compared with the control. Plants are also especially well suited for the ecotoxicological assessment of soils, including genotoxicity. The Comet Assay may be performed in different organs (nuclei of root cells or leaf cells), and combined, when suitable, with growth tests (Grant, 1994; Sandhu et al., 1994; Gopalan, 1999; Ma, 1999; Sadowska et al., 2001; Ma et al., 2005). However, cell lysis and release from plant cells are challenging and require special adaptations to the protocol (such as mechanical extraction of nuclei or protoplast production), which may be tissue- and species-dependent (see Costa et al., 2012a and references therein).
In general, the Comet Assay in plants is far from being as common and widespread as in animals. Genotoxicants in the terrestrial compartment have also been tracked by means of the Comet Assay using vertebrates as sentinel species, particularly birds and rodents. The ecological disaster that occurred in April 1998 at the Aznalcóllar mines, consisting of a massive toxic spill of acid waste containing metals, threatened the wildlife in the Doñana National Park in SW Spain. The presence of DNA damage was studied over 4 years by means of the Comet Assay in white storks (Ciconia ciconia) and black kites (Milvus migrans) (Pastor et al., 2001, 2004; Baos et al., 2006). The results indicate that the exposed birds had a significantly increased level of genotoxic damage compared with control animals from non-contaminated locations, that the toxic spill still appears to be affecting the wildlife 4 years after the mining disaster, and that attempts at cleaning up the waste have proved ineffective based on DNA damage detection. A study to determine DNA damage in blood cells of barn swallows (Hirundo rustica) inhabiting the Chernobyl region was carried out to evaluate whether chronic exposure to low-level radioactive contamination continues to induce genetic damage in free-living populations of animals. The results showed that Comet values in barn swallows living in areas surrounding Chernobyl are still increased when compared to swallows sampled at low-level sites, even 20 years after the accident at the Chernobyl nuclear power plant (Bonisoli-Alquati et al., 2010). Rodent species have been used as sentinels of eco-genotoxicity in a variety of scenarios. The European wood mouse (Apodemus sylvaticus) is a ubiquitous, abundant species which has been studied to assess the effects of dumping sites (Delgado et al., 2000), urban or traffic pollution, or the surroundings of an abandoned uranium mining site (Lourenço et al., 2013). In all these cases, the combination of the Comet Assay and wood mice proved to be a sensitive and reliable tool for the detection of exposure to environmental genotoxicants. The yellow-necked wood mouse (Apodemus flavicollis) is a closely related species inhabiting the regions of central and northern Europe. A study was performed in different protected areas of the Strandzha National Park in Bulgaria in 2010 and 2011. An increase in the Comet Assay parameters in the analyzed individuals of yellow-necked mouse from the Sredoka protected area was established. Those results indicated that there was genetic damage in some mice populations as a consequence of chronic contamination (Mitkovska et al., 2012). The Algerian mouse (Mus spretus) is a similar species, more frequent in southern Europe, which has also been used in different studies. A comparison was made between mice living in an industrial area in the neighborhood of Huelva city, SW Spain, and in a natural area (Doñana National Park). The results suggest that the Comet Assay in wild mice can be used as a valuable tool in pollution monitoring (Mateos et al., 2008). Genotoxicity monitoring using the Comet Assay on peripheral blood leukocytes of the Algerian mouse was carried out in Doñana Park (Spain) after the environmental disaster of the Aznalcollar pyrite mine in 1998. The mice were sampled in different areas 6 months after the ecological disaster and again 1 year later.
Results showed that in 1998 Comet parameters were increased in all the areas examined, whereas a significant decrease in the values was observed in the 1999 samples, which were collected in a riverside area subject to tide flows (Festa et al., 2003). Wild individuals of Rattus rattus and Mus musculus have also been assessed for DNA damage by the Comet Assay. A study was conducted in a coal mining area of the Municipio de Puerto Libertador, Colombia. Animals from two areas in the coal mining zone and a control area were investigated. The results showed evidence that exposure to coal results in elevated primary DNA lesions in blood cells of rodents (León et al., 2007). Meadow voles (Microtus pennsylvanicus) have been used to measure the effects of pesticide exposure in golf courses of the Ottawa/Gatineau region of Canada (Knopper et al., 2005). Ctenomys torquatus is a South American species which was used for biomonitoring in the coal region of Rio Grande do Sul (Brazil). The results of this Comet Assay study indicate that coal and its by-products induce DNA damage not only in blood cells, but also in other tissues, mainly liver, kidney, and lung (da Silva et al., 2000a,b). It is also worth noting how a multi-trophic level approach may be applied to assess the impact of toxicity on a given ecosystem. A recent example is the assessment of the effect on wildlife of radioactive materials released in 2011 during the accident at Japan's Fukushima nuclear power plant. The effects of exposure to environmental radiation were studied by means of the Comet Assay in wild boars (Sus scrofa leucomystax) and earthworms (Megascolecidae). Regions with low (0.28 μSv/h) and high (2.85 μSv/h) levels of atmospheric radiation were compared. The authors constructed a model food web featuring the wild boar as the top predator, and measured the radioactivity levels in soil, plant material, earthworms, and wild boar. The extent of DNA damage in wild boars did not differ significantly between animals captured in the two regions, but earthworms from the "high-dose" region had a significantly greater extent of DNA damage than did those from the "low-dose" region (Fujita et al., 2014).
A Methodological Overview
Amphibians
Over the years, the Comet Assay protocol has undergone some alterations; however, there is no clear evolution or tendency (see Table 1). Regarding the lysis buffer, in the first papers published by Ralph et al. (1996) and Ralph and Petras (1998b), and also by Clements et al. (1997), neither detergent (e.g., Triton X-100) nor DMSO was added to the stock solution. Later, Ralph and Petras (1997, 1998a) added these components to the lysis buffer, which made it very similar to the buffers commonly used nowadays in most of the studies published. Ever since, in most of the studies, the buffer includes these two components, with few exceptions (Chemeris et al., 2004; Valencia et al., 2011; Zhang et al., 2012; Meza-Joya et al., 2013). Additionally, some variations are also found in the composition of the lysis buffers, such as the inclusion or exclusion of some commonly used reagents, for example, the replacement of sodium sarcosinate with SDS as detergent. However, in two particular studies performed by Valencia et al. (2011) and Meza-Joya et al. (2013), a different lysis buffer and lysis protocol were used. These authors exposed the cells to a lysing solution containing proteinase K and calcium chloride before the cells were mixed with the agarose and spread out on slides.
This protocol was used in blood cells from Eleutherodactylus johnstonei to overcome the problem of lysing those cells, which were seemingly resistant to the lysis treatments commonly performed. Thus, this appears to be an important factor to consider in future studies with similar species. Regarding lysis itself, it is usually performed under alkaline conditions, using time intervals varying from 25 min to a maximum of 1 week. Until 2005, lysis was usually performed at room temperature; however, from 2006 until now it has generally been conducted at 4 °C, which is in agreement with published guidelines. The low melting point agarose concentration is usually 0.5%, but it varies from 0.4 to <1%, which limits the comparison of the results obtained in the various studies, since it directly affects DNA migration. Accordingly, the higher the agarose concentration, the lower the % tail DNA. Denaturation is generally conducted under alkaline conditions (pH > 13), from 5 to 40 min, which, once again, limits the comparison between studies, since it also affects DNA migration. As previously reported, the longer the incubation period, the higher the % tail DNA. Regarding electrophoresis, the voltage can vary between 18 and 27 V, generally at 300 mA, from 4 to 50 min. However, not all studies report the voltage gradient used (V/cm), and therefore comparison between studies is still a limitation. Generally, the variation between protocols, mainly regarding agarose concentration, denaturation and electrophoresis conditions, denotes a lack of standardization, compromising direct comparisons between studies.
Piscine Models
The wide variety of fish species addressed, tissues sampled, and experimental approaches adopted have led to a profusion of adaptations to the Comet Assay protocol (see Table 2). To date, no standardized Comet Assay procedures exist for environmental studies involving fish. In addition, a standardization of sampling protocols when using laboratory-exposed or both transplanted and wild specimens in biomonitoring studies is required (Frenzilli et al., 2009). The Comet Assay adopted in different contexts has also proved valuable in the elucidation of the mechanisms of genotoxicity and DNA repair. In this direction, the implementation of a protocol with an extra step in which nucleoids are incubated with DNA lesion-specific repair endonucleases has added greatly to the value of the Comet Assay, namely in the specific detection of oxidized bases and, thus, in identifying oxidative DNA damage as a harmful process underlying the loss of genomic integrity. The use of endonuclease III (thymine glycol DNA glycosylase, Endo III) was initially proposed by Collins et al. (1993) to specifically target oxidized pyrimidines, while formamidopyrimidine DNA glycosylase (Fpg) was first adopted by Dusinska and Collins (1996) to signal oxidized purines. The adoption of this improved procedure in the field of environmental genotoxicology using piscine models took almost one decade, since, to the authors' knowledge, it was applied for the first time in 2003 (Akcha et al., 2003). This enzyme-modified assay has attracted particular attention in recent years, being applied either in whole-organism (Tomasello et al., 2012), involving different tissues (blood, liver, and gill) (Aniagu et al., 2006), or cell line (Kienzler et al., 2012) testing.
It was concluded that the scoring of the DNA damage encompassing oxidatively induced breaks increases sensitivity (Tomasello et al., 2012) and reduces the possibility of false negative results (Guilherme et al., 2012a) when compared to the standard Comet Assay. This approach can be particularly informative when the additional breaks corresponding to net enzyme-sensitive sites are shown (Guilherme et al., 2012a). In the light of these positive outcomes, it seems clear that this specific tool has been underexploited. Another technical development concerns the adoption of Comet Assay to evaluate the DNA repair ability of a specific tissue (Collins et al., 2001), namely through the in vitro assays for nucleotide excision repair (NER) and base excision repair (BER). For these assays, a DNA substrate containing specific lesions is incubated with an extract prepared from the tissue to test. The accumulation of breaks due to the incubation with that extract is a measure of DNA repair activity in the tissue . The few studies published using this type of assay include the detection of tissue-specificities of BER activity in Xiphophorus species, showing that brain possesses higher BER activity than gill and liver (Walter et al., 2001). The other available publications resulted from the work of the same research group and concern the application of BER (Kienzler et al., 2013a) and NER (Kienzler et al., 2013b) assays in fish cultured cells. Though the previous publications recommend the adoption of these DNA repair biomarkers as a complement the more classical genotoxicity endpoints (Kienzler et al., 2013a), their application has been clearly underestimated. Blood has been, undoubtedly, the preferred tissue to perform Comet Assay in fish (e.g., Guilherme et al., 2010;Lourenço et al., 2010;Ternjej et al., 2010), mainly due to the easy sampling and availability of dissociated cells, a critical factor. All fish blood cells are nucleated which also represents an important practical advantage (comparing to mammals) for the assessment of genomic integrity. Nevertheless, other somatic tissues like liver, kidney and gills have been also frequently addressed (Guilherme et al., 2012b;Kumar et al., 2013;Velma and Tchounwou, 2013), as well as germ cells (Pérez-Cerezales et al., 2010). It is recognized that DNA strand breakage can be tissue-and cell-type-specific (Pandey et al., 2006). Hence, it is improbable that blood cells can reflect the type and extent of DNA damage occurring in other cell types. The choice of blood has been mainly determined by practical/technical reasons and rarely relied on the knowledge of a comparative performance with other target tissues. It has been stated that circulating cells are less sensitive, when compared to other types of cells (Frenzilli et al., 2009), but this is not a consensual assumption. As an example, a comparison between DNA damage in gill, kidney and blood tissues of Therapon jarbua following an exposure to mercuric chloride indicated the following order in terms of sensitivity: gill > kidney > blood cells (Nagarani et al., 2012). Guilherme et al. (2012b) stated that DNA damage in liver returned faster to the control level comparing to gills, which was regarded as an indication of a better adaptive behavior of hepatic cells, probably related with a higher capacity to maintain the genomic stability by detecting and repairing damaged DNA. 
Bivalves and Other Molluscs Haemocytes are the most common target for genotoxicity assessment in vivo and in vitro in bivalves and gastropods (see Table 3). Although collection requires some skill, obtaining haemocytes from bivalve adductor muscles or haemocoel (e.g., pericardial) in bivalves and gastropods is proved to be feasible and able to yield cells apt for the Comet Assay in both number and quality. Still, it has been noted, concerning terrestrial snails, that broken or detached epiphragms may cause significant dehydration of tissues, hampering collection of haemolymph (Angeletti et al., 2013). Altogether, it is likely that haemolymph collection needs to be properly set and tested for each target organism. Gills have also been successfully employed since cell resuspension is easy enough to be assisted by gentle tissue splicing and "soft-pipetting" followed by low-speed centrifuging (≈2000 g) to remove debris and dead cells, without the need for treatment with collagenase (see Martins et al., 2012). Still, it has been shown that the baseline DNA strand breakage may greatly differ between organs. The molluscan digestive gland, the analogous of the vertebrate liver and therefore of high relevance in toxicological studies, was shown to yield levels of single strand breakage likely too high (from autolytic processes) for a valid application of the Comet Assay without proper cell sorting and viability check (refer to Raimundo et al., 2010, in a study with the cephalopod Octopus vulgaris and Hartl et al., 2004 with the clam Ruditapes philippinarum). Recent advances have also shown the feasibility of obtaining adequate cultures of molluscan cells for in vitro studies using the Comet Assay (Michel and Vincent-Hubert, 2012) and even the possibility to cryopreserve mussel haemocytes (Kwok et al., 2013). Altogether, these advances certainly contribute to standardize the Comet Assay in biomonitoring and genotoxicity testing with bivalves and other molluscs. Terrestrial Organisms The Comet Assay in earthworms is performed on the small cells which constitute the most abundant class among the cellular population of the coelomic fluid, and that are the homologous, in worms, of vertebrate leucocytes. Cells are collected according to Eyambe et al. (1991), or by means of electric or ultrasonic stimulation. Eisenia foetida (andrei) is the most commonly used species, owing to the fact of being the one recommended by international guidelines for lethality and reproduction ecotoxicology studies; however, other species have been used, as for instance A. caliginosa (Klobučar et al., 2011), Lumbricus terrestris, L. rubellus (Spurgeon et al., 2003), D. rubidus and M. benhami (Fourie et al., 2007), among others (Vasseur and Bonnard, 2014). Performing the Comet Assay in vegetal cells, however, present some particular difficulties (Gichner and Plewa, 1998). The rigid cellulose cell walls prevent DNA from leaving the cell, and are not easily eliminated with the usual alkaline treatment; so, nuclei isolation from tissues is necessary as a first step. However, the isolation procedure (either mechanical or chemical) may produce some degree of nuclear disruption, which could in some cases constitute a serious handicap. On the other hand, the high concentration of pigments and metabolites present in photosynthetic tissues (as leaves) tends to cause further damage to the isolated nuclei. 
To avoid this concern, root apical tissue is often preferred, but, in this case, the high rate of cell division may in turn be a problem. To reproducibly perform the Comet Assay in leaves, some modifications to the standard procedure have been proposed, which include a centrifugation through a sucrose cushion to eliminate disrupted nuclei and secure a higher fraction of undamaged nuclei (Peycheva et al., 2011). Recently, protocols have been developed to perform the Comet Assay in tree cell cultures from protoplasts, following failure to obtain nude nuclei by the most common mechanical processes (Costa et al., 2012a). In spite of these difficulties, the Comet Assay has been successfully used in recent years to test the effects of Cr(VI) in Pisum sativum (Rodriguez et al., 2011), of chlorfenvinphos and fenbuconazole in Allium cepa (Türkoglu, 2012), of cadmium-zinc (Cd-Zn) interactions in the tobacco plant (Tkalec et al., 2014), or to demonstrate the correlation between the occurrence of B chromosomes and the DNA damage induced by the chemical mutagen maleic hydrazide (MH) in Crepis capillaris plants (Kwasniewska and Mikolajczyk, 2014), among others. A recent review (Ventura et al., 2013) is available. There are a variety of working protocols of the Comet Assay for both birds and mammals (see Table 4). Circulating lymphocytes are mainly used as the test cell type because of their availability and because they can be obtained by a relatively non-invasive method of sampling. As described previously, the use of lesion-specific repair endonucleases has been employed in studies with terrestrial organisms. This aspect brings to the Comet Assay a very interesting added value for targeting the routes that are acting during exposure.
Correlations with Other Biomarkers
Amphibians
The combination of the Comet Assay, to detect DNA strand breaks, with the evaluation of other biomarkers to determine the effects of contaminants in exposed organisms has been performed in many studies. Some of those studies show a positive correlation between the results given by the Comet Assay and other biomarkers. For instance, in the studies performed by Mouchet et al. (2005a,b, 2006a), a positive correlation between the detection of DNA strand breaks and micronucleus induction was observed most of the time. This result was expected, since the Comet Assay measures primary DNA damage and the micronucleus test reflects irreparable lesions that result from non-repaired or inappropriately repaired primary DNA damage, which are likely to be inherited by subsequent generations of cells. In another study, Liu et al. (2006) investigated the role of reactive oxygen species (ROS) in the herbicide acetochlor-induced DNA damage in Strauchbufo raddei tadpole liver, and the results showed a positive correlation between DNA damage and malondialdehyde (MDA) formation and a negative correlation between DNA damage and total antioxidant capability. This result showed that the herbicide acetochlor induces DNA damage through the formation of ROS. Zhang et al. (2012) conducted a study to evaluate cadmium-induced oxidative stress and apoptosis in the testis of the frog Fejervarya limnocharis, which also showed a positive correlation between DNA damage, lipid peroxides and ROS formation, and glutathione determination, showing the role of oxidative stress in damaging the DNA of these cells.
These studies show the importance of the inclusion of the Comet Assay in a battery of tests that contribute to determine the chain of events leading to the effects observed and to determine the type of damages to DNA. Piscine Models As a sign of maturity, in the last years a particular attention has been devoted to the interference of non-contamination related factors (biotic and abiotic) with the genotoxicity expression. This is a critical knowledge to allow a correct assessment of the contribution of chemical contamination to the DNA damage measured. In this direction, hypoxia, and hyperoxia, known as important stressors in the aquatic environment, were tested in Cyprinus carpio, revealing that both conditions increase oxidative DNA damage (approximately 25% compared to normoxic conditions) (Mustafa et al., 2011). Another study demonstrated that acute extreme exercise results in oxidative DNA damage in Leuciscus cephalus, suggesting that fish living in fast flowing and polluted waters are at increased risk (Aniagu et al., 2006). The effects of age, gender, and sampling period were also investigated (Akcha et al., 2004). In adult fish (Limanda limanda), DNA breaks were higher in males than in females, whereas the opposite trend was observed for juveniles. Regardless of gender, the extent of DNA damage was higher in the adult comparing to juvenile fish. It was also suggested that the formation of DNA lesions can be modulated by seasonal variables, namely those related to variations in lipid content, biotransformation activity and/or to spawning cycles (Akcha et al., 2004). It was hypothesized that anesthesia used before tissue sampling can have confounding influences on the DNA integrity evaluation. Still, Nile tilapia exposed to benzocaine showed that this anesthetic does not affect Comet Assay results (de Miranda Cabral Gontijo et al., 2003). The assumption that the Comet Assay can be successfully applied to monitor effects of environmental disturbances emerged unanimously from the majority of fish studies using this technique (e.g., Ciereszko et al., 2005;Srut et al., 2010). Tough a more skeptical perspective can detect in this unanimity a self-worth and self-legitimation positioning, it is also clear that it represents a strengthening of the goodness of the assertion. It has been suggested that the ecotoxicological consequences of a genomic instability and its correlation with DNA breaks measured by the Comet Assay deserves a special attention (Jha, 2008). To gain ecological relevance, a mechanistic association between genotoxic stress and effects at higher biological levels should be identified, contributing to predict deleterious effects mainly at population level (e.g., abundance and reproduction impairments). The controversy whether adverse effects of anthropogenic genotoxicants can be associated to the decline of fish populations has been the leitmotiv for some recent studies. A complete life-cycle test was carried out with zebrafish (Danio rerio) and the model genotoxicant (4-nitroquinoline-1oxide) seeking for a causal linkage between genotoxic effects and ecotoxicological risk (Diekmann et al., 2004a,b). It was observed a reduction of egg production, which would have led to fish extinction according to a mathematical simulation (Diekmann et al., 2004a), concomitantly with DNA damage induction (Diekmann et al., 2004b). However, this study failed on demonstrating a direct evidence that genotoxicity is functionally related to reduced egg production (Diekmann et al., 2004a). 
The assessment of the consequences of germ cell DNA damage on progeny outcomes has been regarded as a strategy to signal potential long-term effects of aquatic genotoxicants in fish, since genetic damage in such cells, if unrepaired or misrepaired, can be passed on to future generations (Devaux et al., 2011). In this direction, it was demonstrated a positive correlation between the DNA damage in sperm from parental fish (Salmo trutta and Salvelinus alpinus) exposed to the alkylating genotoxicant model methyl methanesulfonate and the incidence of skeletal abnormalities in the offspring, clearly suggesting that DNA damage had been inherited (Devaux et al., 2011). In a subsequent study, spermatozoa of Gasterosteus aculeatus were exposed ex vivo to MMS before in vitro fertilization and a relationship between abnormal embryo development in the progeny and sperm DNA damage was demonstrated . It was also revealed that sperm of Oncorhynchus mykiss maintains its ability to fertilize in spite of having DNA damage, although embryo survival was affected (Pérez-Cerezales et al., 2010). The risk evaluation of the impact of DNA-damaged germ cells in the reproduction is particularly relevant in animals with external fertilization/embryo development (Pérez-Cerezales et al., 2010), like fish, since both gametes and embryos can be directly exposed to waterborne genotoxicants. This approach can represent an additional contribution to predict the impact of DNA damage on recruitment rate, progeny fitness, and thereby, on the population dynamics. A recent multi-generation study with zebrafish (D. rerio) involving a chronic exposure to MMS demonstrated impairments in survival, growth, reproductive capacities and DNA integrity (Faßbender and Braunbeck, 2013). Furthermore, due to the transfer of mutations and inherited DNA damage to the next generation, the offspring was subject to elevated teratogenicity and mortality, pointing out a causal relationship between genotoxicity and the decline of wild populations (Faßbender and Braunbeck, 2013). Bivalves and Other Molluscs It must be noted that there are many reports showing reduced genotoxic effects of organic toxicants to molluscs through studies ex situ (Parolini and Binelli, 2012;Martins et al., 2013), which, nonetheless, does not relate with technical constraints of the Comet Assay (at least the standard protocols for the alkaline assay are proven to be perfectly effective) but rather on the mechanisms underneath the bioactivation of organic toxicants by multi-function oxidases that, in vertebrates, are responsible for the production of ROS and genotoxic metabolites (Peters et al., 2002). Nevertheless, studies in situ with bivalves, at least, often yield good agreement between Comet Assay data and background levels of mixed toxicants, especially organic Martins et al., 2012;Michel et al., 2013). Still, some authors noted the influence of environmental confounding factors, especially, season-related, highlighting increased oxidative stress and DNA strand breaks during warmer months (Almeida et al., 2011;Michel et al., 2013). The enzyme-modified Comet Assay to detect oxidative DNA damage is just starting to be applied to molluscs, in an attempt to understand the mechanisms underlying DNA damage in these organisms, a subject that still remains largely unknown. It is the case, for instance, of the work by Dallas et al. 
(2013), who failed to detect Ni-driven Fpg-sensitive (oxidative) DNA damage in the haemocytes of tested mussels, which contradicts in vitro studies with human cells (refer to Cavallo et al., 2003). In another example, Michel and Vincent-Hubert (2012) disclosed that hOGG-1 is more effective in the detection of oxidative damage than of alkylated sites (even compared to Fpg) in D. polymorpha gill cells exposed in vitro and in vivo to a known genotoxicant such as B[α]P. These apparent contradictions show just how little is known about the causes and mechanisms of DNA damage and repair in molluscs. In fact, Comet Assay data often yield contradictory or non-linear relations when contrasted with the bioaccumulation of genotoxicants and with biomarkers related to oxidative stress (such as lipid peroxidation or the activity of antioxidant enzymes), depending on substance, species, and conditions of assessment (e.g., Noventa et al., 2011; Martins et al., 2013). This, again, calls for efforts to advance the understanding of the fundamental mechanisms underlying genotoxicity in molluscs and their differences from vertebrates, for which most genotoxicity assessment approaches have been devised.
Terrestrial Organisms
E. fetida is extensively used as a compost worm because of its potential to degrade wastes, and has been reared in farms and laboratories for decades. Its continuous exposure to toxic compounds, especially those deriving from agricultural practice, may have been an evolutionary factor for the species. The selective appearance of specific metabolic pathways for the detoxification of certain compounds may also result in the activation of other genotoxicants, as has been shown in other species (Mus musculus compared with Apodemus sylvaticus; Acosta et al., 2004). On the other hand, and by a similar reasoning, worms native to polluted areas may have developed resistance to those compounds present in their environment.
Discussion and Future Perspectives
The Comet Assay presents several significant advantages over other commonly used assays for genotoxicity studies. Its applicability to both eukaryotic and prokaryotic organisms and its use in almost any cell type make this assay a very versatile test; reliability, relative rapidity in data collection and realistic correlations are characteristics also provided by this technique. However, one of the virtues of this assay is unquestionably its cost-effectiveness compared to many other techniques. The discussion about the importance of inter-specific differences in sensitivity, and on the meaningfulness of using substitute instead of native or target species, is long-lived and still alive, and concerns the core of toxicological thinking. Indeed, extrapolation is the Achilles heel of toxicology, hence the particular attention given to protocol enhancement and standardization, albeit with the caveat that each case study and each organism needs its own set of technical specifications and interpretation requirements, especially considering non-model and, moreover, native species. There is a wide variety of internal procedures among the laboratories where the Comet Assay is carried out. As underlined in a previous review article (Frenzilli et al., 2009), the development of suitable guidelines for standardizing Comet Assay protocols is imperative to achieve harmonization and inter-laboratory calibration.
This is also a critical issue for the general recognition of the Comet Assay as an environmental monitoring tool and for its integration in regulatory genotoxicological studies. Such harmonization would also allow the scientific community and the regulatory agencies to carry out meta-analyses or simple comparisons of results obtained from the literature. Although the Comet Assay has been applied in studies of amphibians, for instance, since the late 1990s, a standardized method to perform the assay and to measure and report this effect does not exist. This represents a disadvantage that limits the comparison with other studies. Despite that, the use of the Comet Assay in these organisms is increasing, although it is still limited to the detection of DNA damage. This shows that there is great potential for the development and application of this technique in ecotoxicological studies and environmental risk assessments using amphibians as bioindicator species. The elucidation of the type of DNA damage that is generated, and the accurate monitoring of DNA repair through lesion-specific enzymes during the Comet Assay protocol, will add value to this assay in future ecotoxicological studies for the assessment of exposure and effects in these organisms. Additionally, it could also help to determine the potential causes of their decline in specific environments. Despite the evidence highlighted here toward a functional association between genotoxicity measured at the individual level and a negative impact at the population level, so far DNA damage detected by the Comet Assay in fish (as well as in other animal models) has failed to garner sufficient recognition to be incorporated into national and international risk assessment protocols, even though the comparison between this and other potential biomarkers has already shown higher efficiency in the distinction between impacted and reference sites (Costa et al., 2012b). The unequivocal and convincing (mainly for public regulatory agencies) demonstration of its ecological relevance is probably the greatest challenge for the Comet Assay in the next decade (a goal extensible to the majority of biomarkers currently adopted in environmental toxicology). Another of the many technical constraints that need to be circumvented before the Comet Assay can be efficiently and widely applied to a broader range of organisms relates to the collection and nature of the samples per se. For instance, one of the major problems in terrestrial ecotoxicity testing is the large amount of test material needed to perform the Comet Assay. In the case of earthworms, a possible method to reduce the amount of test material required is to inject the test solution directly into the coelomic cavity of the earthworms; this is how the recently reported Comet Assay study of functionalized quantum dots (QDs) and cadmium chloride on Hediste diversicolor and E. fetida coelomocytes was conducted. The results demonstrated that functionalized QDs (QDNs) and cadmium chloride induced DNA damage through different mechanisms that depended on the nano- or ionic nature of Cd (Saez et al., 2014). Spiked soil should be allowed to stabilize for a sufficient period before starting the exposure test and performing the Comet Assay. This time, necessary to reach a status of equilibrium similar to that established under natural conditions, is probably too short in most studies. On the other hand, the nature and circumstances of the soil in real polluted areas may dramatically affect the bioavailability of xenobiotics.
Time and exposure to the action of weather tend to have a homeostatic effect, decreasing the access of toxicants to the internal medium of living organisms. This partially accounts for the surprisingly mild effects frequently observed in areas that chemical analyses have shown to be heavily polluted (Alexander and Alexander, 2000; Borràs and Nadal, 2004; Vasseur and Bonnard, 2014). As a consequence, experiments with spiked soil could tend to show a higher degree of toxic effects, being more sensitive but also, possibly, less realistic. Still regarding this issue, a way to avoid the large amounts of sample needed in a conventional growth test in soil consists in treating only the exposed root tips. For example, Allium cepa root tips were treated with TiO2 nanoparticle dispersions at four different concentrations (12.5, 25, 50, 100 mg/mL). The bio-uptake of TiO2 in particulate form was the key cause of ROS generation, which in turn was probably the cause of the DNA aberrations and genotoxicity (Ghosh et al., 2010; Panda et al., 2011; Pakrashi et al., 2014). Overall, these few examples clearly illustrate that the application of the Comet Assay in ecogenotoxicity assessment remains as purposeful as it is challenging. The swift integration of novel methodological improvements to the protocol, such as DNA repair enzyme modifications, into this field of research shows that ecotoxicologists are constantly improving approaches and protocols. Furthermore, it must be noted, as demonstrated here, that ecotoxicology is probably one of the most diversified and complex fields of research in which genotoxicity assessment is routinely surveyed. As such, one may expect further decades of successful, although constantly improving, application of this versatile protocol.
Analyzing and Presenting Data with LabVIEW
LabVIEW is an abbreviation for Laboratory Virtual Instrument Engineering Workbench and allows scientists and engineers to develop and implement interactive programs. LabVIEW has been specially developed to take measurements, analyze data, and present the results to the user. You determine what the device looks like, rather than the manufacturer of the device. LabVIEW has a very large library of functions and subprograms (subVIs) that can help you during your programming and that you can use without occupying extra memory. Hidden programming problems that you may encounter in traditional programming languages are less common in LabVIEW. LabVIEW also includes different applications such as serial device control, data analysis, data presentation, data storage and communication over the internet. The analysis library includes versatile and useful functions such as signal generation, signal processing, filters, windows, statistics and regressions, linear algebra and array arithmetic. Due to the graphical nature of LabVIEW, it is inherently a data presentation package. You can view the data in any form you want. Charts, graphs and user-defined graphs are among the output options that can be used. As a scientist or an engineer, you frequently measure physical quantities such as temperature, pressure, time, mass, electric current, light intensity, radioactivity, etc. You generally need to analyze and present the data. When you have large amounts of data, you need to use software to analyze and present them. LabVIEW makes these actions easy for you, because it includes the hundreds of built-in and add-on functions you need and makes it easy to create a user-friendly interface. In this chapter, we focus on data analysis and presentation.
Introduction
Almost all LabVIEW applications include 3 steps: (1) acquiring data, (2) analyzing and processing the data, and (3) presenting the data in a report or on a chart/graph (Figure 1). Acquire: NI (National Instruments) is a global leader in computer-based data acquisition. Millions of data acquisition devices have been sold by NI. LabVIEW, developed by NI, is a user-friendly programming interface and easily communicates with NI devices. Therefore, many scientists and engineers choose LabVIEW for programming and NI devices for measurements. Analyze: LabVIEW has more than 600 built-in functions for signal synthesis, frequency analysis, probability, statistics, math, curve fitting, and more. For analysis, you will frequently use the Mathematics palette and the Signal Processing palette (Figure 2). The Statistics and Histogram Express VIs are located in the Probability & Statistics subpalette of the Mathematics palette (Figure 3). In the following example, the VI simulates a DC signal with noise (Figure 4). The VI also generates a histogram and the results of a basic statistical analysis. The Mathematics palette also contains the Fitting subpalette (Figure 5). This palette contains the following fitting VIs. You can use curve fitting for several reasons, for example, to reduce noise, to find mathematical relationships among variables, or to estimate values between data samples or outside the sampled range. The following simple VI plots data and an exponential fit (Figure 6). You can use the other fitting VIs in the same manner.
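For readers who want to cross-check this kind of analysis outside LabVIEW, the basic statistics/histogram and the exponential fit described above can be reproduced in a few lines of Python with NumPy and SciPy. This is only an illustrative sketch: the synthetic noisy data, the model y = a·exp(b·x) and all parameter values are assumptions for demonstration, not values taken from the chapter's VIs.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# --- Basic statistics and histogram of a simulated DC signal with noise ---
dc_signal = 5.0 + 0.2 * rng.standard_normal(1000)   # DC level 5 V, Gaussian noise
mean, std = dc_signal.mean(), dc_signal.std(ddof=1)
counts, bin_edges = np.histogram(dc_signal, bins=20)
print(f"mean = {mean:.3f} V, std = {std:.3f} V")

# --- Exponential fit: y = a * exp(b * x) ---
def exponential(x, a, b):
    return a * np.exp(b * x)

x = np.linspace(0.0, 4.0, 50)
y = exponential(x, 2.0, 0.8) + 0.5 * rng.standard_normal(x.size)  # noisy samples

(a_fit, b_fit), _ = curve_fit(exponential, x, y, p0=(1.0, 1.0))
residuals = y - exponential(x, a_fit, b_fit)
mse = np.mean(residuals ** 2)
print(f"a = {a_fit:.3f}, b = {b_fit:.3f}, mse = {mse:.4f}")
```

Comparing the fitted coefficients and the mean squared error with the values reported by a LabVIEW fitting VI is a quick way to validate an analysis chain built graphically.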
You can use the Signal Processing VIs for spectrum analysis, signal generation, digital filtering, and data windowing (Figure 7). They are located in the Functions palette. In the Signal Processing palette, the Waveform Measurements subpalette contains the Tone Measurements and Spectral Measurements Express VIs (Figure 8). In the following example, shown in Figure 9, the Tone Measurements Express VI finds the amplitude, frequency and phase of a signal generated by the Simulate Signal Express VI. In this example, the Spectral Measurements Express VI generates the power spectrum of the signal. LabVIEW has two ways to display data in 2D. These are the Chart and the Graph (Figure 11). A Waveform Chart remembers and displays a certain number of points by storing them in a buffer; it displays newly received data in addition to the already existing data. A Waveform Graph accepts arrays of data in various forms, e.g., array, waveform, or dynamic data, and plots all the received points at once. You can visualize more than one data source on a chart or graph. In the following example, the DAQ Assistant takes data from two channels. You can see the data from all channels on a chart, as shown in Figure 12. A multi-plot chart can be displayed as an overlaid plot or a stacked plot (Figure 13). To select Stack Plots or Overlay Plots, right-click on the chart. Overlay Plots mode overlays all plots on the same y-axis. Stack Plots mode gives each plot its own y-axis. To plot y values in a chart/graph, you should wire only the y array data (y values) to the Waveform Chart or Waveform Graph. LabVIEW assumes that you sample y values at regular intervals, and thus creates x values at regular intervals. If you want to specify both x and y values for a plot, you can use an XY Graph. In the following example, we plot multiple circles in an XY Graph (Figure 14). If you want to display both analog and digital signals together in a graph, use a Mixed Signal Graph, located in the Graph palette. A Mixed Signal Graph is made by bundling multiple graphable data types. You can add a plot area from the pop-up menu of an existing plot area by selecting Add Plot Area. You can also remove a plot area by selecting Remove Plot Area. In the following example, you can see both analog and digital signals together in a Mixed Signal Graph with two plot areas (Figure 15). LabVIEW allows you to use 3D graphs to plot data in three dimensions. 3D graphs are located in Controls»Modern»Graph»3D Graph. LabVIEW offers eleven types of 3D graphs: the Scatter, Bar, Pie, Stem, Ribbon, Contour, Quiver, Comet, Surface, Mesh, and Waterfall graphs. You can see some of them in Figure 16. However, to work with 3D graphs you should know the basics of vectors and matrices. Plot Helper.vi is automatically created in the block diagram when you drop any of the 3D graphs. Plot Helper.vi is a polymorphic VI and thus it can accept Matrix or Vector inputs according to your selection (Figure 17). You can find two examples of 3D graphs below. In the first example, we created a cylinder by combining 5 circles whose z-axis values differ from each other. Note that i (the iteration number) generates the z matrix (Figure 18). The following VI generates a sphere and visualizes it in a 3D Parametric Graph (Figure 19). Here the radius of the sphere is 5 and the sphere is generated from 20 circles; a sphere is a collection of circles. You can see from the XY Graph that each circle has a different size.
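The parametric math behind the cylinder and sphere examples above can be written out explicitly. The following NumPy sketch is illustrative only: the radius of 5 and the 20 stacked circles match the description above, while the use of Python/NumPy and the number of points per circle are assumptions made for demonstration.

```python
import numpy as np

R = 5.0           # sphere radius (as in the example above)
n_circles = 20    # number of stacked circles
n_points = 100    # points per circle (illustrative choice)

theta = np.linspace(0.0, 2.0 * np.pi, n_points)        # angle around each circle
phi = np.linspace(-np.pi / 2, np.pi / 2, n_circles)    # latitude of each circle

# Each latitude phi gives a circle of radius R*cos(phi) at height z = R*sin(phi)
x = np.array([R * np.cos(p) * np.cos(theta) for p in phi])
y = np.array([R * np.cos(p) * np.sin(theta) for p in phi])
z = np.array([np.full(n_points, R * np.sin(p)) for p in phi])

# x, y, z are (n_circles, n_points) matrices, the kind of layout a parametric
# surface plot expects. Stacking circles whose radius varies with latitude is
# exactly why the individual circles in the XY view differ in size.
print(x.shape, y.shape, z.shape)   # (20, 100) for each matrix
```

Feeding these three matrices to any surface-plotting tool reproduces the same sphere that the 3D Parametric Graph renders in LabVIEW.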
Publishing information to the web
LabVIEW can publish any application to the Web with Remote Panels. Therefore, you can easily make your VI reachable as a Web page. Thus, clients can control the VI or view generated data by using their web browsers. Clients must use a version of the LabVIEW Run-Time Engine compatible with the version of LabVIEW. NI recommends that customers use the supported browser (Internet Explorer). Google Chrome version 42 and later, Mozilla Firefox 52 and later, Safari 12.1 in macOS Mojave 10.14, and Microsoft Edge are not supported browsers. Before viewing and controlling a front panel remotely, the Web Server must be enabled on the server computer where the VI or application you want to view and control is located. Follow the steps below to learn how you can do it.
1. Create a VI. We created a VI named Remote Panel Example.vi (Figure 20).
2. Open the block diagram and click Tools»Options»Web Server.
3. Under the Remote Panel Server section, check Enable Remote Panel Server (Figure 21).
5. Under the Browse Access section, enter the network name of the computer and press the Add button. The Allow viewing and controlling option must be selected (Figure 22). (Figure 23). Click OK and exit the dialog.
9. Fill in the Document title, Header and Footer sections and click Next (Figure 24).
10. Under the Save the New Web Page section, select where to save the HTML file, choose the file name, and press the Save to Disk button (Figure 25).
11. Click the Connect button in the Document URL window (Figure 26). Ensure that the default browser is Internet Explorer. If not, copy the URL address and paste it into Internet Explorer. You will see the Internet Explorer page shown in Figure 27. On this page, clients must click the Run button to control the VI from their computer.
Report generation
LabVIEW includes the Report Generation toolkit to present your data in a Microsoft Office Word and/or Excel file. To use the LabVIEW Report Generation Toolkit, it must be installed. The corresponding functions will then be located in the Functions»Report Generation palette (Figure 28).
Microsoft Office Word and Excel reports
The Report Generation palette contains many functions, so it is not easy to understand all their properties at once. We recommend that you first examine the report generation example VIs in LabVIEW (Help»Find Examples»Search). You can modify them according to your purpose. These VIs generally generate reports based on templates. Using a template allows you to generate standard reports for each execution of a VI. In the following example, we generate a report for Microsoft Office Word (Figure 29). The example draws a circle and pastes the circle into a MS Office Word document (Figure 30). You can determine the color, size, graph type, marker style, etc. by using the functions in the Word Specific palette (Figure 31). Similarly, you can use the Excel Specific palette to programmatically generate an Excel report (Figure 31). When you execute the VI, you see the resulting picture in an automatically created Word document.
HTML report
LabVIEW has the ability to programmatically create HTML reports. HTML files can be read by web browsers. We highly recommend presenting your data as an HTML file, because reading an HTML file is not affected by the version of the web browser. In contrast, the current version of Microsoft Office Word or Excel on your computer may not be compatible with the LabVIEW Report Generation toolkit you installed. The following VI generates an HTML report (Figure 32). Here, the Random Number function generates the Y array, and the X array is generated from absolute time values. When you execute the VI, the report shown in Figure 33 will be generated.
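The appeal of the HTML option described above, a self-contained report that any browser can open regardless of Office versions, can also be illustrated outside LabVIEW. The following plain Python sketch is an assumption-based illustration: the file name, table layout and simulated readings are made up for demonstration and are not part of the toolkit.

```python
import datetime
import random

# Simulated measurement: timestamped random readings (illustrative data)
now = datetime.datetime.now()
rows = [(now + datetime.timedelta(seconds=i), random.random()) for i in range(10)]

table_rows = "\n".join(
    f"<tr><td>{t:%Y-%m-%d %H:%M:%S}</td><td>{v:.4f}</td></tr>" for t, v in rows
)

html = f"""<html>
<head><title>Measurement Report</title></head>
<body>
<h1>Measurement Report</h1>
<p>Generated on {now:%Y-%m-%d %H:%M:%S}</p>
<table border="1">
<tr><th>Time</th><th>Value</th></tr>
{table_rows}
</table>
</body>
</html>"""

# Any web browser can open this file, independently of the Office version installed
with open("report.html", "w", encoding="utf-8") as f:
    f.write(html)
```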
The Report Generation toolkit also contains the Report Express VI and the MS Office Report Express VI (Figure 34). These Express VIs allow you to present data in HTML, MS Office Word, or Excel form. In the following example, we use the Report Express VI to present data in HTML format (Figures 35 and 36). You can also send data to a printer or present data in MS Office Word or Excel format with the same VI; to do this, double-click the Report Express VI and select the corresponding line from the Destination tab in the configuration window. When you execute the VI above, you will see the corresponding report. Save data You can create a folder, file, or path and write and read data by using the File I/O VIs and functions (Figure 37). LabVIEW allows you to save data in different data formats. Write Delimited Spreadsheet.vi converts a 2D or 1D array to a text string and writes the string to a new byte stream file or appends the string to an existing file. Both 2D and 1D arrays can contain strings, signed integers, or double-precision numbers. You should place Write Delimited Spreadsheet.vi outside the loop (Figure 38); putting the VI inside the loop is not a good way of using it, because LabVIEW would then open, write, and close the file on every iteration, which is bad for VI efficiency. You can also write date/time information for each data point. In the following example (Figure 39), the data consist of a time value and a random number. Read Delimited Spreadsheet.vi reads a specified number of lines or rows from a numeric text file, beginning at a specified character offset, and converts the data to a 2D, double-precision array of numbers, strings, or integers. In the following example, the VI writes and reads data by using Write Delimited Spreadsheet.vi and Read Delimited Spreadsheet.vi (Figure 40). Note that we formatted the time and random number by using Format Into String. Another way to write and read data is to use Write To Measurement File and Read From Measurement File (Figure 41). Write To Measurement File accepts only numeric or waveform data, whereas Write To Spreadsheet File can accept arrays of strings, signed integers, or double-precision numbers. We recommend that you use Write To Measurement File to write data to disk, because this Express VI allows you to save data in text (LVM), binary (TDMS), binary with XML header (TDM), and Microsoft Excel (.xlsx) formats. Write To Measurement File is an Express VI: when you double-click it, the Configure Write To Measurement File window opens (Figure 42), where you can configure the writing. As you can see in Figure 42, the Configure Write To Measurement File window lets you change many settings, and it may be difficult at first to understand how to configure this Express VI; to understand the function of each setting, we suggest that you experiment with each setting individually. In the following example we add time values (x) to the signal (random numbers, y) by using Write To Measurement File (Figure 43). The time data must be connected to the Comment terminal of Write To Measurement File. You can also save string data using Write to Text File; in the following example, the VI saves the date together with time information (Figure 44). Similar to the examples above, you can write and read data by using Write Binary File and Read Binary File (Figure 45). Binary files use less storage, so they are useful when you have large data sets. However, Write To Measurement File can save data in binary format too, so you may not need to use Write Binary File and Read Binary File (Figure 46). The following VI generates two sine waveform signals and writes and reads them (Figure 47).
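Returning to the delimited-spreadsheet example above, a rough Python equivalent (an assumption, not the LabVIEW VIs) collects all rows first and then writes the file once outside the acquisition loop, exactly as recommended.

```python
# Sketch of writing/reading tab-delimited time + random-number data once,
# outside the acquisition loop; file name and loop length are assumptions.
import csv
import random
import time

rows = []
for _ in range(100):                              # acquisition loop
    rows.append([time.time(), random.random()])

with open("data.txt", "w", newline="") as f:      # single open/write/close
    csv.writer(f, delimiter="\t").writerows(rows)

with open("data.txt", newline="") as f:           # read the values back as floats
    data = [[float(v) for v in row] for row in csv.reader(f, delimiter="\t")]
```

Keeping the write outside the loop avoids repeatedly opening and closing the file, which is the efficiency point made above.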
NI has created a technical data management (TDM) solution. TDM Streaming is located in the File I/O palette (Figure 48). NI recommends that customers use the TDMS file format because it combines the advantages of several data storage options in one file format (Table 1). You can also work with TDM and TDMS files in Excel by utilizing the free TDM Excel Add-in for Microsoft Excel (supported Excel versions: 2007 to 2016). You can find additional information at the following link [3]: https://www.ni.com/en-tr/support/documentation/supplemental/06/the-ni-tdms-file-format.html. National Instruments DIAdem DIAdem is software for managing large amounts of data for measurement data aggregation, inspection, analysis, and reporting (Figure 49). With DIAdem, information can be extracted from data efficiently. DIAdem is well adapted to LabVIEW, and you can transfer data from your LabVIEW application to DIAdem. With DIAdem, DataPlugins can be used to read, inspect, and search many kinds of custom file formats; NI supplies free downloadable DataPlugins for hundreds of the most commonly used data file formats. LabVIEW and OriginPro National Instruments engineers have created various NI LabVIEW add-ons which contain many functions and subVIs to meet the required functionality. Besides the add-ons developed by NI engineers, add-ons have been developed for some other applications, such as Origin. Origin (or OriginPro) is powerful data analysis and graphing software preferred by scientists and engineers in industry, academia, and research laboratories around the world [4]. Once the data are collected by LabVIEW, the end user will need to analyze the data and generate reports for presentation. Origin provides powerful analysis and graphing tools to reanalyze and present data, and easy communication between LabVIEW and Origin can greatly increase efficiency in data analysis and presentation [5]. There are studies in the literature using Origin and LabVIEW for this purpose [6]. Origin provides subVIs for working in the LabVIEW environment. These subVIs allow data to be transferred from LabVIEW to Origin and to be analyzed and presented in the Origin environment. The subVIs can be accessed from the folder where OriginPro is installed (Samples\COM Server and Client\LabVIEW). Also, in order to access these subVIs quickly in the LabVIEW environment, they can be copied from the installation folder and pasted into a folder named OriginPro in the vi.lib\addons folder where LabVIEW is located [7]. The OriginPro library can then be accessed from the menu opened by right-clicking on addons in the Block Diagram window in LabVIEW, as shown in Figure 50. After adding the subVIs provided by OriginPro to LabVIEW, you can see the subVIs as shown in Figure 51. There are four sections under the Origin function palette:
• OriginApp: basic VIs that handle Origin OPJ files, worksheets, and columns,
• OriginAppClassics: older VIs that existed before Origin 8 (deprecated),
• OriginWave: VIs that handle LabVIEW Waveform data,
• OriginMatrix: VIs that handle Origin matrix objects.
In the following example, measured temperature values are transferred to and plotted in OriginPro. In this example, first add OA_ConnectToOrigin.vi to the Block Diagram so that LabVIEW can connect to OriginPro. Once the link between LabVIEW and OriginPro is established, add OA_NewWorksheet.vi to the Block Diagram to open a new worksheet in OriginPro.
This VI also lets you set the name and details of the worksheet through its inputs. After that, you use OA_GetColumn.vi to select a column. Then, you send the information for this column via OA_Col-Setting.vi. You will have to pay attention to two important points while filling in the inputs of this VI. The first is the Data Format: this is where you specify the format of the data. The second is the column type: here you determine which axis the column will be used for in the chart. You can specify an axis such as X, Y, or Z, as in the example shown in Figure 52. After this process is completed, you can transfer your data to the worksheet by using OA_Col-SetData.vi. You can use Read Delimited Spreadsheet.vi from the File I/O menu to send the data to the worksheet created in the Origin environment, as in the example shown in Figure 52. Pay attention to using the correct format for the data. In addition, if you are working with single data values acquired at regular time intervals, such as the data used in this example, and you want to plot these data versus time, you will need to use OA_Col-SetEvenSampling.vi. Finally, the VI in Figure 52 plots the temperature data at 0.5 ms intervals, as shown in Figure 53, by using OA_NEWEmptGraf.vi and OA_PlotWksCols.vi. Similarly, another example is to use a template you created in OriginPro. For this, you must first create a template in OriginPro and save it in the Documents\OriginLab\2015\User Files folder on your computer; this makes it easy to access the template from LabVIEW. For an example application, a template has been prepared in OriginPro as in Figure 54. Now add OA_ConnectToOrigin.vi to the Block Diagram for the connection between LabVIEW and OriginPro. OA_AddOriginPath.vi creates the file path. Then, use OA_Load.vi to load the file, and OA_FindWorksheet.vi to select the worksheet. After this step is completed, since you defined Voltage for the X axis and Current for the Y axis in the template, you need to send your data to OriginPro accordingly. For this, you can use OA_GetColumn.vi to select a column, and you can send your data to these columns with the help of OA_Col-SetData.vi. These steps are given in Figure 55. Here, the data you previously saved in the LabVIEW environment are opened again and drawn in OriginPro [8]. Conclusion Scientists and engineers frequently need to measure physical changes, analyze them, and present the measured data. LabVIEW includes hundreds of built-in and add-on functions for the analysis and presentation of data. If you do not know LabVIEW well, you should check the examples in LabVIEW and on ni.com; this is the easiest way to learn how to analyze and present your data. LabVIEW has effective 2D (Charts and Graphs) and 3D visualization tools for data presentation. Graphs accept only array data and plot all the received points at once, whereas Charts attach received data to already existing points. When an array of points is wired to a chart or graph, LabVIEW assumes the points are equally spaced. If you also want to define the X-axis values, you should use an XY Graph. LabVIEW allows you to present your data in HTML, Microsoft Office Word, or Excel. LabVIEW also allows you to publish any application to the Web with Remote Panels, so your VIs can be reached as Web pages. You can control a remote device from your home. It is interesting, right? Generally, you want to save and read your data, and there are different ways to do this in LabVIEW.
We recommend that you use Write To Measurement File and Read From Measurement File. These support data in text (LVM), binary (TDMS), binary with XML header (TDM), and Microsoft Excel (.xlsx) formats. The DIAdem software is an NI product for managing data for measurement data aggregation, inspection, analysis, and reporting. Interestingly, DIAdem can use more than one thousand data file formats by utilizing DataPlugins. More than 500,000 clients around the world use OriginPro to import, graph, explore, analyze, and interpret their data. If you are a user of OriginPro software, you can integrate it with LabVIEW: when you install the Origin add-ons for LabVIEW, you can easily communicate with OriginPro. Thanks We thank Serdar Bölükbaşıoğlu, manager of the Ludre Software company, for his contribution.
Floating Orbitals Reconsidered: The Difference an Imaginary Part Can Make Floating orbitals for valence electrons have made cameo appearances at several stages in the history of quantum chemistry. Most often, they were considered as potentially useful basis functions and, more recently, also as muses for the development of subatomistic force fields. To facilitate computation, these orbitals are generally taken to be real spherical Gaussians. However, the computational advantages carry over to complex Gaussians. Here, we explore the potential utility of an imaginary part. Analytical equations for two mobile electrons show that an imaginary part shifts the balance between contributions to the exchange energy that favor parallel versus antiparallel electron spins. However, an imaginary part also carries a large kinetic energy penalty. The imaginary part is therefore negligible for two valence electrons, except in the case of strong core–valence exchange interactions. This consideration allows a self-consistent model for the nd2 triplet ground states of transition metal ions versus the ns2 singlet ground states of main group ions. INTRODUCTION Typically, calculations of electronic structure in wave mechanics employ linear combinations of basis functions that are centered on the atoms. Part of the rationale includes our considerable intuition about the roles of various atomic orbitals in bonding. On the other hand, because we also have considerable intuition about electron density in bonding and lone pair regions, the use of "floating" (or "distributed" or "bond") orbitals has frequently been explored on the premise that they can afford more compact and less costly basis sets. 1−8 However, because it is less straightforward to situate and size such orbitals than to solve for coefficients in large linear combinations of atomic orbitals (LCAOs), the latter approach has generally prevailed. On a separate track in computational chemistry, Monte Carlo (MC) and molecular dynamics (MD) techniques have been used with atomistic force fields to simulate atomic trajectories in molecules of ever increasing size. However, these same tools can simultaneously be used to situate and size floating orbitals. 9,10 In the same spirit, these tools have also been applied to subatomistic force fields in which valence electrons are modeled as semiclassical particles, a construct that has enabled highly efficient, turnkey simulations of chemical reactions among molecules that are intrinsically flexible and polarizable. 11 In effect, the semiclassical approach assigns valence electrons to floating orbitals with parameters that evolve according to MC or MD protocols. Thus, it seems timely to revisit the floating orbital approach. Whether atom-centered or floating, orbitals have generally been described with Gaussian functions for the ease of evaluating the integrals involved in energy calculations 12 (with cusp conditions accommodated by sums of concentric Gaussians in some cases 13,14 and by distancedependent corrections in others 15,16 ). Moreover, the employed Gaussians have generally been taken to be real. (The main exception is in ab initio descriptions of resonance states, where it has proven convenient to transform the problem by a rotation of the electron coordinates in the complex plane. 
17−22 ) When real Gaussians comprise a basis set, the wave functions are complex only to the extent that the coefficients of the linear combinations are complex and each Gaussian contributes to the wave function with a constant phase. In the semiclassical use of floating spherical Gaussian orbitals (FSGOs), there has been no phase variation at all. 23−25 Here, we consider what may be gained by lifting these severe constraints by allowing the variable a in eq 1 to be complex. In section 2, we derive the energies of electrons occupying complex FSGOs. In section 3, we explore the simplest, symmetric case of two electrons with a nucleus at the midpoint between them. This makes it easier to identify the unique features that are contributed by the imaginary parts of the orbitals. In particular, we show that the imaginary parts attenuate the contributions to the exchange energy that favor paired spins more strongly than they attenuate contributions that favor unpaired spins. In section 4, we apply these equations to He-like species, the simplest cases that involve exchange integrals. We find that the imaginary parts of the FSGOs turn out to be negligible (close to 0) compared to the real parts, which leads to the result that electron pairing is favored, consistent with the singlet 1s 2 ground states of these species. In section 5, we consider other ions with just two valence electrons. Here, we first model the shielding effects of the core electrons and show that electron pairing is still favored. In section 6, we add a model for the exchange effects of the core electrons and show that this results in an imaginary part of the FSGOs large enough to favor the triplet state. This suggests that the high-spin ground states of transition metals are stabilized by core−valence exchange interactions. Finally, we discuss the implications of these results for developing subatomistic force fields that are transferable between main group and transition metal elements. ENERGIES OF ELECTRONS OCCUPYING COMPLEX FSGOS We focus on a two-electron system because that is the simplest that includes the exchange contributions that distinguish the energies for like and unlike electron spins. Given wave functions Ψ ↓↓ and Ψ ↓↑ , respectively, these energies are where the Hamiltonian includes the quantum kinetic energy of the electrons, the repulsion between the electrons, and the attraction of the electrons to a nucleus N with nuclear charge Z N and the antisymmetric wave functions for two electrons of like spin (αα or ββ) or unlike spin (αβ or βα) are and (1, 2) respectively, where the spatial orbitals Φ a and Φ b are FSGOs (eq 1). The forms for electrons with unlike spins (eq 5) fulfill the requirement that the α spin is consistently associated with the electron at position a (top) or position b (bottom), while the β spin is consistently associated with the other position. An important consequence is that the exchange integrals for electrons of unlike spins are identically zero due to the orthogonality of spins in the cross-terms. This is physically reasonable given that electrons with unlike spins are not indistinguishable particles. 
Thus, given the symmetry of the Hamiltonian (where we now adopt the conventional compact notation with implicit electron indices), the exchange energy can be written as the difference between the like-spin and unlike-spin energies. Going forward, it is convenient to distinguish between terms arising from the kinetic, repulsive, and attractive parts of the Hamiltonian with the notation U_K, U_R, and U_A. Indicating a complex conjugate with an asterisk, defining a measure of the inequivalence of a and b, and making use of Boys' integrals (ref 12), the energies can be written in closed form. As expected, ΔU_K is positive; the antisymmetric spatial wave function (eq 4) has tighter curvature, corresponding to greater kinetic energy. Also as expected, ΔU_R is negative because the antisymmetric spatial wave function (eq 4) has depleted electron density between the centers of the two FSGOs, reducing the probability of close encounters between the two electrons. For the same reason, ΔU_A is positive when the nucleus is in the region between the centers of the two FSGOs, where the electron density is diminished. Of course, when a* = a and b* = b, these results are the same as those obtained previously for FSGOs with no imaginary part (ref 25). UNIQUE ROLE OF THE IMAGINARY COMPONENT In order to explore the difference that an imaginary component of the FSGO can make, it helps to consider the special case given by eq 24, where ℜ and ℑ are real numbers and i = (−1)^{1/2}. This equality is a reasonable approximation for two electrons in similar environments. Substituting eq 24 into eqs 15−22 yields simplified expressions (eqs 25−33). The approximation in eq 32 assumes that the vector between electrons of like spin is essentially orthogonal to the vector from the nucleus to the midpoint between the electrons. This is a reasonable approximation for electrons that are both close to a given nucleus, which are the only pairs that make significant contributions to ΔU_A anyway. For species with one nucleus and just two electrons, it is also expected that the nucleus will be located at the midpoint between the two electrons, as illustrated in Figure 1. In that case, combining eqs 30, 31, and 33, we find that, when ℑ = 0, U_ex > 0 for all ℜ and r when Z_N = 1 (see Figure 2a), and the more so when Z_N > 1 (because the positive value of ΔU_A increases with Z_N). As expected, ℑ ≠ 0 adds to the kinetic energy (eq 26) because it increases the curvature of the wave function, whereas it has no effect on the electrostatic energies (eqs 27 and 28) because they depend only on the distribution of electron density. The more interesting results are those for the exchange energy (eqs 29−33). ℑ ≠ 0 has a damping effect via the exponential in Ω (eq 29). However, this is offset in varying degrees by the exaggerating effect of ℑ ≠ 0 on ΔU_K, ΔU_R, and ΔU_A (eqs 30−33). This is especially so for ΔU_A and ΔU_R, where ℑ ≠ 0 results in terms in which F_0 has a negative argument. These terms can be large because, as is evident in eq 15 and illustrated in Figure 3, F_0(−x) becomes exponentially large with increasing x. It is also notable that the coefficients for ℑ are such that their effect is greater in ΔU_R (eq 31) than in ΔU_A (eq 33). Thus, ℑ increases the influence of the negative ΔU_R, which favors like (i.e., unpaired) spins, relative to the positive (ΔU_K + ΔU_A), which favors unlike (i.e., paired) spins. The range of U_ex < 0 when ℑ ≠ 0 is illustrated in Figure 2b,c. Of course, this influence of ℑ is unhelpful for He-like species, which are all 1s² singlets in the ground state.
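Because the displayed equations are not reproduced in this excerpt, the following is only a hedged sketch of the orbital form under discussion: a floating spherical Gaussian whose width parameter is allowed to be complex, consistent with (but not copied from) eqs 1 and 24; the normalization is omitted.

```latex
% Hedged sketch (assumption, not the paper's eq 1 or eq 24): a floating
% spherical Gaussian centered at R_a with a complex width parameter.
\Phi_a(\mathbf{r}) \;\propto\; \exp\!\left[-\,a\,\lvert \mathbf{r}-\mathbf{R}_a\rvert^{2}\right],
\qquad a = \Re + i\,\Im,
\qquad
\lvert \Phi_a(\mathbf{r})\rvert^{2} \;\propto\; \exp\!\left[-\,2\,\Re\,\lvert \mathbf{r}-\mathbf{R}_a\rvert^{2}\right].
% Only the real part survives in the density, so the electrostatic terms are
% governed by the real part alone, while the position-dependent phase carried
% by the imaginary part steepens the wave function and raises the kinetic energy.
```

On such a form, a nonzero ℑ is energetically costly unless exchange effects compensate, which matches the behavior described above.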
Thus, it is important to examine whether ℑ is sufficiently small for these species (as we do in section 4). However, a more interesting application for floating orbitals is the description of valence electrons in heavy elements. Therefore, we go on (in sections 5 and 6) to consider the influence of core electrons in ions with two valence electrons. Of these, the main group ions have low-spin ground states (e.g., 2s² and 3s² singlets) consistent with smaller values of ℑ, and the transition metal ions have high-spin ground states (e.g., 3d² triplets) consistent with larger values of ℑ. HE AND HE-LIKE IONS In these two-fold symmetric species, there are three degrees of freedom, r, ℜ, and ℑ (see eq 24 and Figure 1). We find that the kinetic energy penalty for ℑ ≠ 0 (in eq 26) is sufficiently large that the minimum of U_↑↑(r, ℜ, ℑ) occurs at ℑ ≈ 0 for all values of Z_N. Furthermore, as U_ex > 0 when ℑ ≠ 0 for all Z_N, ℜ, and r, it follows that U_↑↑ > U_↑↓ for He and all He-like ions, consistent with the known 1s² singlet ground states of these species. INFLUENCE OF CORE ELECTRONS: SHIELDING A shielding term partially negates the attraction of the valence electrons to the nucleus, with the shielding increasing as c_N increases (i.e., as the distribution of the core electrons becomes more contracted around the nucleus). The effect is to replace eq 28 and eq 33 by their shielded counterparts. Although U_A is highly sensitive to shielding (as illustrated in Figure 4), the shielding effect does not prevent the drive of U_K, ΔU_K, and ΔU_A toward small ℑ from overwhelming the drive of ΔU_R toward large ℑ, just as it did across all values of Z_N in the absence of shielding (in the previous section). Thus, it remains that U_ex > 0, which dictates a singlet ground state. Comparing the results in Figure 5 with experiment (see abstract graphic), we find that, although the valence FSGO construct greatly overestimates the relative stability of the ns² singlet ground states of main group ions, the rising trend with increasing Q_N is reproduced. On the other hand, the model does not accommodate the nd² triplet ground states of transition metal ions. INFLUENCE OF CORE ELECTRONS: CORE−VALENCE EXCHANGE The above results are obtained without considering energy contributions from core−valence exchange, and it may be that core−valence exchange is more important for transition metals, where valence electrons are going into a shell that is already partially filled in the kernel. However, disaggregating the core electrons would greatly diminish the utility of floating orbitals. Therefore, we consider whether the effect of core−valence exchange can be represented by an empirical term in U_A. Given the (1/Ω) factor shared by all the exchange terms, we expect the core−valence exchange energy to diminish exponentially with distance and with ℜ[1 + (ℑ/ℜ)²]. Thus, for exploratory purposes, we replace eq 37 with eq 39. In eq 39, the exchange term is proportional to the number of core electrons of a given spin, (Z_N − Q_N)/2, and has two scaling constants, χ_N and τ_N, that can depend on the core. In particular, it is hypothesized that χ_N is negligible when the core shells are filled (as for main group elements) and χ_N > 0 when they are not (as for transition metals). For χ_N > 0, the core−valence exchange energy clearly favors larger ℜ and larger (ℑ/ℜ)², with the latter favoring the triplet state.
Figure 6 shows that the results are strongly dependent on the shielding, with the triplet state favored only when shielding is sufficiently strong (i.e., when the value of λ = c N −1/2 is sufficiently small). Notably, although the magnitude of the stabilization of the triplet is small compared to the experimental values (see abstract graphic), the trend across the transition metals is correct, with greater stabilization of the [Ar]3d 2 triplet state for larger Z N . Given that spherical Gaussians are obviously crude descriptions of the valence orbitals and core electron distribution, the quantitative discrepancy is understandable, whereas the qualitative agreement suggests that attribution of the triplet ground state in [Ar]3d 2 ions to core−valence exchange is phenomenologically reasonable. DISCUSSION AND CONCLUSIONS Our goal has been to identify how the imaginary part of a floating orbital might be useful in describing valence electrons. We find that an imaginary part changes the balance between different contributions to the exchange energy for a pair of electrons such that the bias against the triplet state is reduced. However, due to the large contribution of an imaginary part to the kinetic energy, the imaginary part is generally negligible. This is consistent with the 1s 2 singlet ground states of all He-like ions. For more relevant ions, we consider the effect of core electrons on a pair of valence electrons. Initially considering only shielding of the nuclear attraction, we again find that the imaginary part of the floating orbital is negligible. This is consistent with the ns 2 singlet ground states of the ions of the main group elements, including qualitative trends within and between rows but not with the nd 2 triplet ground states of the ions of the transition metals. We hypothesize that the latter reflects core−valence exchange interactions, supposing that this effect is more important when the valence electrons belong to the same shell as the "outermost" core electrons. Choosing a plausible form for the core−valence exchange energy, we find imaginary parts that are sufficiently large to stabilize the triplet state when the shielding is sufficiently strong (i.e., when the core electron density is sufficiently compact). Moreover, the stabilization shows the correct trend, becoming stronger with increasing atomic number. However, the magnitude of the stabilization is small compared to experimental values. The analytical forms of the energy integrals obtained here rely on the use of spherical Gaussians for valence electron orbitals and core electron densities. As this is a crude approximation, we expect no more than qualitative insight. However, such insights have proved useful in the development of subatomistic force fields in which independently mobile valence electrons are modeled as semiclassical particles interacting with each other and with kernels via potentials that take quantum effects into account. For a number of main group elements, studies based on spherical Gaussian orbitals have provided the forms of the potentials implemented in eFF 23 and an interpretation of the potentials discovered heuristically in LEWIS. 25 The present work shows how this approach may be extended to the transition metals. 
Whereas the main group elements require only that the valence electrons have four dynamic variables (a set of three Cartesian coordinates and a real cloud size parameter), including the transition metals requires a fifth dynamic variable (an imaginary cloud size parameter). This additional dynamic variable (which is expected to be significant in magnitude only near the kernel of a transition metal) adds very modest computational overhead.
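As a purely illustrative sketch (not code from the paper, and with hypothetical names), the dynamic variables described above could be organized as follows in a subatomistic force-field implementation.

```python
# Hypothetical container for the dynamic variables of one semiclassical
# valence electron: three Cartesian coordinates plus a real cloud-size
# parameter, with an extra imaginary cloud-size parameter that matters
# mainly near transition-metal kernels (all names are assumptions).
from dataclasses import dataclass

@dataclass
class ValenceElectron:
    x: float
    y: float
    z: float
    re_size: float           # real part of the cloud-size parameter
    im_size: float = 0.0     # imaginary part; near zero for main-group elements
```

Because the fifth variable defaults to zero, a main-group simulation pays essentially no extra cost, in line with the modest overhead noted above.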
Motor demands influence conflict processing in a mouse-tracking Simon task Previous studies have shown incorrect motor activation when making perceptual decisions under conflict, but the potential involvement of motor processes in conflict resolution is still unclear. The present study tested whether the effects of distracting information may be reduced when anticipated motor processing demands increase. Specifically, across two mouse-tracking Simon experiments, we manipulated blockwise motor demands (high vs. low) by requiring participants to move a mouse cursor to either large versus small (Experiment 1) or near versus far (Experiment 2) response boxes presented on the screen. We reasoned that participants would increase action control in blocks with high versus low motor demands and that this would reduce the distracting effect of location-based activation. The results support this hypothesis: Simon effects were reduced under high versus low motor demands and this modulation held even when controlling for time-varying fluctuations in distractor-based activation via distributional analyses (i.e., delta plots). Thus, the present findings indicate that anticipation of different motor costs can influence conflict processing. We propose that the competition between distractor-based and target-based activation is biased at premotor and/or motor stages in anticipation of motor demands, but also discuss alternative implementations of action control. Goal-directed behavior requires action control, the ability that enables us to translate action-relevant information into appropriate motor responses (e.g., Verbruggen et al., 2014;Shiffrin & Schneider, 1977). Central to our understanding of action control is the key question of how decision-making and motor processes interact to optimize sensorimotor behavior (e.g., Wispinski et al., 2018;Kim et al., 2021;Cisek & Kalaska, 2010). One useful approach to tackle this question is to study behavior in conflict tasks, where participants are presented not only with relevant but also with distracting and potentially conflicting information (e.g., Stroop, 1935;Eriksen & Eriksen, 1974;Simon & Rudell, 1967). Findings from conflict task studies have shown that distracting information affects not only decision-processes involved in selecting a response, but also motor processes involved in initiating and executing a response (e.g., Servant et al., 2016;Freud et al., 2015;Buetti & Kerzel, 2009). There is still uncertainty, however, about whether and how motor processes are involved when making actions under conflict. In the present study, we aim to provide some further insights into the role of motor processes in conflict processing by investigating how increased motor processing demands influence the effect of distracting information in the Simon task with mouse movements. As elaborated in more detail within our introduction, we reasoned that the Simon effect may be paradoxically reduced with larger motor demands, because target-related stronger activity during premotor and/or motor processing could counteract distractor-based activation. In a standard visual Simon task, participants are required to make a left or right response to the identity of a lateralized target (e.g., a letter H or S) while ignoring its distracting spatial location (e.g., Hommel, 1994b;Lien & Proctor, 2000;Bausenhart et al., 2021;Hommel, 2011). 
Responses are typically faster and more accurate when target and response location are on the same (congruent trials) compared to opposite sides (incongruent trials). This so-called Simon effect has most often been observed when responses are simple key presses with the fingers of the left and right hand (e.g., Lien & Proctor, 2000;Hübner & Mishra, 2016;. However, the effect can also be reliably measured when participants use other response effectors-vocal (e.g., Treccani et al., 2017;Wühr & Ansorge, 2007), eye (e.g., Leuthold & Schröter, 2006) and foot (e.g., Janczyk & Leuthold, 2017;Miller, 2016) responses-or perform more complex, continuous movements like reaching towards left versus right response boards (e.g., Salzer & Friedman, 2020;Finkbeiner & Heathcote, 2016), or moving a mouse cursor to response boxes presented on the left versus right side of the screen (e.g., Scherbaum et al., 2010;Grage et al., 2019;Wirth et al., 2020). Many theoretical accounts of the Simon effect and other conflict effects assume that target-based information undergoes controlled processing within one route, whereas distractor-based information is processed presumably rather automatically by another parallel route (e.g., Eimer et al., 1995;Ridderinkhof et al., 1995;De Jong et al., 1994;Hübner et al., 2010;Ulrich et al., 2015;Wühr & Heuer, 2018;Kornblum et al., 1990). In essence, conflict effects emerge because distractor-based activation spills over to decisionmaking that is mainly driven by target-based activation and this activation superimposition improves (congruent trials) or impairs (incongruent trials) task performance. These accounts generally agree that activations are superimposed when selecting a response during decision-making. For example, a recently introduced model of conflict processing, the Diffusion Model for Conflict Tasks (DMC), assumes that the total response time (RT) in a trial is the result of a decision process in which activations are superimposed plus "the residual duration of all processes outside the decision process (e.g., stimulus encoding and response execution)" (p. 153 Ulrich et al. (2015)) 1 . Interestingly, however, there is also evidence that motor processes are involved in conflict processing (e.g., Lim & Cho, 2021;Buetti & Kerzel, 2009;Scorolli et al., 2015;Stürmer & Leuthold, 2003;Treccani et al., 2018;Freud et al., 2015;Miller & Roüast, 2016;Hietanen & Rämä, 1995;Hasbroucq et al., 2001). For example, EEG and EMG measures indicate that distracting information triggers motoric activation that can compete with motor activation provided by on-going decision processes (e.g., Servant et al., 2016;Stürmer et al., 2002). Note that these findings do not necessarily imply that distracting information only affects motor processes in parallel and independently from decision processes, because it is also possible that distractors produce motor activation after triggering cognitive-based response codes (cf. Valle-Inclán & Redondo, 1998;Hommel et al., 2004). Relatedly, it is similarly possible that independent Simon effects arise at both response selection and motor programming stages (e.g., Buetti & Kerzel, 2009, 2008. In any case, there are good reasons to assume that the competition between distractor-based and target-based activation might be localized during both premotor and motor processing, and that control processes also operate on motor processes. 
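As a purely illustrative toy model, not the DMC implementation of Ulrich et al. (2015), and with arbitrary parameter values, the superimposition idea can be sketched as a single accumulator whose drift combines a constant target-based component with a transient distractor-based component.

```python
# Toy sketch of activation superimposition (assumed parameters, not DMC):
# a single noisy accumulator whose drift is the sum of a constant
# target-based component and a transient distractor-based component that
# helps on congruent and hurts on incongruent trials.
import numpy as np

def simulate_trial(congruent, mu=0.5, amp=0.6, tau=60.0, bound=75.0,
                   noise=4.0, dt=1.0, max_t=1500.0, seed=None):
    rng = np.random.default_rng(seed)
    sign = 1.0 if congruent else -1.0
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        t += dt
        # transient distractor-based activation, strongest early in the trial
        auto = sign * amp * (t / tau) * np.exp(1.0 - t / tau)
        x += (mu + auto) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return t, x >= bound          # decision time, and whether the correct bound was hit

# Congruent trials should tend to finish faster than incongruent trials.
rts_con = [simulate_trial(True, seed=i)[0] for i in range(200)]
rts_inc = [simulate_trial(False, seed=i)[0] for i in range(200)]
```

In such a sketch, the early distractor-based push speeds congruent and slows incongruent trials, which is one way to produce the time-varying congruency effects discussed below.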
The goal of the present study was to examine a novel approach to elaborate on how predictable motor processing demands modulate the superimposition of activation. While some studies suggest that anticipating motor demands can influence decision-making and/or motor processing (cf. Hagura et al., 2017;Marcos et al., 2015;Morel et al., 2017;Cos, 2017), it is unclear whether and how motor demands affect performance in the presence of distracting information. To tackle this issue, we used a visual Simon task with mouse movements and compared the Simon effect in blocks in which participants had to move a mouse cursor to either large versus small (Experiment 1) or near versus far (Experiment 2) response boxes presented on the screen. In general, we reasoned that increased action control in blocks associated with high (i.e., small or far responses boxes) compared to low (i.e., large or near responses) motor demands would result in amplified target processing since participants can anticipate that more demanding movements are required to reach the action goal. Thus, motor demands could bias processing at premotor and/or motor stages of processing. Assuming that the distractor and target processes are combined when a response is selected and/or a motor response is initiated, stronger target-based activation at the stage(s) where activation-superimposition occur(s) should lead to a reduced Simon effect under high compared to low motor demands. As will be considered in the General Discussion, there are also other possibilities regarding how increased motor demands may influence the Simon effect, but for now, we focus on this simplified biased competition account. Critically, the temporal dynamics of conflict effects (including the Simon effect) make it difficult to infer the effects of experimental manipulations when looking only at mean RTs (e.g., Mittelstädt & Miller, 2020;Hommel, 1993bHommel, , 1995. For example, the visual Simon effect with horizontal key press responses is usually larger for faster than for slower responses as becomes evident from distributional analyses (e.g., Burle et al., 2013;De Jong et al., 1994;Luo & Proctor, 2020;Proctor et al., 2011;Wiegand & Wascher, 2005;Wascher et al., 2001). Specifically, delta plots display the size of conflict effects as a function of response speed by plotting the difference between congruent and incongruent mean response times (RTs) separately at RT percentile ranging from fastest to the slowest RTs (e.g., 10%, 20%, 30%). The slope of delta plots is usually interpreted as a marker of the time-course of distractor-based activation and, as illustrated in Fig. 1A, is primarily decreasing in the horizontal Simon task (e.g., Ridderinkhof, 2002;De Jong et al., 1994;Ellinghaus et al., 2017). Thus, manipulations which prolong processing duration can simply reduce the mean Simon effect because location-based activation has more time to fade out (cf., Hommel, 1994a, b;Mittelstädt et al., 2021). To see whether the motor demand manipulations produce effects beyond those explainable purely in terms of timevarying distractor processing, we compared the delta plots in the low to the high motor demand condition (cf. Mittelstädt & Miller, 2020. Specifically, an overlapping delta plot pattern would indicate that the effects can be explained based purely on the unfolding of distractor-based activation (cf. solid and dashed delta plots in idealized prediction Fig. 1A). 
However, the Simon effect might be reduced beyond what can be explained by response speed, which should shift the delta plot of the high demand condition downward relative to the delta plot of the low demand condition (cf. solid and dotted delta plots in Fig. 1A). It should be noted that decreasing Simon effects are primarily observed in the visual Simon task with key presses (e.g., Mittelstädt & Miller, 2020) and touch-based finger movements (e.g., Buetti & Kerzel, 2009), but analogous reasoning also applies if other time-varying characteristics of Simon effects are observed with mouse movements (e.g., increasing delta plots, cf. Fig. 1B). Furthermore, motor demands may also affect the time-course of distractor-based processing and hence the slope of delta plots. Thus, the general point is that interpretations based solely on mean RT may not be sufficient to rule out accounts in which the motor demand manipulation influences the Simon effect exclusively because of time-varying activations. The present manipulation of motor demands is motivated by Fitts' law (Fitts, 1954), according to which the difficulty of the motor task increases with the distance to the target and with decreasing target size. Thus, the manipulation of target size (Experiment 1) and of target distance (Experiment 2) should both affect movement time (MT) (see Footnote 2). Experiment 1 In the first experiment, we manipulated motor demands by reducing the size of the response boxes. Thus, in different blocks of trials, the response boxes were either large (low motor demand) or small (high motor demand). Fig. 1 Schematic depiction of two qualitatively different delta plot shifts of a high (slow) demand motor processing condition (i.e., dashed and dotted delta plots) compared to a low (fast) demand condition (i.e., solid delta plots), separately for generally decreasing (A) or increasing (B) delta plots. Footnote 2: After completing the present study, we became aware of a study by Wirth et al. (2020) which explored the influence of several design parameters on measures of finger-tracking performance (e.g., initiation and movement times) in a Simon task set-up. Although this study was not designed to investigate how motor demands influence conflict processing, it is interesting to note that they also manipulated the size of the response areas in one experiment, but this manipulation did not modulate the Simon effect in either mean initiation or mean movement times. Unfortunately, it is not clear whether this pattern holds when considering the whole IT and MT distributions, because they did not report any delta plot analyses. Moreover, one aspect of their experimental procedure makes it generally difficult to derive any post-hoc conclusion regarding whether and how motor demands influence conflict processing: the color filling of one of the left versus right response areas served as the target-based information, and hence varying the size of the response areas also varied the visibility of the targets. Thus, there might also be effects on target and/or distractor processing due to the rather perceptual component of their manipulation. Similarly, it is difficult to interpret their post-hoc between-experiment comparison related to conditions with different distances of response areas in finger-tracking. Here, the authors report significantly smaller mean congruency effects in movement times between the near condition of their Exp. 1 and their far condition of Exp. 2 (no delta plots are reported).
As the authors themselves concede (Wirth et al., 2020), however, the interpretation is hindered, since the impact of the other experiment-specific conditions on this comparison is unclear. In addition, there was surprisingly no evidence that the overall time to complete a trial differed between the respective conditions. Methods Participants 30 people were tested online. Data of three participants were excluded due to moving the mouse out of the starting box region before stimulus onset in over 25% of trials (for more details, see data preparation section). The final sample consisted of 27 participants (21 female, all right-handed), ranging in age from 19 to 28 years (M = 21.93) 3 . All participants gave informed consent, were tested in a single session lasting approximately 35 min, and received course credits for participation. Apparatus and stimuli The experiment was conducted online using the JavaScript library jsPsych (e.g., De Leeuw, 2015), by extending the mouse-plugin reported in Schütt et al. (2022). All visual stimuli were presented in black on a grey background. Figure 2A illustrates the stimulus display. The two stimulus letters (i.e., H and S) were randomly assigned to left-and right target responses. A starting box was presented in the center at the bottom of the screen. Two response boxes were presented to left and right upper screen positions. In high motor demand blocks, the size of response boxes was reduced by factor 2. Initiation times were calculated from the time of stimulus onset until participants left the starting box. The remaining time (i.e., until participants made a click with the mouse in a response box region) was considered movement times. Thus, overall response times reflect the sum of initiation and movement times. Note that in both experiments we also reanalyzed movement times while excluding trials in which participants paused their movements (with pauses defined as no movement for more than 50 ms during the interval between movement onset and clicking in or reaching the target region). The movement time results on both mean and distributional RT level were similar to the ones reported in the present result sections when excluding these trials, indicating that pause-and-restart movements are very unlikely to have contaminated the reported findings. Procedure Motor demands (low vs. high) were held constant within a block and alternated across sequential blocks. Half of the participants were tested with a block with low motor demand for the first block. The experiment consisted of ten blocks of trials, of which the first two were considered practice blocks and were removed from subsequent analyses. The practice blocks consisted of 20 trials each, whilst the remaining blocks consisted of 64 trials each. Participants were instructed to initiate each trial by clicking the left mouse button within the starting box region, after which a fixation cross appeared on the screen for 500 ms. Following the offset of the fixation cross, a single letter was presented to the left or right side of the screen (i.e., Simon task). We opted to display targets below response areas-and not within response areas-to be consistent with other (Simon) mouse-tracking studies (e.g., Scherbaum et al., 2010;Grage et al., 2019;Scherbaum & Kieslich, 2018;Scherbaum & Dshemuchadse, 2019) and to minimize effects not related to motor demands (e.g., on perceptual components) as much as possible. 
The letter remained on the screen until participants responded (i.e., there was no response deadline) by clicking into the left or right response box. Feedback was displayed for either 1 s (correct) or 2.5 s (error) before the next trial started. Fig. 2 Schematic illustration of the stimulus display in Experiment 1 (A) and Experiment 2 (B). Participants had to initiate each trial by clicking into the starting box (depicted as grey squares), and after 500 ms a target letter was presented to the left or right of the screen. Participants responded by clicking into one of the two response boxes (depicted as black squares). Response boxes with solid lines were used in low motor demand blocks; response boxes with dotted lines were used in high motor demand blocks. In Experiment 1, the size of the response boxes differed by a factor of 3, and in Experiment 2, the distance of the response boxes differed by a factor of 2. Footnote 3: The sample size in the two experiments was somewhat arbitrarily set, but both practical constraints (e.g., participant availability) and empirical constraints (e.g., effect sizes in previous studies, larger variability in an online setting) were taken into account. In a previous study (Experiment 4 in Mittelstädt and Miller, 2020), we observed a rather large effect size (d = 1.12) for a shift between delta plots (as measured via a paired t-test on predicted Simon effects, as in the present study). With the actual sample sizes of 27 (Exp. 1) and 24 participants (Exp. 2), we would have over 80% power to detect a significant effect regarding the delta plot comparison of at least d = 0.50 (Exp. 1) and d = 0.53 (Exp. 2) at a significance level of α = 0.05 (one-sided paired t-test). The cut-off to exclude participants was somewhat arbitrarily set after inspecting the proportion of "too-early" trials for each participant in this and the second experiment. Note, however, that a qualitatively very similar result pattern and inferential statistics were also obtained when using more or less strict cut-offs (e.g., excluding participants with over 10 or 70% too-early trials). Data preparation For both percentage error (PE) and time analyses (i.e., initiation times and movement times) in both experiments, we made sure that mouse movements were continuously recorded, and we excluded trials with corrupt trajectories. This led to the exclusion of 5 (< 0.01%) and 20 (< 0.01%) trials in Experiments 1 and 2, respectively. Then, the data of participants who failed to follow task instructions by moving the cursor out of the starting box region before the stimulus letter was presented were excluded (n = 3, with 98%, 94% and 33% of trials, respectively). For the remaining 27 participants, less than 2.5% of trials were removed for this reason. Based on visual inspection of the overall response time distribution, we then additionally excluded "too-fast" (< 50 ms, < 0.5%) and "too-slow" (> 4 s, < 0.1%) trials. For time analyses, we additionally excluded choice error trials (< 1%). In both experiments, similar results were also obtained when including time outliers. Moreover, we also analyzed the data in the two experiments using (a) a stricter "too-fast" criterion (i.e., up to 200 ms), which is commonly used in key-based reaction time experiments to exclude anticipatory trials, and (b) stricter "too-slow" criteria (i.e., 2 s and 3 s). The result pattern and test statistics were quite similar, suggesting that the motor manipulation does not solely affect processes taking place immediately after stimulus onset.
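For illustration only, the trial-exclusion steps just described might look like the following in Python with pandas; the DataFrame column names are assumptions, and this is not the authors' analysis code.

```python
# Hypothetical trial-exclusion sketch mirroring the criteria described above.
# Column names ('rt' in ms, 'correct', 'left_start_early') are assumptions.
import pandas as pd

def prepare_trials(trials: pd.DataFrame) -> pd.DataFrame:
    kept = trials[~trials["left_start_early"]]               # drop "too-early" trials
    kept = kept[(kept["rt"] >= 50) & (kept["rt"] <= 4000)]   # too-fast / too-slow cut-offs
    return kept

def prepare_time_analyses(trials: pd.DataFrame) -> pd.DataFrame:
    kept = prepare_trials(trials)
    return kept[kept["correct"]]                             # keep only correct responses
```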
Design For the analyses of mean initiation times, mean movement times, and mean PE as dependent variables, we performed repeated-measures ANOVAs with the within-subject factors of motor demands (low, high) and congruency (congruent, incongruent). For the analyses of distributional times, we constructed delta plots separately for low and high motor processing blocks by creating 9 time percentiles (i.e., 10%, 20%, ...) separately for each participant within each of the four conditions (i.e., low/high × congruent/incongruent). Very similar results were also obtained in analyses using four percentiles. In order to further compare the shapes and offsets of the two delta plots, we summarized the delta plot for each participant and condition with a linear regression model predicting the delta in each bin from the mean time in that bin (e.g., Pratte et al., 2010; Mittelstädt & Miller, 2020). To check for an offset between the two conditions, we used the regression model for each condition to compute the predicted Simon effect at each participant's individual mean initiation and/or movement time. Thus, this analysis allowed us to compare the Simon effects at a common time value, thereby controlling for potential time-based fluctuations in the size of the Simon effect. We then performed paired t-tests on slopes and predicted Simon effects in order to test for differences in the time-course and offset of delta plots between the two conditions (e.g., Mittelstädt & Miller, 2020; Ellinghaus & Miller, 2018; Hübner & Töbel, 2019; Mackenzie et al., 2022).
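To make this distributional analysis concrete, here is a minimal Python sketch (with assumed variable names, not the authors' code) for one participant and one motor-demand condition: it forms nine percentile bins, computes the bin-wise deltas, fits a line to the deltas, and evaluates the predicted Simon effect at a common time value.

```python
# Sketch of a delta plot and the "predicted Simon effect" for one participant
# and one condition; data and parameter choices below are simulated assumptions.
import numpy as np

def delta_plot(rt_congruent, rt_incongruent, n_bins=9):
    """Bin-wise mean times and Simon effects (deltas) from two RT arrays."""
    qs = np.linspace(0.1, 0.9, n_bins)            # 10%, 20%, ..., 90% percentiles
    qc = np.quantile(rt_congruent, qs)            # congruent percentiles
    qi = np.quantile(rt_incongruent, qs)          # incongruent percentiles
    return (qc + qi) / 2.0, qi - qc               # mean time per bin, delta per bin

def predicted_simon_effect(bin_means, deltas, at_time):
    """Fit delta ~ time and evaluate the fit at a common time value."""
    slope, intercept = np.polyfit(bin_means, deltas, deg=1)
    return slope, intercept + slope * at_time

# Example with simulated movement times (ms):
rng = np.random.default_rng(1)
con = rng.normal(700, 100, 250)
inc = rng.normal(760, 110, 250)
bin_means, deltas = delta_plot(con, inc)
slope, effect_at_mean = predicted_simon_effect(
    bin_means, deltas, at_time=np.mean(np.concatenate([con, inc])))
```

Comparing the predicted effects of two conditions at the same time value is what allows the offset test to control for time-varying fluctuations in the Simon effect.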
Results and discussion Initiation times (ITs) Figure 3A shows the mean ITs as a function of motor demands (low, high) and congruency (congruent, incongruent). As can be seen from this figure, the ITs were quite similar across conditions, and the ANOVA revealed only a significant main effect of congruency, F(1, 26) = 7.25, p = 0.012, ηp² = 0.22 (all other ps > 0.564, all ηp²s < 0.02). The mean ITs were smaller in congruent than incongruent trials (302 ms versus 309 ms). The IT delta plots for the two motor demand conditions shown in Fig. 3C not only had similar shapes but also overlapped across the whole IT distribution. The mean slopes were positive for both low (0.07) and high (0.02) motor demands, and a paired t-test indicated no significant difference, t(26) = 1.28, p = 0.212, d = 0.25. Furthermore, there was evidence for an offset between the two delta plots, as indicated by a significant difference between the predicted Simon effects for the low (7 ms) and high (2 ms) motor demand conditions, t(26) = 1.71, p = 0.010, d = 0.33. Movement times (MTs) Figure 3A also shows the corresponding mean MTs. The ANOVA revealed significant main effects of motor demands, F(1, 26) = 430.24, p < 0.001, ηp² = 0.94, and congruency, F(1, 26) = 32.32, p < 0.001, ηp² = 0.55. The mean MT was smaller in blocks with low than high motor demands (659 ms versus 893 ms), and the mean MT was also smaller in congruent than in incongruent trials (749 ms versus 802 ms). There was also a significant interaction reflecting a larger Simon effect with low (62 ms) than high (43 ms) demands, F(1, 26) = 5.11, p = 0.032, ηp² = 0.16. As can be seen in Fig. 3C, the delta plots in the low and high demand conditions seem to follow qualitatively distinct time-courses; that is, only the delta plot in the high demand condition showed a decreasing time-course for larger MTs. Critically, across the entire range of MTs, the Simon effect in high demand blocks was consistently less than the one observed in low demand blocks. The mean slope was positive for the low demand condition and negative for the high demand condition (i.e., 0.15 and −0.04, respectively), and this difference was significant, t(26) = 4.54, p < 0.001, d = 0.87. Most importantly, the predicted Simon effect was larger for the low (85 ms) than the high demand condition (49 ms), and a paired t-test indicated a significant difference between these values, t(26) = 5.74, p < 0.001, d = 1.11. Thus, increased motor demands reduced the Simon effect in MTs by more than can be explained by the time-course of location-based activation. For completeness, we also reanalyzed the data while considering overall reaction times (i.e., initiation times + movement times). As can be seen in Appendix A, the results of this analysis also revealed smaller Simon effects under high compared to low motor demands at both the mean and the distributional RT level. Fig. 3 (partial caption): ... times (IT and MT) within each of 9 time deciles, plotted against the decile average times, as a function of motor demand (low, high), separately for Experiments 1 and 2. The error bars represent 95% within-participant standard errors calculated according to Morey (2008). Percentage errors (PEs) Overall, mean PEs were quite low (< 1%) and the descriptive pattern was generally consistent with the one found for mean RTs (see Fig. 3B). The ANOVA revealed no significant effects (all ps > 0.100, all ηp²s < 0.11). Experiment 2 In the second experiment, we manipulated motor demands by varying the distance between the start region and the response box regions. Thus, in different blocks of trials, the response boxes were at either a near (low motor demands) or a far (high motor demands) distance from the starting box. Participants Another sample of 30 participants from the same participant pool was tested online. Using the same trial exclusion criterion described for Experiment 1, the data of six participants were excluded. The final sample consisted of 24 participants (18 female, 23 right-handed), ranging in age from 19 to 23 years (M = 20.62). All participants gave informed consent, were tested in a single session lasting approximately 35 min, and received course credits for participation. Apparatus, stimuli and procedure The apparatus, stimuli and procedure were the same as in Experiment 1, except for the following changes. The response boxes always had the same size, and motor demands were instead manipulated by varying the distance from the starting box (cf. Fig. 2B). Data preparation and design We first excluded the data of one participant due to a technical error, and we then followed the same data preparation procedure and design as in Experiment 1. Specifically, we excluded the data of participants who moved the cursor out of the starting box region before the stimulus appeared in a large proportion of trials (n = 5, with 93%, 81%, 73%, 68% and 27% of trials, respectively). For the remaining 24 participants, less than 3% of trials were excluded for this reason. The first two blocks were considered practice and excluded from all analyses. For both PE and time analyses, we excluded "too-fast" (< 50 ms, < 0.5%) and "too-slow" (> 4 s, < 0.2%) trials. For time analyses, we additionally excluded choice error trials (< 1%). Results and discussion Initiation times (ITs) Figure 3D shows the mean ITs as a function of the experimental factors. The ANOVA with the within-subject factors of motor demands and congruency revealed again only a significant main effect of congruency, F(1, 23) = 6.21, p = 0.026, ηp² = 0.20
(all other ps > 0.569, all ηp²s < 0.02). The mean IT was smaller in congruent than in incongruent trials (275 ms versus 283 ms). The delta plots were overlapping with a similar shape (Fig. 3F). Indeed, there was no significant difference between the slopes in the low (0.03) and high (0.05) motor demand condition, t(23) = 0.61, p = 0.548, d = 0.12. Furthermore, there was no significant difference between the predicted Simon effects at the same absolute mean ITs in the low (3 ms) and high (1 ms) motor demand condition, t(23) = 0.52, p = 0.607, d = 0.11.

Movement times (MTs)

Figure 3D shows the mean MTs separately for each condition. The ANOVA with the within-subject factors of motor processing demands and congruency revealed again significant main effects of motor demands, F(1, 23) = 84.79, p < 0.001, ηp² = 0.79, and congruency, F(1, 23) = 65.97, p < 0.001, ηp² = 0.74. The mean MT was smaller in blocks with low than high motor demands (675 ms versus 810 ms), and the mean MT was also smaller in congruent than in incongruent trials (705 ms versus 780 ms). There was also a significant interaction reflecting a larger Simon effect with low (92 ms) than high (58 ms) demands, F(1, 23) = 7.24, p = 0.013, ηp² = 0.24. As can be seen in Fig. 3F, the delta plots in the two conditions followed similar, slightly increasing, time-courses. Most importantly, as in Experiment 1, the Simon effect in high demand blocks was consistently less than the one observed in low demand blocks across the whole MT distribution. Indeed, the mean slopes were positive for both the low and high demand conditions (i.e., 0.11 and 0.08, respectively), and a paired t-test indicated no significant difference between these values, t(23) = 0.66, p = 0.514, d = 0.14. Furthermore, the predicted Simon effect was significantly larger for the low (103 ms) than high demand condition (64 ms) at the same absolute MTs, t(23) = 3.58, p = 0.002, d = 0.73. Thus, these linear-fit-based comparisons confirm the visual impression that the Simon effect in MTs is larger for low than high motor demands when controlling for the increasing time-course of this effect. The results of the overall RT analysis in Appendix A lead to the same conclusion.

General discussion

In the present study, we examined the effect of increasing the motor processing demands on conflict processing in the Simon task. Specifically, we compared Simon effects in blocks that required more versus less precise mouse movements (i.e., small vs. large response boxes in Exp. 1) and in blocks that required long versus short mouse movements (i.e., far versus near response boxes in Exp. 2). We reasoned that participants would increase action control by strengthening target-related activation in blocks with high versus low motor demands and that this would reduce the distracting effect of location-based activation. In line with this hypothesis, the Simon effects on mean movement times were reduced under high motor demands, and additional delta plot analyses revealed that this pattern holds true even when controlling for time-varying distractor-based activation. In general, the present results fit well with studies emphasizing the need to consider motor processes when studying perceptual decision-making (e.g., Pierrieau et al., 2021; Cisek & Kalaska, 2005; Ulrich et al., 2007; Servant et al., 2021; Donner et al., 2009; Selen et al., 2012).
Thus, while formal decision-making models often (at least implicitly) assume that control processes operate independently from motor processes, the present results favor accounts that emphasize the interaction of cognitive control and motor planning (e.g., Wolpert & Landy, 2012; Wispinski et al., 2018). For example, researchers have shown that when making decisions under conflicting sources of information, the distracting activation at least partially also impacts on motor processes involved in initiating and executing the selected responses (e.g., Weissman, 2019; Buetti & Kerzel, 2009; Servant et al., 2016; Freud et al., 2015). Critically, we extend these previous findings by showing that directly manipulating motor processes can recursively bias conflict processing. This bias could be explained by a purely motor-based account: assuming that participants more strongly activate the target-based motor responses when a high level of motor demands is required, this would reduce the contribution of distractor-based motor activation when the two activations superimpose. The finding that the effects of the motor manipulation were primarily reflected in movement times rather than in initiation times reinforces the idea of differential activations under high versus low motor demands within motor-related stages. Although speculative, analogous reasoning may also explain why the Simon effect was larger with foot than hand responses in an earlier study (Mittelstädt & Miller, 2020). Since we are often required to perform more precise movements with our hands than feet, hand-related motor activation is probably better shielded from the influence of distractor-based activation. However, it is also possible that a high level of motor demands may tap into more of the limited central resources involved in selecting a response at a premotor level (for similar suggestions, see, e.g., Ulrich et al., 2007; Park et al., 2021; Welch, 1898). If so, one would intuitively expect that the Simon effect would tend to be larger instead of smaller for high than for low motor demands, also considering that many conflict effects (e.g., Stroop, Eriksen flanker) usually increase in size under cognitive load (e.g., Lavie et al., 2004). Interestingly, however, the Simon effect actually decreases under cognitively more demanding conditions (e.g., Wühr & Biebl, 2011; Zhao et al., 2010), with the delta plot pattern resembling the one found in the present study (e.g., Mittelstädt & Miller, 2020). One may speculate that less of the central resources (e.g., working memory) are devoted to distractor-based processing not only when cognitive load increases (e.g., Wühr & Biebl, 2011) but also when motor load increases. Relatedly, more efficient central (premotor) target processing with high motor demands might also entirely explain, or at least partially contribute to, modulations of the Simon effect (see Footnote 4). To separate influences on premotor versus motor processing, it might be possible to localize the effects of the present manipulation with psychophysiological measures (e.g., the lateralized readiness potential; see, e.g., Leuthold, 2011). It might also be useful to extend the current approach of investigating the effects of motor demands on conflict processing to other versions of conflict tasks and motor demand manipulations (e.g., manipulating the force required to press a key when using response force-sensitive keys; cf. Mattes et al., 2002; Miller & Alderton, 2006).
While the central results regarding Simon effects in movement times were generally consistent across the present experiments, there are also some hints that the effects of the experiment-specific motor demand manipulations (i.e., response box sizes versus distance) on processes modulating the Simon effect might at least partially differ. For example, the delta plots showed a decreasing time-course for larger movement times (i.e., > 900 ms) in Experiment 1 but not Experiment 2. Assuming that the slope of delta plots captures inhibitory processes (e.g., Ridderinkhof, 2002), this may indicate the presence of some extra suppression-related control processes operating on distractor-based activation when the size of the response boxes becomes smaller (see Footnote 5). Furthermore, as can be seen in Appendix B, exploratory analyses of the mouse trajectory data also point to both shared and distinct influences of the specific motor manipulations. Specifically, in both experiments mean deviations in mouse trajectories were smaller when motor demands increased, which seems to reinforce the idea of better motor control within high compared to low motor demand blocks. Moreover, in both experiments, the mouse trajectories were also susceptible to the distracting influences of stimulus location. Interestingly, however, this trajectory-based mean Simon effect was smaller when motor demands increased in Experiment 1, whereas this effect actually increased with motor demands in Experiment 2. In any case, even though the results do not provide decisive evidence regarding whether the motor manipulation interacts with premotor response selection and/or motor response activation in the specific experiments, the manipulation clearly influenced conflict processing throughout the entire movement time distribution in both experiments. Thus, the results are generally consistent with the idea that motor demands can bias the activation-competition process which is implemented in conflict-task models like DMC (e.g., Ulrich et al., 2015). In order to more directly examine this possibility, we examined whether and how DMC captured the empirical result pattern found in the two experiments. As can be seen in Appendix C, the model was generally able to capture the observed data with changes in estimated parameter values that were quite consistent across experiments. Most importantly, distractor-based activation was reduced in the high motor demand condition (i.e., the strength of the amplitude parameter of the distractor process was smaller with high than low motor demands). Although it also seems plausible that non-decision time increased under high motor demands, it should be emphasized that evidence accumulation models like DMC do not specify whether and how control processes are involved in non-decision (e.g., motor) processing. Therefore, some caution needs to be applied when interpreting these exploratory fitting results (e.g., Roberts and Pashler, 2000), and the comparison with (and development of) computational conflict-task models that bridge both cognitive and motor control systems is clearly warranted. We hope that the central empirical finding of reduced conflict effects with higher motor demands will help tackle this issue.

Appendix A: Additional analyses regarding overall reaction times

In this appendix, we present the analyses on overall reaction times (i.e., initiation times plus movement times).
Experiment 1

The ANOVA revealed significant main effects of motor demands, F(1, 26) = 467.18, p < 0.001, ηp² = 0.95, and congruency, F(1, 26) = 42.33, p < 0.001, ηp² = 0.62. The mean RT was smaller in blocks with low than high motor demands (968 ms versus 1203 ms), and the mean RT was also smaller in congruent than in incongruent trials (1056 ms versus 1114 ms). There was also a significant interaction reflecting a larger Simon effect with low (70 ms) than high (48 ms) demands, F(1, 26) = 6.08, p = 0.021, ηp² = 0.19. The mean slope was slightly positive for the low demand condition and negative for the high demand condition (i.e., 0.02 and −0.06, respectively), but this difference was not significant, t(26) = 1.56, p = 0.131, d = 0.30. The predicted Simon effect was larger for the low (77 ms) than high demand condition (55 ms), and a paired t-test indicated a significant difference between these values, t(26) = 3.01, p < 0.001, d = 0.58.

Experiment 2

The ANOVA with the within-subject factors of motor demands and congruency revealed again significant main effects of motor demands, F(1, 23) = 105.53, p < 0.001, ηp² = 0.82, and congruency, F(1, 23) = 100.48, p < 0.001, ηp² = 0.81. The mean RT was smaller in blocks with low than high motor demands (953 ms versus 1091 ms), and the mean RT was also smaller in congruent than in incongruent trials (980 ms versus 1063 ms). There was also a significant interaction reflecting a larger Simon effect with low (103 ms) than high (66 ms) demands, F(1, 23) = 6.29, p = 0.020, ηp² = 0.21. The mean slopes were positive for both the low and high demand conditions (i.e., 0.04 and 0.05, respectively), and a paired t-test indicated no significant difference between these values, t(23) = 0.62, p = 0.537, d = 0.13. Furthermore, the predicted Simon effect was significantly larger for the low (143 ms) than high demand condition (87 ms) at the same absolute RTs, t(23) = 2.16, p = 0.041, d = 0.44.

Footnote 5: We also reanalyzed all results by using an alternative measure of the time needed in each trial based on the trajectory velocity. Specifically, we calculated the timepoint where movement velocity first exceeded a criterion (velocity onset) and, second, the timepoint where movement velocity fell below this criterion while the cursor was near the response box zone (velocity offset). Velocity onsets (offsets) were determined by calculating a combined velocity profile from the x- and y-coordinates, with onsets (offsets) defined as the timepoint when velocity exceeded (fell below) 2 px/ms. We reasoned that with this analysis the time only captures movement times which reflect "ballistic" type movements towards the general response zone. This removes the remaining time portion of the movement involving small corrective movements within the vicinity of the response zone, which was particularly evident when responding to the small response zones used in Experiment 1. Interestingly, the difference between Simon effects on delta plots when using this ballistic movement measure was only found in Experiment 2 but not Experiment 1, providing further support for especially late effects of the motor demand manipulation in Experiment 1.
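The velocity-based criterion described in Footnote 5 above can be sketched as follows. This is not the original analysis code: the 2 px/ms criterion and the combined velocity profile follow the footnote, whereas the sampling format, the "near the response zone" distance threshold (near_px), and the function name are illustrative assumptions.

```python
import numpy as np

def velocity_onset_offset(x, y, t_ms, target_xy, criterion=2.0, near_px=100.0):
    """Return (onset, offset) times in ms from a combined velocity profile.
    Onset: first sample where velocity exceeds the criterion (2 px/ms).
    Offset: first later sample where velocity falls below the criterion while
    the cursor lies within `near_px` of the response zone centre."""
    x, y, t_ms = map(np.asarray, (x, y, t_ms))                # assumes increasing t_ms
    vel = np.hypot(np.diff(x), np.diff(y)) / np.diff(t_ms)    # px/ms per interval
    onset_idx = int(np.argmax(vel > criterion))               # first crossing
    dist = np.hypot(x[1:] - target_xy[0], y[1:] - target_xy[1])
    below = (vel < criterion) & (dist < near_px)
    below[: onset_idx + 1] = False                            # only accept offsets after onset
    offset_idx = int(np.argmax(below))
    return float(t_ms[onset_idx + 1]), float(t_ms[offset_idx + 1])
```

The difference between offset and onset would then serve as the "ballistic" movement-time measure used in the reanalysis described in the footnote.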
Appendix B: Additional analyses regarding mouse trajectories

In this appendix, we present the analyses on mouse trajectories. Specifically, we explored how strongly participants' mouse trajectories deviated from an optimal path as a function of the experimental conditions by using the R package mousetrap (Kieslich & Henninger, 2017). For this purpose, we calculated, per participant, the difference between the optimal and the observed trajectory in each trial (measured at 101 points), took the maximum deviation per trial, and computed the corresponding average deviation within each of four conditions (i.e., low/high × congruent/incongruent). We then performed a repeated-measures ANOVA with the within-subject factors of motor processing demands (low, high) and congruency (congruent, incongruent) on the mean deviations.

Experiment 1

Figure 4A,B shows the mouse trajectories as a function of motor processing demands (low, high). The ANOVA revealed significant main effects of motor processing demands, F(1, 26) = 7.91, p = 0.009, ηp² = 0.23, and congruency, F(1, 26) = 71.81, p < 0.001, ηp² = 0.73. The mean deviation was smaller in blocks with high than low motor processing demands (25 vs. 28 px), suggesting that participants' movements became more optimal when motor difficulty increased. The mean deviation was also smaller in congruent than in incongruent trials (13 vs. 40 px), indicating that Simon effects were also present in mouse trajectories. There was also an interaction between motor processing demands and congruency, F(1, 26) = 15.35, p = 0.001, ηp² = 0.37. The distracting influences of stimulus location on mouse trajectories were smaller in high (25 px) than low demand blocks (32 px).

Experiment 2

Figure 4C,D shows the mouse trajectories as a function of motor processing demands (low, high). The results of the ANOVA on mean deviation trajectories revealed again that all effects were significant. The main effect of motor processing demands indicated smaller deviations in blocks with high than low demands (21 versus 29 px), F(1, 23) = 25.63, p < 0.001, ηp² = 0.53. The main effect of congruency indicated smaller deviations in congruent than in incongruent trials (11 versus 39 px), F(1, 23) = 115.66, p < 0.001, ηp² = 0.83. In contrast to Experiment 1, the significant interaction reflected a smaller trajectory-based Simon effect with low (24 px) than with high (32 px) demands, F(1, 23) = 15.09, p = 0.001, ηp² = 0.40.

Appendix C: Additional information regarding DMC model fitting

The DMC model assumes that the outputs of controlled (target-based activation) and automatic (distractor-based activation) processes are superimposed into a single Wiener diffusion process (with the diffusion constant σ) toward the correct decision boundary b. The drift rate of this superimposed diffusion process is calculated as the sum of the temporally constant input of a target-based process with drift rate c and the time-varying input of a distractor-based process with drift rate i(t). Specifically, the input from the distractor-based process is modeled as a pulse-like gamma density function with shape parameter a, which reaches its peak amplitude A at time t_peak = (a − 1)·τ, after which it decreases back to zero. The RT in a given trial is the sum of the decision time needed to reach the response boundary b plus a normally distributed non-decision (residual) time (i.e., with mean μR and standard deviation σR). Starting point variability is implemented by sampling from a beta-shaped distribution B which varies symmetrically around zero from b1 to b2.
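As a rough numerical illustration of the pulse-like distractor input described above, the snippet below evaluates a gamma-shaped pulse that is rescaled so that it reaches its peak amplitude A at t_peak = (a − 1)·τ and then decays back toward zero. The parameter values are arbitrary, and this is a sketch rather than the DMCfun implementation.

```python
import numpy as np

def distractor_pulse(t, A=20.0, a=2.0, tau=100.0):
    """Gamma-shaped pulse rescaled so that its maximum equals A.
    It peaks at t_peak = (a - 1) * tau and then decays back toward zero."""
    t = np.asarray(t, dtype=float)
    t_peak = (a - 1.0) * tau
    kernel = t ** (a - 1.0) * np.exp(-t / tau)                 # unnormalized gamma kernel
    peak_value = t_peak ** (a - 1.0) * np.exp(-t_peak / tau)   # kernel value at its maximum
    return A * kernel / peak_value

t = np.arange(1.0, 1001.0)          # 1-ms grid, matching the reported step size
pulse = distractor_pulse(t)
print(round(float(pulse.max()), 3), float(t[pulse.argmax()]))  # -> 20.0 at 100 ms
```

With the illustrative values a = 2 and τ = 100 ms, the pulse peaks at 100 ms, consistent with the t_peak = (a − 1)·τ relation stated above.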
The DMC model was fitted to the observed individual data (i.e., overall reaction times) of the two experimental conditions (i.e., high vs. low motor processing demands) from each experiment by using the R-package DMCfun (Mackenzie & Dudschig, 2021). Following Ulrich et al. (2015), the model was fitted simultaneously to condition-specific errors and RT distributions by minimizing the root-mean-squared error (RMSE) between observed and predicted values (see also Mittelstädt et al., 2021). Specifically, the DMCfun package calculates a cost value for both the percentile RT data (RMSE_RT) and the error data (RMSE_CAF), with the total cost being a weighted sum of the two (for more details, see Mackenzie & Dudschig, 2021; Ulrich et al., 2015). As fitting algorithm, DMCfun makes use of the R-package DEoptim (Mullen et al., 2011), which uses the differential evolution algorithm. The mean best-fitting parameters and mean RMSEs as a function of motor demands for each experiment are shown in Table 1, and the corresponding model fits to the distributional RT and error data are visualized in Fig. 5. In the following, we report the results of paired t-tests with the factor motor demands (low, high) on the estimated values.

(Fragment of the Table 1 caption: "... (Ulrich et al., 2015) to the experimental data of the two motor processing demand conditions within each experiment as well as weighted root-mean-square errors (RMSE) averaged across participants. Standard Error (SE) of means in parentheses. Following Ulrich et al. (2015), the step size was Δt = 1 ms, the diffusion constant was fixed at σ = 4, and the shape parameter of the distractor process was fixed at a = 2.")

(Fragment of the Figure 5 caption: "... of correct RTs separately for congruent and incongruent trials, conditional accuracy functions (CAF) separately for congruent and incongruent trials, and RT delta plots showing incongruent minus congruent differences in mean RTs within each of 9 deciles plotted against the decile averages, respectively.")

Experiment 1

The strength of distractor-based processing (i.e., amplitude A) was reduced under high compared to low demands, t(26) = 5.25, p < 0.001. Furthermore, the drift rate of target-based processing c was smaller under high compared to low demands, t(26) = 7.99, p < 0.001. Both the mean and the variability of residual times were larger under high compared to low demands, t(26) = 11.46, p < 0.001, and t(26) = 3.21, p = 0.003, respectively. There were no significant differences concerning the other parameters (all ps > 0.356).

Experiment 2

The result pattern was very similar to Experiment 1. Specifically, the amplitude A of the distractor process was again reduced under high compared to low demands, t(23) = 3.26, p = 0.003. The drift rate of target-based processing c was again smaller under high compared to low demands, t(23) = 2.83, p = 0.010. Finally, mean residual times were again larger under high compared to low demands, t(23) = 10.18, p < 0.001. There were no significant differences concerning the other parameters (all ps > 0.195).

Acknowledgements

This research was supported by a grant from the Baden-Württemberg Stiftung to Victor Mittelstädt. We thank Nikolas Maier, Samuel Sonntag, and Mareike Tschaut for helpful discussions and their support in data collection. Moreover, we thank Roland Pfister, Andreas Voss and Peter Wühr for many helpful comments on a previous version of this manuscript.

Funding

Open Access funding enabled and organized by Projekt DEAL.

Declarations

Open practice statements: Raw data are available via the Open Science Framework at https://osf.io/ce9hm/. Analyses scripts are available upon reasonable request.
Ethical standards: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.

Informed consent: Informed consent was obtained from all individual participants included in the study.

Conflict of interest: The authors declare that they have no conflict of interest.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2022-11-21T06:16:48.950Z
2022-11-20T00:00:00.000
{ "year": 2022, "sha1": "0c8cb47b09880dcadf3d56da39887202456bd367", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00426-022-01755-y.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "fc53cb0c572f48ffff25086905c61bd7f9e7bdc3", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
246422829
pes2o/s2orc
v3-fos-license
Supplementing Soy-Based Diet with Creatine in Rats: Implications for Cardiac Cell Signaling and Response to Doxorubicin

Nutritional habits can have a significant impact on cardiovascular health and disease. This may also apply to cardiotoxicity caused as a frequent side effect of chemotherapeutic drugs, such as doxorubicin (DXR). The aim of this work was to analyze if diet, in particular creatine (Cr) supplementation, can modulate cardiac biochemical (energy status, oxidative damage and antioxidant capacity, DNA integrity, cell signaling) and functional parameters at baseline and upon DXR treatment. Here, male Wistar rats were fed for 4 weeks with either standard rodent diet (NORMAL), soy-based diet (SOY), or Cr-supplemented soy-based diet (SOY + Cr). Hearts were either freeze-clamped in situ or following ex vivo Langendorff perfusion without or with 25 μM DXR and after recording cardiac function. The diets had distinct cardiac effects. Soy-based diet (SOY vs. NORMAL) did not alter cardiac performance but increased phosphorylation of acetyl-CoA carboxylase (ACC), indicating activation of rather pro-catabolic AMP-activated protein kinase (AMPK) signaling, consistent with increased ADP/ATP ratios and lower lipid peroxidation. Creatine addition to the soy-based diet (SOY + Cr vs. SOY) slightly increased left ventricular developed pressure (LVDP) and contractility dp/dt, as measured at baseline in perfused heart, and resulted in activation of the rather pro-anabolic protein kinases Akt and ERK. Challenging perfused heart with DXR, as analyzed across all nutritional regimens, deteriorated most cardiac functional parameters and also altered activation of the AMPK, ERK, and Akt signaling pathways. Despite partial reprogramming of cell signaling and metabolism in the rat heart, diet did not modify the functional response to supraclinical DXR concentrations in the used acute cardiotoxicity model. However, the long-term effect of these diets on cardiac sensitivity to chronic and clinically relevant DXR doses remains to be established.

Introduction

Nutritional habits are increasingly recognized for their impact on cardiovascular health and disease, including prevention of cancer relapse or different comorbidities [1].
Materials

Doxorubicin hydrochloride (DXR) was purchased from Sigma (Saint Louis, MO, USA) or Selleck Chemicals (Houston, TX, USA). A stock solution (10 mM) was prepared in water and kept frozen until use. Further dilutions were prepared in Krebs-Henseleit buffer [31] just before heart perfusion. Protease inhibitor cocktail tablets were obtained from Roche (Mannheim, Germany) and phosphatase inhibitor cocktail was obtained from Pierce (Rockford, IL, USA). Creatine (creatine monohydrate, Creapure®) was a gift from AlzChem Trostberg GmbH (Trostberg, Germany).

Animals

All procedures involving animals were approved by the Grenoble Ethics Committee for Animal Experimentation (15_LBFA-U884-HD-01). Male Wistar rats initially fed a standard chow for young rats (A03 reference U8200, Safe, Augy, France; 3237 kcal/kg) were then differentially fed for 4 weeks starting from 2 months of age. One group of animals continued to receive the standard chow for adult rats (NORMAL; A04 reference U8220, Safe, Augy, France; 2791 kcal/kg) containing 4% (w/w) fish hydrolysate and 8% (w/w) soy meal. The second group was fed a Cr-free soy-based chow (SOY; modified A04 reference U8220 version 149, Safe, Augy, France; 2711 kcal/kg) where fish hydrolysate was replaced by the same percentage of soy isolate. The third group was fed the latter chow supplemented with 2% (w/w) creatine (SOY + Cr, modified A04 reference U8220 version 150, Safe, Augy, France; 2711 kcal/kg). Diets were purchased from Safe (Augy, France). After 4 weeks of the differential diets, animals were anaesthetized with sodium pentobarbital (50 mg/kg i.p.), and hearts were freeze-clamped in situ (immediately after thoracotomy of respirator-ventilated animals) or following ex vivo Langendorff perfusion with or without 25 μM DXR (see the experimental scheme in Figure 1). Frozen hearts were stored at −80 °C.

Rat Heart Perfusion

Perfusion experiments were essentially performed according to the protocol described earlier [31][32][33][34]. Briefly, rats were anaesthetized with sodium pentobarbital (50 mg/kg i.p.) and heparinized (1500 IU/kg i.v.). Hearts were quickly removed and perfused at constant pressure in a non-circulating Langendorff apparatus with Krebs-Henseleit buffer, first for 30 min for stabilization (baseline) with Krebs-Henseleit buffer alone, and then for 80 min either with Krebs-Henseleit buffer without (control) or with 25 μM DXR. The DXR concentration was chosen on the basis of our previous studies [31][32][33][34]. During perfusion, systolic pressure, end-diastolic pressure, dp/dt and −dp/dt, and heart rate were recorded every 10 min.
Metabolites, Oxidative Damage/Antioxidant Status, and DNA Integrity

Protein-free extracts were obtained by perchloric acid precipitation, and metabolites were quantified using HPLC (AMP, ADP, ATP) or a spectrophotometric assay (Cr and PCr) as described earlier [32,34]. Markers of oxidative damage and antioxidant status were quantified in heart extracts prepared as described earlier [35]. Reduced thiol (SH) groups were assayed according to [36]. N-acetyl cysteine (NAC) in the range of 0.125 to 1 mM (prepared from a 100 mM stock solution) was used for calibration. Standards and heart extracts were diluted in 50 mM phosphate buffer, 1 mM EDTA, pH 8, and 2.5 mM 5,5′-dithio-bis-(2-nitrobenzoic acid) (DTNB), and subsequently the absorbance was measured at 412 nm. Antioxidant status was evaluated using the ferric reducing antioxidant power (FRAP) assay [37]. Plasma thiobarbituric acid reactive substance (TBARS) concentrations were assessed as described [38]. Total genomic DNA was isolated with the QIAamp DNA mini kit (Qiagen) according to the manufacturer's instructions. The final concentration and quality of DNA were estimated both spectrophotometrically (DU-640; Beckman Instruments, Milan, Italy) at 260 nm, and by agarose gel electrophoresis. Nuclear and mitochondrial DNA damage were evaluated using a two-step strategy based on a long PCR and real-time PCR as described in detail elsewhere [34].

Immunoblotting

SDS-PAGE separation of heart homogenates (40-50 µg) and immunoblotting were performed according to standard procedures [34]. The transfer quality and equality of loading were checked by Ponceau staining. The blots were developed with chemiluminescence reagent (ECL Prime, GE Healthcare) using a CCD camera (ImageQuant LAS 4000, GE Healthcare). The quantification of signals was conducted using ImageQuantTL software (GE Healthcare). Tubulin or total protein was used for normalization of the phosphorylated proteins (probed on different membranes). The primary antibodies were obtained from Cell Signaling.

Data Analysis

Results are expressed as means ± SEM, if not stated otherwise. Depending on the experimental design, statistical analysis was performed using one- or two-way ANOVA (Sigma Plot; Systat Software, San Jose, CA, USA) or linear regression with dummy variables and robust standard errors (Stata 13; Stata Corp., College Station, TX, USA) to deal with the heterogeneity of variance. When appropriate, these were followed by the Student-Newman-Keuls or Bonferroni test, respectively, for pairwise comparisons. The one-way ANOVA P-values are not reported; the results of pairwise comparisons are reported in the case of significant one-way ANOVA P-values. For the two-factorial analysis, we report significance values for the effects of diet (P Diet, independent of DXR), DXR (P DXR, independent of diet), and the interaction of both diet and DXR treatment (P Diet*DXR, indicating if the effect of one factor depends on the level of the second factor), and the results of pairwise comparisons (p). The P or p values are given in the graphs with 3 decimal places (and in bold characters) if significant, and with 2 decimal places if not. A value of P or p < 0.05 (for interaction P < 0.1) was considered statistically significant.
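For readers who want to reproduce the two-factor layout described above on their own data, a minimal sketch of a diet × DXR two-way ANOVA is given below. The original analyses were run in SigmaPlot and Stata (including robust standard errors), so this statsmodels version only approximates the design; the data-frame layout, column names, and example values are assumptions.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def two_way_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Two-way ANOVA with factors diet (NORMAL/SOY/SOY+Cr) and DXR (control/DXR);
    'value' holds the measured parameter (e.g., LVDP), one heart per row."""
    model = ols("value ~ C(diet) * C(dxr)", data=df).fit()
    # Rows of the table: C(diet), C(dxr), C(diet):C(dxr), Residual
    return sm.stats.anova_lm(model, typ=2)

# Hypothetical example data (values are made up, two hearts per cell for brevity):
example = pd.DataFrame({
    "diet":  ["NORMAL"] * 4 + ["SOY"] * 4 + ["SOY_Cr"] * 4,
    "dxr":   ["control", "control", "DXR", "DXR"] * 3,
    "value": [95, 100, 70, 75, 98, 102, 68, 72, 105, 110, 74, 78],
})
print(two_way_anova(example))
```

In this notation, the rows for C(diet), C(dxr), and C(diet):C(dxr) correspond to the reported P Diet, P DXR, and P Diet*DXR values, respectively.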
Results

The effects of the three nutritional regimens, standard rodent chow (NORMAL), soy-based diet (SOY), and Cr-supplemented soy-based diet (SOY + Cr), on the heart under control and DXR-challenged conditions were studied in a rat model (Figure 1). After one month of the differential diet, there was no significant difference in animal body weight (NORMAL 366 ± 6 g (n = 36), SOY 387 ± 12 g (n = 23), SOY + Cr 370 ± 11 g (n = 21)). Biochemical parameters were determined both in hearts freeze-clamped in situ immediately after thoracotomy, and after ex vivo Langendorff perfusion, consisting of a 30 min stabilization period followed by 80 min of perfusion without (control group) or with 25 μM DXR (DXR group). Functional parameters were measured ex vivo during Langendorff perfusion.

Heart Function

Cardiac function was first determined in perfused hearts at baseline (after 30 min of stabilization, Figure 2A), and then after 80 min of subsequent perfusion without or with DXR (Figure 2B). Data at baseline are considered to reflect the in vivo situation. The Cr-supplemented diet (SOY + Cr vs. SOY, Figure 2A) affected cardiac function, with a slight increase in the cardiac developed pressure LVDP (p = 0.043) and contractility dp/dt (p = 0.039). There was no significant effect of soy diet (SOY vs. NORMAL). DXR perfusion impaired almost all cardiac functional parameters (P DXR < 0.001, Figure 2B) except heart rate, with a time-course (Figure S1) consistent with previous studies [31][32][33][34]. A statistically significant interaction between diet and DXR was only seen for diastolic pressure at the end of perfusion (EDP; P Diet*DXR = 0.047, Figure 2B), increasing more in the group fed the Cr-free soy chow (SOY vs. NORMAL, p = 0.002; SOY vs. SOY + Cr, p = 0.005).
Figure 2. Heart function: effect of diet and DXR. Hemodynamic parameters: left ventricular developed pressure (LVDP), end-diastolic pressure (EDP), dp/dt, −dp/dt, heart rate (HR), and rate pressure product (RPP) measured in Langendorff perfused hearts after 30 min of stabilization (A) or after 30 min of stabilization followed by an additional 80 min of perfusion (B) without or with 25 μM DXR (empty or hatched bars, respectively). EDP values are given only in (B), as during the stabilization period shown in (A), EDP was adjusted to 5 mm Hg and thereafter the volume of the balloon remained unchanged. Statistical analysis with linear regression followed by the Bonferroni test for pairwise comparisons. For −dp/dt, HR, RPP in (A), and HR in (B), there are no statistically significant differences between groups. Mean ± SEM, n = 11-28 (A), n = 4-14 (B).

Creatine and Adenylate Levels

Cellular Cr availability and energy state were studied by determination of the Cr and adenylates in heart in situ and after ex vivo perfusion. A deteriorated energy state is often observed in DXR cardiotoxicity [39]. The diets affected the free and total Cr content in situ (Figure 3A) and also after ex vivo perfusion (P Diet = 0.019 and 0.058, respectively, Figure 3B). As expected, 4 weeks of oral Cr supplementation (SOY + Cr vs. SOY) increased free and total Cr (p = 0.036 and p = 0.028, respectively, in ex vivo perfused hearts). Interestingly, standard chow-fed animals also showed increased Cr in comparison to SOY (NORMAL vs. SOY), but the effect was seen only in the group used for ex vivo perfusion (p = 0.016 for free Cr and a strong tendency, p = 0.08, for total Cr), possibly due to a higher Cr content in the batch of NORMAL chow used here. Cardiac adenylate levels and ADP/ATP and AMP/ATP ratios remained largely unchanged between the diets (Figure 3A,B, lower rows), except for the soy chow, where the ADP/ATP ratio increased in hearts clamped in situ (SOY vs. NORMAL, p = 0.002). DXR perfusion across all nutritional regimens affected the ATP content and increased AMP/ATP ratios (P DXR = 0.027 and 0.012, respectively, Figure 3B). No statistically significant interaction was found between diet and DXR (Figure 3B).

Cell Signaling Pathways

Nutrition can lead to sustained alterations in cell signaling, and this can also occur with DXR treatment as we have shown earlier [31][32][33][34].
We therefore analyzed the activation of specific key signaling pathways involved in stress and pro-survival responses: AMP-activated protein kinase (AMPK; determined by phosphorylation of AMPK itself and its substrate acetyl-CoA carboxylase, ACC), extracellular signal-regulated kinase (ERK, determined by ERK phosphorylation), and Akt (determined by either Akt phosphorylation or global phosphorylation of Akt substrates) in heart in situ (Figure 4A) and after ex vivo perfusion (Figure 4B). Our experiments revealed a differential cardiac activation pattern of these signaling pathways, dependent on the diet (Figure 4A,B). The soy-based diet (SOY vs. NORMAL) almost doubled ACC phosphorylation (p = 0.004 in situ; p = 0.002 after ex vivo perfusion), consistent with the above-described increase in the ADP/ATP ratio. Changes in P-AMPK were similar but weaker and did not reach significance. The addition of Cr to the soy-based diet (SOY + Cr vs. SOY) led to no further change in P-ACC but activated ERK (p = 0.014 in situ) and Akt (p = 0.015 in situ; p < 0.001 after ex vivo perfusion at Ser473, and a tendency with p = 0.08 at Thr308). Perfusion with DXR changed the phosphorylation of AMPK, ACC, ERK, and Akt at Ser473 (P DXR = 0.022, P DXR = 0.003, P DXR < 0.001, strong tendency with P DXR = 0.06, respectively, Figure 4B). This confirmed our previous observations in animals fed a NORMAL diet [31,34], namely a DXR-induced inactivation of AMPK signaling with a decrease of P-AMPK and P-ACC (tendencies of p = 0.11 and 0.10, respectively), together with an activation of Akt (p = 0.005 at Ser473) as one factor potentially involved in AMPK inactivation [34]. For phosphorylation of Akt at Ser473, the interaction between diet and DXR was significant (P Diet*DXR = 0.09, Figure 4B), with an increase only observed in the NORMAL group.

Oxidative Damage, Antioxidant Status, and DNA Integrity

Diet and DXR can affect the cellular oxidative/antioxidant balance. As a readout, we determined peroxidized lipids (TBA reactive substances, TBARS) and antioxidant status (reduced thiols; ferric reducing antioxidant power, FRAP), along with the integrity of mtDNA and nDNA in hearts in situ (Figures 5A and 6A) and after ex vivo perfusion (Figures 5B and 6B). In situ, the soy-based diet diminished FRAP (SOY vs. NORMAL; p = 0.032, Figure 5A). In the ex vivo perfused heart, the diets affected both TBARS and FRAP (P Diet < 0.001 and 0.001, respectively, Figure 5B). Again, the soy-based diet reduced both parameters (SOY vs. NORMAL; p < 0.001 for both; SOY vs. SOY + Cr; p = 0.023 also for both, Figure 5B). Despite these differences, the integrity of nuclear and mitochondrial DNAs was not affected by diet, either in situ (Figure 6A) or after perfusion (Figure 6B). Oxidative and genotoxic stress are molecular hallmarks of DXR toxicity [34,40,41]. After DXR perfusion, TBARS tended to increase as compared to the control (P DXR = 0.058, Figure 5B), but the lower TBARS and FRAP values in the SOY group as compared to NORMAL were preserved (Figure 5B).
Consistent with our previous study in ex vivo Langendorff perfused heart [34], DXR caused extensive mitochondrial and nuclear DNA damage, but again, the extent of damage was unaffected by the three diet regimens ( Figure 6B). Figure 5. Oxidative/antioxidant status: effect of diet and DXR. Reduced thiols, peroxidized lipids (TBARS), and total antioxidant power (FRAP) measured in hearts freeze-clamped in situ immediately after thoracotomy (A) or following ex vivo Langendorff perfusion (B) without or with 25 μM DXR (empty or hatched bars, respectively). Statistical analysis with one-way (A) or two-way (B) ANOVA followed by the Student-Newman-Keuls test for pairwise comparisons. For thiols, TBARS in (A) and thiols in (B), there are no statistically significant differences between groups. Mean ± SEM, n = 5-6 (A), n = 4-7 (B). Discussion This study reveals nutrition-induced alterations in cardiac function, cell signaling, and some biochemical markers after only 4 weeks of differential feeding of young male rats. The Cr-free soy-based diet (SOY) as compared to standard rodent diet (NORMAL) activated AMPK signaling as revealed by increased ACC phosphorylation, slightly increased the ADP/ATP ratio, and lowered both lipid peroxidation and the total antioxidant capacity. Supplementation of SOY with 2% Cr (SOY + Cr) as compared to SOY moderately increased cellular Cr, predominantly affected signaling pathways by activating Akt and ERK, and slightly increased cardiac developed pressure and contractility (LVDP and dp/dt) at baseline. These alterations are, in principle, relevant for cardiac health and its response to DXR, but they did not alleviate cardiac dysfunction induced by acute DXR challenge in the perfused heart model applied here. Three key signaling pathways with fundamental importance for cardiovascular health were altered by diet: AMPK, ERK, and Akt. This may be critical to many functional and biochemical changes detected in our study. AMPK is a central energy sensor and regulator of the cell. During energy stress, it is activated allosterically by AMP and ADP, favors catabolism, and maintains cellular energy homeostasis [42]. Soy-based diet (SOY) as compared to standard chow (NORMAL) led to strong phosphorylation of ACC, an AMPK substrate, reporting activation of this pathway in the heart, consistent with a slightly reduced energy state in situ. Perfusion with DXR is known to induce a drop in the cardiac energy state [39], but paradoxically, this often occurs without activation of AMPK signaling, as reported by us [31,34] and others [43]. In the present study, we also observed signs of bioenergetic impairment by DXR in perfused heart, together with decreased AMPK activation. Only the SOY group maintained AMPK energy signaling as seen at the level of P-AMPK and P-ACC. Indeed, some treatments known to activate AMPK were shown to mitigate DXR cardiotoxic effects, including diet restriction [43,44]. Different soy components were implicated in AMPK activation in tissues other than heart, such as phytoestrogens in rat [45], genistein in cultured cancer cells [46], and different types of polyphenols [47,48]. The addition of Cr (SOY + Cr) did not (further) activate cardiac AMPK, consistent with a study on skeletal muscle [49], and not supporting earlier data on muscle cells [50]. Phosphorylation and activation of the rather pro-anabolic Akt and ERK by diet in heart in situ occurred rather inversely relative to AMPK. While the soy diet (SOY vs. 
NORMAL) increased AMPK activity and left Akt activity unchanged or tended to diminish ERK activity, the addition of Cr (SOY + Cr vs. SOY) led to activation of Akt and ERK, with a trend of lower AMPK activity. This supports a negative cross-talk of these kinases, as described by us [34] and others [51][52][53], by which AMPK is inhibited via Akt-dependent phosphorylation in its catalytic α-subunit. This cross-talk can modulate AMPK activity in the heart both under basal conditions in situ and during DXR perfusion [34]. The inhibitory effects of the soy-based diet on Akt and ERK could be mediated by genistein, known to inhibit Tyr kinases (for a review, see [54]) and to have an anti-proliferative effect, consistent with indirect AMPK activation [55][56][57][58][59]. The activation of Akt and ERK seen with Cr supplementation was also reported for skeletal muscle [60,61] and suggested by recent database meta-analysis [62]. Such rather pro-anabolic effects could mediate many cytoprotective aspects of Cr supplementation, such as in cardioprotection [18,19], muscular dystrophies, neuromuscular and neurodegenerative disorders [16,63], brain health [64], or wound healing [65]. Notably, upregulation of Akt was shown to confer significant cardioprotection in DXR-treated animals [66]. Slightly altered performance of the perfused heart was observed only after Cr supplementation (SOY + Cr vs. SOY) under baseline conditions. Contractility (dp/dt) and developed pressure (LVDP) were modestly increased, together with a trend of a decreased heart rate, with the latter resulting in an unchanged rate pressure product (RPP). Beyond bioenergetics, Cr enhances the expression of muscle myogenic regulatory factors as reported for skeletal muscle [60,61,67,68] and affects signaling pathways, such as the Akt activation mentioned above. An earlier study did not detect Cr-induced changes in cardiac function [69], possibly because the reference diet plays an important role. Soy is not only Cr-free but may itself have additional cardiovascular effects not examined here, such as blood pressure-lowering effects [70][71][72][73][74]. Moreover, the higher basal activity of the AMPK pathway may potentiate Cr effects, since both are directed to improve cell energetics. Perfusion with DXR deteriorated cardiac function as expected, but diet did not modulate the functional response, except for a lower increase in diastolic pressure in the SOY + Cr vs. SOY group. Cr was also not effective in a perfused heart model for acute oxidative stress [75]. Possibly, the acute insult at a supraclinical DXR dose is too strong, and long-term exposure of animals to low clinical DXR doses would be more suitable for an analysis of dietary effects. A striking feature of the soy-based diet (SOY vs. NORMAL) was the low cardiac level of both lipid peroxidation and total antioxidant capacity. This was most pronounced in the ex vivo perfused heart, likely because of perfusion-associated oxidative stress. The antioxidant properties of soy include the main soy isoflavone genistein [76] and other phytoestrogens or polyphenolic compounds. They share a high reactivity as hydrogen or electron donors, can stabilize unpaired electrons as polyphenol radicals, chelate transition metal ions, modulate the expression of antioxidant defense genes, and activate signaling pathways [77,78].
However, a general lipid-lowering capacity of soy could also reduce detectable lipid peroxides [27,54,79], consistent with AMPK-induced reduction of lipid anabolism and an increase of their catabolism. The combined reduction of both lipid peroxidation and total antioxidant capacity may seem surprising, but the latter is likely an adaptation to the lower oxidative stress levels in the SOY group, as also indicated by literature data [80]. Cr supplementation had no such dramatic effects. Further, diet-related differences in oxidative stress were not reflected in oxidative DNA lesions, because these were either below the detection threshold, or they were rapidly removed by repair systems. Thus, mt and nDNA are unlikely targets and/or mediators of the diet-related effects described herein. Regarding Cr, it should be emphasized that supplementation can only modestly increase cardiac Cr levels, as observed here and in earlier studies [69]. With increasing cellular Cr, a feedback mechanism downregulates the creatine transporter in charge of cellular Cr uptake [69]. Nevertheless, even this moderate increase in intracellular Cr was sufficient to trigger some significant cardiac effects. Finally, our study calls for a note of caution with respect to diets used in animal studies. Even basic formulations likely contain ingredients with considerable biological activity. In particular, the soy-derived products commonly used in rodent chow (soy meal or protein isolate) must be considered as bioactive agents or even nutraceuticals [28]. The present study used a soy-based diet as a genuine Cr-free control chow for Cr supplementation studies. However, replacing 4% fish hydrolysate with 4% soy protein isolate already generates a bioactive diet. Soy-derived products contain isoflavones (genistein, daidzein, and equol) that are qualified as phytoestrogens due to their ability to act in the body as estrogens or selective estrogen receptor modulators [72]. Thus, caution is advised when translating results obtained with animals fed a high-soy diet directly to humans, especially those consuming a traditional Western diet. The quantity of circulating phytoestrogens in rats ingesting a soy-based diet may be comparable to that in Asian people who eat a soy-based diet [28]. Even though soy dietary supplements have become popular in vegetarian/vegan cuisine, the overall benefit and/or safety of this diet is still a matter of debate (for a review, see [27][28][29][30]). Notably, phytoestrogens may negatively affect the reproductive system, mainly in males and children [28]. In view of these controversies, the controlled use of soy or other bioactive compounds in animal diets and the explicit analysis of their effects is highly desirable. In this context, the choice of animal sex should also be considered. For example, practically all animal studies on DXR cardiotoxicity were conducted with male animals, and these are then compared to human studies performed with both sexes, although sex may affect the cardiac response to DXR. Earlier literature suggested that female sex is a risk factor for DXR cardiotoxicity [81][82][83], but more recent reports show that female sex hormones may protect against DXR cardiotoxicity by reducing oxidative stress and proinflammatory responses [84,85]. One may even ask whether phytoestrogens could successfully mimic this effect.
Conclusions In conclusion, a soy-based diet alone or supplemented with Cr, fed for four weeks to rats, is sufficient to alter cardiac function, cell signaling, and biochemical markers of the energy state and oxidative stress. These effects are relevant for cardiovascular health but were not sufficient to alleviate cardiac dysfunction induced by a supraclinical DXR concentration in the perfused rat heart model. However, whether these diets could affect the long-term response to chronic and clinically relevant DXR doses in the rat model described here, or in human patients treated with DXR, remains to be established.
2022-01-31T16:08:57.049Z
2022-01-28T00:00:00.000
{ "year": 2022, "sha1": "630de6312ec6bbe3a668abe47b8627f9ff2cf3e9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/14/3/583/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fd65d96d52722d9e095fc99faab7a6653d30731b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6897834
pes2o/s2orc
v3-fos-license
Spontaneous Tumours in Guinea Pigs

Jelínek F.: Spontaneous Tumours in Guinea Pigs. Acta Vet. Brno 2003, 72: 221-228. The aim of the study is to describe spontaneous tumours in guinea pigs. Twenty neoplasias from 19 guinea pigs were examined histologically. In 15 cases biopsy samples were examined; samples from four animals were collected during autopsy. Except for one, all animals were kept as pets. Skin tumours were diagnosed in five of them. They appeared in different locations: abdomen, plantar side of the hind leg, back (in two animals), and rump, and were of different sizes; the largest one was five cm in diameter. All tumours were of follicular origin: two trichofolliculomas, two trichoepitheliomas, and one malignant pilomatricoma. The age of affected animals ranged from two to 7.5 years. Tumours of the mammary gland were present in five guinea pigs. Adenocarcinoma was diagnosed in two males, and sarcoma of myoepithelial origin was found in one female. Tubular adenoma was present in one two-year-old female, and adenomatous hyperplasia of the mammary gland was observed in another female of the same age. In six guinea pigs, three females and three males, between three and five years of age, there were tumours in subcutaneous tissue. Three were lipomas; in one animal the lipoma was multiple. Liposarcoma was found in one male, and myxoid liposarcoma was diagnosed in another one. Ossifying fibroma was histologically diagnosed in one female. Lymphatic leukaemia was observed in three males. All these animals were 4 years old. Hepatocellular adenoma was found in a 5-year-old female suffering also from trichofolliculoma as mentioned above. Data about tumours in guinea pigs are relatively rare, and therefore information along this line is useful both for clinical practice and comparative pathology.

Keywords: histopathology, neoplasia, skin, mammary gland, soft tissues, haemopoietic tissue, comparative

Spontaneous tumours in guinea pigs are, according to literature data, rather rare. With the exception of leukaemia in certain inbred strains, neoplasias are practically non-existent in animals less than 1 year of age (Wagner and Manning 1976). In animals surviving three years the frequency of tumours is as high as 15% (Blumenthal and Rogers 1965). In some laboratory strains, animals older than three years had a tumour incidence ranging from 14.4% to 30% (Wagner and Manning 1976). A general overview of tumours in guinea pigs is presented by Blumenthal and Rogers (1965), Wagner and Manning (1976), Squire et al. (1978), and Percy and Barthold (1993). A report of 14 spontaneous tumours was published by Kitchen et al. (1975), and lymphoblastic leukaemia in two strains was described by Hong et al. (1970). Zwart et al. (1981) described three cutaneous tumours in guinea pigs. Guinea pigs, like rabbits and rats, are becoming more and more popular as pet animals; e.g. they live in 8.2% of Czech pet-keeping households along with dogs, as indicated by a recent survey (Baranyiová et al. 2001). Guinea pigs usually survive for about three years, and when adult or old they suffer, in addition to other diseases, also from neoplasias. Therefore, knowledge about tumours in guinea pigs is increasingly important for both clinical practice and comparative pathology.

Materials and Methods

All but one animal were kept as pets in households. They were both short-haired and long-haired, mainly tricolor, but also black, without any genetic identification.
One guinea pig, suffering from leukosis, originated from a laboratory animal colony and belonged to strain C2BB/R+. Samples submitted for histological examination were predominantly biopsies obtained by surgical extirpation. Only in four cases were they collected in the course of necropsy of euthanized guinea pigs carried out by veterinarians. Samples were fixed in 10% buffered formalin and processed by the common paraffin technique. Histological sections 4 μm thick were, after deparaffinization, stained with haematoxylin and eosin. In indicated cases, Giemsa staining, the PAS reaction, alcian blue at pH 2.5 with the PAS reaction, and Gomori impregnation were performed. The malignant pilomatricoma in the skin and four tumours of the mammary gland were also examined by means of immunohistochemistry. Cytokeratins were detected with monoclonal antibody MNF 116 (DAKO); cytokeratin K18, expressed in monolayer epithelium, was identified by monoclonal antibody DC-10 (EXBIO); smooth muscle actin was determined by antibody HHF35 (DAKO); vimentin by monoclonal antibody clone V9 (DAKO); and S100 protein was identified by a polyclonal antibody (DAKO). Immunohistochemistry was performed on sections from paraffin material by the common immunoperoxidase method. Endogenous peroxidase activity was quenched by 3% peroxide at room temperature for 15 min. For detection of cytokeratins the slides were digested with trypsin (0.1% in 0.1% calcium chloride) for 15 min at 37 °C. The remaining antigens were unmasked by boiling the slides in 0.1 M citrate buffer, pH 6.0, for 10 min. Binding of antibodies was performed in humid chambers at room temperature for 60 min. The reaction was visualized by means of a streptavidin-biotin universal detection system (Immunotech). The sections were counterstained with haematoxylin.

Tumours in the skin

In guinea pig No. 2 there were two tumours: one was on the rump, the second in the shoulder region. In animals No. 3 and 4 the tumours were located in the back and rump regions, respectively, and in animal No. 5 the neoplasia was in the abdominal region. The tumours were 1-5 cm in diameter. On the cut surface of trichoepitheliomas and trichofolliculomas there was macroscopically apparent pasty material. Trichofolliculomas were composed of several primary follicular formations that were cystically dilated and keratinized through a granular cell layer. From the primary follicles, multiple secondary follicles at different stages of maturation radiated outward. Trichoepitheliomas were composed of a random admixture of budding epithelial islands and cystic structures. The islands were composed of basaloid cells with peripheral palisading. In addition to infundibular keratinization there was also matrical keratinization. Epithelial islands were surrounded by fibrous or myxomatous stroma. The malignant pilomatricoma was located on the plantar side of a hind leg. The tumour was dome-shaped, approximately 2 cm in diameter. According to its histological structure the tumour consisted of three parts. One was a well differentiated pilomatricoma, composed of multiple cystic formations of different sizes that were lined predominantly by basaloid keratinocytes. Zones of squamocellular epithelium, mainly without a granular cell layer, were also present. The basaloid cells had scant cytoplasm and ovoid, hyperchromatic nuclei. Mitotic activity was mild to moderate. Among and inside the cells there were many apoptotic bodies. Lumina of cysts contained predominantly masses of keratinized ghost cells, but lamellar keratin was also present.
The second part of the tumour consisted of islands of epithelial cells separated by connective tissue. Many islands were solid; others contained keratin or ghost cells in their central part, and small solid foci or thin bands of epithelial cells grew out from their periphery. In the islands the epithelium was differentiated into smaller cells of matrical nature and larger cells with clear or slightly eosinophilic cytoplasm and ovoid nuclei that contained fine chromatin. Nucleoli in neoplastic cells were apparent and mitotic activity was fairly high. Many apoptotic bodies were either among the epithelial cells or were phagocytized by them. Rudimentary sebaceous glands were attached to the periphery of many epithelial islands. The third part was composed of spindle, basaloid cells arranged in irregular islands and bundles. The cytoplasm of the neoplastic cells was rather basophilic; the nuclei were ovoid with fine chromatin and with one or more conspicuous nucleoli. Mitotic activity was rather high and atypical figures were also present. Among and inside the cells there were many apoptotic bodies. Matrical keratinization was minimal. Multiple rudimentary sebaceous glands or cells were inside the islands of neoplastic cells. Moreover, there were multiple foci of metaplastic lamellar bone tissue (Plate VII, Fig. 1). In this neoplastic tissue there were multiple necrotic foci, and the periphery of the tumour was necrotic with mixed inflammatory cellulation. Using monoclonal antibody MNF 116 (DAKO), positivity for cytokeratins was proved in the majority of epithelial cells in the first part of the tumour. In the second part, predominantly the cells in the outgrowths of the islands were positive (Plate VII, Fig. 2). In the third part, the neoplastic cells were negative with the exception of single small groups of differentiated epithelial cells. Positivity for S100 protein was almost identical with the reaction of the pancytokeratin antibody. Cytokeratin K18, typical of simple epithelium, was not observed in neoplastic cells; only in the sebaceous glands was there mild positivity. Vimentin was present in fibrocytes of the connective tissue and in the cells at the periphery of the metaplastic bone tissue in the third part of the tumour. In accordance with Goldschmidt et al. (1998) the tumour was diagnosed as a malignant pilomatricoma.

Tumours of the mammary gland

In male No. 1, a tumour of the left mammary gland, 2 × 1 × 1 cm in size, had been observed for four months. In male No. 2 the tumour was dome-shaped, 2 cm in diameter. Both tumours were histologically diagnosed as adenocarcinomas. They were arranged in tubular and cystopapillary formations, and in the second case there were also foci of squamous metaplasia and conspicuous inflammatory cellulation. Angioinvasion was not apparent. The tumour from animal No. 2 was examined by means of immunohistochemistry. The reaction with the pancytokeratin antibody (MNF 116, DAKO) was only slight in the columnar epithelium, whereas in the squamous epithelium it was strong. The reaction with the antibody to cytokeratin K18 (Exbio) was positive in the columnar epithelium in tubular and papillary structures (Plate VIII, Fig. 3). Detection of smooth muscle actin revealed nodular proliferations of the myoepithelium. In tubular formations the myoepithelial cells were only scarce. The dimensions of the sarcoma of myoepithelial origin were 2 × 1 × 1 cm. The tumour was rather well circumscribed. Histologically, it consisted of interlacing bundles of spindle-shaped cells.
The nuclei were oval, round or irregular with fine chromatin, and only in some of them were there conspicuous nucleoli. Mitotic figures were rather frequent, and some of them were atypical. The cytoplasm was pale basophilic and reticulated; the cytoplasmic membrane was not well visible. Angioinvasion was not observed. Immunohistochemistry revealed positivity for smooth muscle actin and S100 protein in the neoplastic cells. The reactions for cytokeratins and vimentin were negative. In animal No. 4, a tumourous formation, 2 cm in diameter, was present. Histologically it consisted of tubular glandular structures arranged in islands of different sizes and shapes that were disseminated in loose connective and fat tissues. Tubuli of different diameters with proteinaceous secretion in their lumina prevailed in the adenomatous tissue. The epithelium was predominantly cubic, here and there with signs of an apocrine type of secretion. No mitotic figures and no cytologic abnormalities were observed. Some rudimentary hair follicles with sebaceous glands or cells were also present. The lesion was diagnosed as adenomatous hyperplasia of the mammary gland. Immunohistochemical examination for cytokeratins, using the pancytokeratin antibody MNF 116 (DAKO), revealed positivity only in the epithelium of hair follicles. The epithelium of the tubular formations was negative. The reaction for smooth muscle actin was positive in the majority of the interstitial cells and in some epithelial cells in the adenomatous formations. This reaction revealed well the myoepithelial cells situated at the base of the tubuli. In guinea pig No. 5, the nodule in the mammary gland was approximately 1 cm in diameter. Its histological structure was characteristic of tubular adenoma. The tubules were small in diameter and contained proteinaceous material in the lumina. Results of immunohistochemistry were the same as in the above mentioned case; only the reaction for actin revealed more myoepithelial cells.

Tumours in the subcutaneous tissue

Lipomas. In guinea pig No. 1, the tumour was located on the right side of the thoracic wall and its size was 2 × 1 × 1 cm. In animal No. 3, the tumour was 2-3 cm in diameter and was located in the pubic region. Multiple lipomas were present in guinea pig No. 4. They were located on the ventral side of the body, in the right axilla, and on the right side of the thorax, and they were of different sizes. Unfortunately, the clinician did not determine their size. The liposarcoma was situated in the left groin and its dimensions were 5 × 3 × 3 cm. This tumour reached this size in the course of one month. Histologically, it was a relatively well differentiated liposarcoma. The majority of neoplastic cells contained in the cytoplasm a large fat vacuole surrounded by a narrow rim of cytoplasm. The nuclei contained fine chromatin and anisokaryosis was apparent. Mitotic activity was low (Plate VIII, Fig. 4). The myxoid liposarcoma was located on the dorsal part of the neck. The tumour had been observed for one month and during this period it reached 1 cm in diameter. Histology revealed tumourous tissue consisting of a large quantity of vacuolated amorphous intercellular substance. Neoplastic cells were polymorphous with cytoplasmic processes. The amorphous intercellular substance contained acid glycoproteins. Vacuoles were residues of fat that was extracted during histological processing. Infiltration of the tumourous tissue into the surrounding tissues was well apparent. The diagnosis was made on the basis of the description by Hendrick et al. (1998).
The ossifying fibroma was situated on the ventral side of the thorax. It was 10 cm long and its transverse dimensions were 2 × 1 cm. The histological structure consisted of proliferative collagenous connective tissue with conspicuous bone metaplasia. The neoplastic tissue was not encapsulated but it was well limited and no histological signs of malignancy were apparent.

Tumours of the haemopoietic tissue

In all three cases the neoplastic cells were of lymphocytic nature. Compared to normal lymphocytes they were rather large; the nuclei were round, oval or irregular with indentations or clefts and contained fine chromatin. Nucleoli were apparent only in a small proportion of the cells. The cytoplasm was slightly basophilic, in the form of a narrow rim around the nucleus. Cohesivity among the cells was low. In the first case there was high mitotic activity and in some cells there were atypical mitotic figures. In the remaining two cases mitotic figures were almost absent. Gross pathology and histopathology were similar in all three animals. Necropsy revealed generalized lymphadenopathy, including the mesenteric lymph nodes, moderate splenomegaly and multiple small light foci in many organs including the intestine. In case No. 2 there were also hydrothorax and hydropericardium; guinea pig No. 3 had anasarca, a milky-turbid effusion in the thoracic and abdominal cavities and haemorrhages in the lymph nodes. Histopathological examination revealed diffuse infiltration of the lymph nodes and their perinodal connective tissue with neoplastic lymphocytes. The original structure of the lymph nodes was entirely effaced, or only some remnants of the original structure persisted. In guinea pig No. 1, the spleen also showed diffuse infiltration with the neoplastic cells, and multiple foci of necrosis were present. In animals No. 2 and 3, the original structure of the spleen was preserved but the sinuses and cords of the red pulp were infiltrated with neoplastic cells. In the mucosa-associated lymphoid tissue (MALT) of the intestine there was diffuse infiltration by lymphoma cells. In the subepicardial connective tissue of the heart there were segments of infiltration with leukaemic cells. In these segments the epicardium was absent or damaged, and reparative processes characterized by proliferation of blood capillaries and fibroblasts were observed. The myocardium of all three animals was free of neoplastic cells. In the lungs there were sheaths of neoplastic cells around the blood vessels and mild diffuse infiltration of the pulmonary interstitium. Perivascular sheaths of lymphoma cells were also present in the portal fields of the liver. Besides this, mild infiltration of sinusoids or foci of neoplastic lymphocytes were present in the liver parenchyma. Large or small aggregates of neoplastic cells were also observed in the kidneys, adrenals, and epididymis. The femoral bone marrow was examined histologically only in guinea pig No. 3, but infiltration with neoplastic lymphocytes was not observed. In accordance with the short clinical course of the disease, a high-grade leukosis could be considered. From the cytological point of view, the cells were similar to centroblasts and centrocytes. Immunohistochemistry was not performed because our laboratory did not possess appropriate antibodies.

Tumour of the liver

This tumour was revealed accidentally in one female, five years of age, at necropsy done by a clinician. In the same animal a trichofolliculoma was also found. The tumour submitted for histological examination was globoid, 1.5 cm in diameter.
It consisted of hepatocytes arranged in lobules with a central vein, but portal triads and interlobular biliary ducts were not developed. Based on histological examination, the tumour was diagnosed as hepatocellular adenoma.

Discussion

Blumenthal and Rogers (1965) reported about 140 tumours in guinea pigs. Only in one animal did they record a tumour of the skin, which was diagnosed as epithelioma adenoides cysticum. Kitchen et al. (1975) examined 14 spontaneous tumours in guinea pigs, and in three cases trichoepithelioma was diagnosed. Wagner and Manning (1976) reported 29 trichofolliculomas and one trichoepithelioma. They did not observe any other types of skin tumours. The authors state that skin tumours are the most frequent of all reported tumours in guinea pigs. Of these, the trichofolliculomas are probably the best known. Zwart et al. (1981) reported two trichofolliculomas and one sebaceous gland adenoma. In our collection there were two trichofolliculomas, two trichoepitheliomas and one malignant pilomatricoma. In the available literature no information on pilomatricoma or malignant pilomatricoma in guinea pigs was found. Immunohistochemistry revealed different grades of differentiation of the neoplastic cells in the malignant pilomatricoma. Especially in the second part of the tumour, the distinction between the cells inside and at the periphery of the neoplastic islands was well visible. Poorly differentiated cells in the third part did not express cytokeratins. In three of our five cases, the skin tumours were situated in the dorsal region of the body. A similar predilection also exists in dogs (Gross et al. 1992). Among the five cases of mammary gland tumours there were two adenocarcinomas, and both appeared in males of the middle age category. In one female of a similar age category a malignant tumour of myoepithelial histogenesis was diagnosed. In two young females there were benign processes - one adenoma and one adenomatous hyperplasia of the mammary gland. Blumenthal and Rogers (1965) reported 11 cases of neoplasia in the mammary gland. Three were benign (adenoma and cystadenoma) and eight were adenocarcinomas; three of them were in males. Kitchen et al. (1975) observed two adenocarcinomas, one in a female and one in a male, both of middle age, one adenoma in a male, and one malignant mixed tumour with pulmonary metastases in a 7.5-year-old female. In accordance with the literature and the author's own experience, mammary neoplasias appear only rarely in males of different animal species, including humans. Some species of rodents are an exception; e.g., in old male rats the frequency is relatively high. In 16% of male Wistar rats over 24 months of age, bred in the Czech Republic, the author diagnosed mammary tumours (unpublished data). The negative demonstration of cytokeratins in the mammary adenocarcinoma by means of antibody MNF 116 (DAKO) was surprising. In the author's experience, this antibody is very well usable in the mouse, rat, hamster, dog and cat. Demonstration of cytokeratin K18 in the guinea pigs by means of the EXBIO antibody was successful. Of the six tumours located in the subcutaneous tissue, three were lipomas, one of them multiple. In the case of the liposarcoma, the histopathological diagnosis was in good relation to the growth rate even though mitotic activity was fairly low. Blumenthal and Rogers (1965) diagnosed two fibrolipomas, two neurilemmomas, seven fibrosarcomas, three fibroliposarcomas and one neurogenic sarcoma. Altogether they observed neoplasia in the subcutaneous tissue in 15 cases out of 140 animals. Kitchen et al.
(1975) diagnosed, among 14 tumours in guinea pigs, two lipomas and one schwannoma in the subcutaneous tissue. Wagner and Manning (1976) observed eight cases of fibrolipoma, and one case each of fibroma, fibrosarcoma and lipoma. Percy and Barthold (1993) presented only a list of the subcutaneous tumours - lipoma, fibrosarcoma, fibroma and carcinomas - without further characterization. Ossifying fibroma, a rare tumour, was diagnosed in accordance with the description by Palmer (1993). In the available literature no information about this tumour or about myxoid liposarcoma in guinea pigs was found. Squire et al. (1978) state that skin tumours and tumours in the subcutaneous tissue are rare, but this is not in accordance with the results of other authors; e.g., in the collection of Wagner and Manning (1976) neoplasias of the skin and subcutis amounted to 15.4%. In three males of our collection a generalized leukosis of lymphocytic nature was diagnosed. This neoplasia was clinically manifested by lymphadenopathy and, shortly before the death of the animal, by alteration of the general health state. In our cases the clinical course of the disease was short. In guinea pigs kept as pets it is impossible to determine the frequency of neoplasias, including those of the haemopoietic tissue and lymphomas, but according to veterinary physicians these neoplasias are not rare. The majority of these cases are not exactly diagnosed because the owners have no interest in laboratory examination with regard to the poor prognosis of the disease. Among the 140 neoplasias collected by Blumenthal and Rogers (1965) there were 10 cases of leukosis and 3 cases of malignant lymphoma. Kitchen et al. (1975) observed only one case of histiocytic lymphosarcoma among 14 neoplasias. Wagner and Manning (1976) diagnosed leukaemia in 13, and lymphosarcoma in nine guinea pigs. Squire et al. (1978) state that lymphomas and lymphocytic leukaemias are not uncommon in middle-aged guinea pigs. In laboratory colonies the incidence of lymphomas and leukaemias is related to the strain of guinea pigs. Hong et al. (1980) reviewed spontaneous lymphoblastic leukaemia in several laboratory strains. These authors diagnosed only nine cases of lymphoblastic leukaemia in 4,500 examined guinea pigs. Seven cases occurred in strain 2/N, one in 13/N and one in the Dunkin-Hartley/FD strain. Both gross pathology and histopathology were similar to our cases. Like the cited authors, we did not perform further classification of the leukosis by immunohistochemistry because we did not possess the appropriate antibodies. No liver tumour is presented in the collections of Blumenthal and Rogers (1965) and Kitchen et al. (1975). Wagner and Manning (1976) reported liver cell adenoma, cavernous haemangioma and gallbladder papilloma. The general opinion is that tumours of the liver are rare in guinea pigs. In our collection of neoplasias, no lung tumour was observed, although, according to Squire et al. (1978), they are not rare in the form of adenomas and adenocarcinomas. According to Percy and Barthold (1993), pulmonary tumours form 35%, and tumours of the reproductive organs 25%, of all spontaneous tumours in guinea pigs. Neoplasias of the cardiovascular system are also, according to the literature, not uncommon. In other organs the frequency of tumours is low. In spite of the differences in the literature concerning tumour frequency and classification, there is general agreement that in guinea pigs older than three years neoplasias are not rare.
According to Wagner and Manning (1976), the frequency of neoplasias ranges from 14.4% to 30% in guinea pigs of this age category. The aim of this paper is to contribute to the so far scarce literature concerning tumours in guinea pigs.
2017-08-27T06:07:50.861Z
2003-01-01T00:00:00.000
{ "year": 2003, "sha1": "03ea83d9265c989042f19e5c10e460d913a240d7", "oa_license": "CCBY", "oa_url": "https://actavet.vfu.cz/media/pdf/avb_2003072020221.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "03ea83d9265c989042f19e5c10e460d913a240d7", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
79158342
pes2o/s2orc
v3-fos-license
Influence of MNRI on the Immune Status of Children with Down Syndrome

Introduction

Down syndrome is one of the most common genetic chromosomal abnormalities, occurring in 1 out of 700 to 1000 newborns. The cause of the defect is a change in the pair at the 21st chromosome, which was discovered by Lejeune et al. almost a hundred years after the first description of the syndrome [1]. There are three different syndrome variations: trisomy is the existence of three copies of the 21st chromosome, comprising 95% of cases. Translocation is an attachment of a part of the 21st chromosome to another chromosome; the frequency of this occurrence is 3-4%. Mosaicism, an error in cell division after fertilization which results in an extra 21st chromosome in some cells, occurs in 1-2% of cases. In 90% of cases a child receives the additional 21st chromosome from the mother and in 10% from the father. A woman of fertile age has a 0.54% risk of having a child with Down syndrome, and this risk increases with age (up to 4.2% in women 45 and older). In all cases of chromosomal disorders the clinical characteristics are typical and were described by Down [2]. The definition of the "syndrome" includes various combinations of symptoms and features but always includes two inherent characteristics: intellectual disability and muscle hypotonia. The most common phenotypes are a flat face with a depressed nasal bridge and nape, short and wide neck, underdeveloped ear lobes, epicanthus, slightly opened mouth, noticeably short limbs with underdeveloped and bent digits, and a single line that runs across the palm of the hand called a simian crease. Slow physical development is a typical feature of children with Down syndrome of all ages. This can be combined with abnormal development of the cardiovascular system (50%) and other systems, along with hearing difficulties, nearsightedness and cataracts, hypothyroidism, scoliosis, and infertility. The main disorders are low intellectual development, slow development of abilities and skills, and the disharmonious development of other functions [3,4]. In some cases, children aged 2-4 years with Down syndrome can develop atypical autism [5]. Down syndrome is very rarely accompanied by epileptic symptoms.
There is a theory that one of the key reasons for this intellectual disorder is the increased gene dosage of superoxide dismutase, which is located on the 21st chromosome [6]. This pathology is the subject of multiple studies by specialists of different fields. Various approaches have been applied, including methods of social rehabilitation directed toward neurosensorimotor and oral-motor functions [7], amino-acid metabolic therapy, and the application of neurotrophic factors with neuroprotective and neuroregenerative effects, which increase neuroplasticity and stimulate neurogenesis [8]. Thanks to the modern approaches of special education and care recommended by the National Down Syndrome Society, the average life expectancy has increased and, according to some studies, is approximately 50 years [9]. Many children with Down syndrome often suffer from illnesses, but despite the existing knowledge about the specifics of the immune system of such children [8,10], the impact of different interventions on innate and adaptive immunity remains insufficiently explored.

Abstract

The clinical and immunological characteristics of 49 children with Down syndrome were studied. Thirty-four boys and 15 girls between the ages of zero and six years old were observed. It was revealed that children in the Study Group with Down syndrome developed a greater number of disorders starting at the earliest stages of pregnancy and delivery, such as fetal malnutrition, congenital heart defects, and pathology of vision, than children in the control group (p<0.05). All of the children in the Study Group had allergic reactions and were frequently ill. There was a decrease in the numbers of T-lymphocyte subpopulations (CD45/CD3, CD3/CD4, CD3/CD8), in the absolute number of B-cells (CD45/CD19), and in the IgG pool, indicating a certain deficiency in cell-mediated and humoral immune responses, which provides a basis for frequent diseases, including bacterial diseases. Also noted were an increase in the prevalence of pre-activated cells (CD45/CD25) and NK cells (CD16/CD32/CD56), and a clear increase of IgE (1489.5 ± 467.9 vs. 59.67 ± 11.8 IU/L in the control group, p<0.05), which explains the predisposition of children with Down syndrome to IgE-dependent humoral immune responses and allergic reactions. These specific indicators served as evidence that an evaluation of MNRI as a therapeutic program for improved immunity could be very beneficial. Tests done after two weeks of MNRI therapy showed normalization of a significant number of abnormal indicators of T- and B-lymphocytes, NK cells, immunoglobulin levels, and pro- and anti-inflammatory cytokines (IL-2, IL-4, IL-6, IL-10, IL-12, IL-17, IFN-γ, TNF-α). This defines the purpose of this study: to research the influence of Masgutova Neurosensorimotor Reflex Integration (MNRI) treatment on the immune status of children with Down syndrome (Table 1). The Study Group was divided into age groups: under one year old - 10 children; 1-2 years old - 11 children; 2-3 years old - 12 children; and 3-6 years old - 16 children. Fifty-six (29 boys and 27 girls) healthy children were observed and examined as the Control Group. The study did not include children with acute inflammatory diseases, nor children with chronic eczema or atopic dermatitis during exacerbation.
An analysis of the sensorimotor development of the children with Down syndrome, taking into account age-related differences, was done with the use of standardized diagnostic criteria of neurological development [11], the diagnosis of neuropsychological development in the first three years of life compared with children with special needs [12], the Carolina Curriculum for Infants and Toddlers with Special Needs, and the Battelle scale [13]. The evaluation of the levels of neuropsychological development of the children in the present research was done by rating qualitative and quantitative parameters of the child's development, based on performance or completion of tasks corresponding to the child's age. The rating of neurodevelopmental delay in children in the Study Group and Control Group was based on responses and evaluation of ten basic parameters: visual orientation, auditory orientation, sensory development, emotional and social development, speech comprehension, expressive/active speech, gross-motor coordination, and manual abilities based on skills, playing games, and manipulation of objects. The initial immune status and the dynamics of lymphocyte subpopulations, immunoglobulins and cytokines were studied in all 49 children with Down syndrome - in the Study Group after MNRI Neurosensorimotor Reflex Integration, and in the Control Group, where children did not receive the MNRI Program. The content of A-, G- and M-class immunoglobulins was determined by the radial immunodiffusion method in agar gel [14] using a diagnostic ELISA kit test (Vektor-Best, Novosibirsk, Russia). The IgE test was done with the Total IgE ELISA-BEST kit (Vektor-Best, Novosibirsk) following the ELISA method.

Evaluation of the level of maturity and neurosensorimotor integration of dynamic and postural reflexes

The MNRI program includes a diagnostic and therapeutic assessment procedure [15,16]. The main purpose of the diagnostics was to evaluate the level of maturity and neurosensorimotor integration of the dynamic and postural reflexes. This procedure allows developmental deficiencies in sensorimotor areas and defense mechanisms to be revealed. These deficiencies are considered the result of a delay or poor development of primary sensorimotor patterns - reflexes - or of a stressful influence on them. The assessment of reflexes included such patterns as the Asymmetrical Tonic Neck, Hands Supporting, Bauer Crawling, Leg Cross Flexion-Extension, Spinal Galant and Perez, Moro, Robinson Hand Grasp, and other reflexes. In total, 30 reflexes were tested. The study of the reflex patterns is important and unique for contemporary therapeutic modalities, as the assessment of the reflexes gives a much more exact analysis of the developmental deficits in the primary neurosensorimotor area, as well as in self-regulation and defense mechanisms, which is significant and essential for therapies and corrections. The tests contain five main parameters of evaluation: 1) Sensory-motor circuit - a motor or proprioceptive response to a specific stimulus (coordinated work of sensory and motor neurons); 2) Direction of a response - movement or posture (coherent work of the alpha and gamma motor neurons for movements); 3) Intensity/strength of the response (tone of muscles, ligaments and tendons.
The reaction is graded as normal, dysfunctional, pathological, hyper- or hypo-active, or a-reflexia = absence); 4) Response time/latency (normal/hyper-/hypo-reaction, a-reflexia = absence); 5) Symmetry in response (similar response in circuit, direction, intensity and timing for the right and left sides of the body). The parameters were evaluated on a scale of 0-20 points. The test results were analyzed using criteria offered for statistical analysis by Professor Anna Kreff [17], where 10-11.99 points means that a reflex is at the intermediate stage between dysfunction and functional development. The normal range is 16-17.99 points.

The evaluation of anxiety

The C.D. Spielberger and Yu. L. Khanin method was applied to reveal the patients' level of anxiety [18,19]. This test is at present one of the most often used tools for the psychometric evaluation of anxiety. When looking at people who suffer from anxiety, there are reactive anxiety and anxious personality traits; these were examined using a questionnaire with 40 questions administered to parents of children with Down syndrome.

The statistical processing of the results

This was accomplished with parametric and non-parametric basic statistics, using the Mann-Whitney U-test and the Wilcoxon test with a standard statistical software package for Windows 7 (StatSoft 7.0), Microsoft Excel and WinMDI software. The differences were considered significant at p<0.05.

Results

There was an evaluation of the medical histories of 25 children with Down syndrome and 37 children from the Control Group (not all children had early medical history information) (Table 2). It was known that three mothers of the children with Down syndrome had pregnancies with a risk of miscarriage; two women had a history of miscarriage and stillborn infants. Only one woman from the Control Group had abnormal pregnancies. Seven children with Down syndrome were prematurely born at 35-36 weeks of gestation, which significantly differs from the Control Group. It was also noted that the children with Down syndrome had neonatal hypoxia-ischemia significantly more often than the control group. Thirteen out of 25 children received care in a Neonatal Intensive Care Unit and two children had received mechanical ventilation. Individual evaluation of the basic parameters of physical development, such as height and body weight, revealed essential deviations at birth in the children with Down syndrome in comparison with the control group, which indicates prenatal hypotrophy. The level of motor and mental skills development in children with Down syndrome depended on accompanying disorders. In the children from the first group, 75.5% (37 children) had congenital heart defects that demanded surgery during their first year of life. By the age of two all the children had developed posture problems and flat feet. Chronic allergies such as eczema and atopic dermatitis were displayed in 97%, or 48, of these children. By the age of three they had developed vision pathologies such as hypermetropia, myopia, or astigmatism (Table 3). Children with Down syndrome get sick often, with the average being 5.45 ± 0.64 times per year. Children from the control group were sick no more than 2-3 times a year (average 1.8 ± 0.4, p<0.05). Another difference was in the way the children got sick. The group of children with Down syndrome had acute respiratory tract infections (ARTI) often complicated by sinusitis, otitis, pneumonia, and the development of bronchial obstructive syndrome.
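As a minimal illustration of the group comparison described in the statistics subsection above, the sketch below applies the Mann-Whitney U test (via SciPy) to per-child illness counts; only the approximate group means (about 5.45 episodes per year for the Down syndrome group versus 1.8 for controls) come from the text, and the individual values are invented for illustration.

```python
from scipy import stats

# Hypothetical per-child counts of illness episodes per year; only the group
# means (about 5.45 vs. 1.8 episodes/year) are taken from the text above.
down_syndrome_group = [4, 6, 5, 7, 5, 6, 4, 7, 6, 5]
control_group = [2, 1, 2, 3, 1, 2, 2, 1, 2, 2]

# Non-parametric two-sample comparison, as described in the statistics section.
u_stat, p_value = stats.mannwhitneyu(down_syndrome_group, control_group,
                                     alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
print("Significant at p < 0.05" if p_value < 0.05 else "Not significant at p < 0.05")
```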
All the children needed antibiotic treatment frequently. Only two children from the control group needed antibiotics. This demonstrates the existence of a defect in cell-mediated and humoral immunity and a probable dominance of IgE-dependent allergic reactions. The children with Down syndrome showed different levels of intellectual disability (Table 4). Because it is impossible to evaluate the degree of mental development in children under two years old, their level was not specified. Neurological problems such as ischemic brain injury during labor and residual changes make it harder to develop physiological reflexes and also cause neurodevelopmental deficits. This leads to the late acquisition of skills and aggravates pre-existing psychomotor delay caused by genetics, particularly in cases of hypotrophy of types 2 and 3, together with muscular hypotonia and joint hypermobility. The results of the three age-related methods and the subsequent tests allowed us to offer a developmental prognosis for the adaptation, communication, and socialization of children with Down syndrome, for practical use in child care facilities (Table 5). The study of clinical and biochemical blood test indicators of the children with Down syndrome revealed differences from the Control Group (Table 6). Even though the average level of leukocytes and lymphocytes was within the normal range, it was still lower than in the control group. This may be evidence of a decrease in immune responses. The biochemical indicators were within the age range. There was evidence of an increase in alkaline phosphatase levels, which is typical, according to some researchers, for children with hypotrophy and premature birth who take medication for neurological disorders, possibly due to a liver enzyme disorder [20].

The specifics of the immunological status in children with Down syndrome and the influence of MNRI on effectors of the immune system

The structure of lymphocyte subpopulations and the levels of cytokines and immunoglobulins in blood were studied with the goal of evaluating the specifics of the immune status in 49 children with Down syndrome from age one to three years and older than three. The analysis of lymphocyte subpopulations revealed decreased T-lymphocyte and cytotoxic T-cell counts (CD45/CD3, CD3/CD8), T-helper cells (CD3/CD4) and absolute counts of B-lymphocytes (CD45/CD19), with an increase in the number of pre-activated lymphocytes (CD45/CD25) and natural killer cells (CD16/CD32/CD56), which may be the result of a compensatory reaction of the immune system. All the above specifics, registered in the children with Down syndrome aged from one to three years and older than three, should be considered immunological characteristics of this syndrome. The Neurosensorimotor Reflex Integration therapy (Table 8) led to an increase of the absolute counts of T-lymphocytes (CD45/CD3) in group 7 (over three years old), from 1.69 ± 0.03 to 2.45 ± 0.31 (by 1.5 times), and in group 5 (children with Down syndrome, after MNRI), from 1.29 ± 0.16 to 3.22 ± 0.23 (by 2.5 times). There was also an increase of T-helper levels (CD3/CD4) compared with levels before the MNRI therapy in group 5, from 0.65 ± 0.2 to 1.31 ± 0.2 (by 2 times). The same indicator was increased in group 6 (under one year, after MNRI), from 0.64 ± 0.18 to 1.42 ± 0.3 (by 2.2 times), and in group 7, from 0.49 ± 0.09 to 0.8 ± 0.1 (by 1.63 times). An increase of cytotoxic T-cell numbers, from 0.49 ± 0.09 to 0.95 ± 0.1 (by 1.94 times), was observed in the children over three years old (group 7).
There was also a correction in B-lymphocytes (CD45/CD19), from 0.39 ± 0.06 (group 4) to 0.89 ± 0.07 (group 7) (by 2.9 times), and in the number of activated CD45/CD95 blood cells, from 0.59 ± 0.04 (group 4) to 1.3 ± 0.1 (group 7) (by 2.2 times). Figure 1 presents a cytofluorogram of a healthy boy (2.5 y) vs. a boy with Down syndrome (2.7 y) prior to and after the MNRI treatment, and also shows the similar changes observed in their corresponding groups. Children with Down syndrome that underwent MNRI therapy displayed a correction of absolute cell counts in the cellular immune responses, namely the content of activated CD45/CD3, CD3/CD4, CD3/CD8 and CD45/CD19 lymphocytes in peripheral blood. The evaluation of humoral immunity was based on immunoglobulin IgM, IgG, IgA, and IgE levels. The children with Down syndrome showed an essential decrease in the IgG pool and a tendency toward decreased IgM levels, with IgA levels remaining stable (Table 9). Even though there was no essential immunoglobulin deficiency, the decrease of IgM and IgG levels together with the reduced T- and B-cell numbers predetermined a frequent sickness rate (Table 3) and a low level of immune responses (Table 8). The indicators of IgE (1489.5 ± 467.9 vs. 59.67 ± 11.8 IU/L, p<0.05) in the Study Group (Table 9) turned out to be significantly elevated, which reflects a predisposition to IgE-dependent humoral responses and allergic reactions. According to existing studies, similar changes, such as a decline in cell-mediated immunity in the form of decreasing numbers of T-lymphocyte subpopulations and low IgG and IgM levels in humoral immunity with elevated IgE, are typical for children with organic damage within the central nervous system, which can be caused by chromosomal abnormalities, and for children with allergic disorders [10,21]. The increase of IgM (from 0.75 to 0.97 g/L) and IgG (from 5.59 to 8.3 g/L, p<0.05) levels and the decrease of IgE by almost three times (by 2.9 times, from 1489.5 to 532.5 IU/L, p<0.05) were the results of the MNRI therapy. Reduced concentrations of the pro-inflammatory cytokines IL-2, IL-12, IL-17 and IFN-γ (by 2.1, 3.34, 4.3 and 1.5 times, respectively; p<0.05) were observed (Table 9) in the children with Down syndrome in comparison with the control group (healthy children). There was also an increase in IL-4, IL-6 and TNF-α levels (by 3.4, 9.2 and 1.56 times, respectively; p<0.05). MNRI (Table 9) caused a noticeable immune-corrective effect on the indicators of the cytokine status in the children with Down syndrome. There was an increase in the initially low levels of IL-2 (from 21.15 to 42.2 ± 4.5 pg/ml), IL-12 (from 21.7 to 63.2 pg/ml), IL-17 (from 12.7 to 45.2 pg/ml) and IFN-γ (from 23.7 to 39.7 pg/ml). MNRI also helped to decrease the initially higher levels of the following cytokines: IL-4 (from 78.35 to 31.3 pg/ml), IL-6 (from 218.7 to 51.2 pg/ml), IL-10 (from 323.6 to 111.7 pg/ml), and TNF-α (from 115.7 to 69.3 pg/ml). It is known that an increase in pro-inflammatory cytokine concentrations is part of a chronic inflammatory process, in the same way as an increase of the anti-inflammatory cytokines IL-4 and IL-10 demonstrates poor performance of the immune system in children with Down syndrome who get sick often [21][22][23]. IL-4 induces switching of immunoglobulin synthesis to IgE [24]. The reduced IFN-γ synthesis, in comparison to the group of healthy children, indicates a probable exhaustion of anti-infectious resistance in the children with Down syndrome (Table 9) and a shift of the immune response toward the Th2 pathway.
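The fold changes quoted above can be reproduced directly from the reported group means; the short sketch below recomputes a few of them as a consistency check, using only the values given in the text.

```python
def fold_change(before: float, after: float) -> float:
    """Ratio of the post-MNRI group mean to the pre-MNRI group mean."""
    return after / before

# (before, after) group means taken from the text above.
reported = {
    "CD45/CD3, group 7": (1.69, 2.45),   # quoted as ~1.5 times
    "CD45/CD3, group 5": (1.29, 3.22),   # quoted as ~2.5 times
    "CD3/CD4, group 5":  (0.65, 1.31),   # quoted as ~2 times
    "IgG, g/L":          (5.59, 8.30),   # reported increase, p < 0.05
}

for marker, (before, after) in reported.items():
    print(f"{marker}: {before} -> {after}  ({fold_change(before, after):.2f}x)")
```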
Discussion

Research data prove that children with Down syndrome need a cumulative, specialized evaluation of their neuropsychological development, as they have special needs. All of the children with Down syndrome are predisposed to viral and bacterial infections and allergic reactions, including bronchial obstructive syndrome [4,25]. This reflects the changes in their immune system, as their deficiency of cell-mediated and humoral immune responses is, according to the textbooks, one of the frequent causes of disease [4,8,25,26]. The increase of the pro-inflammatory (IL-6, TNF-α) and anti-inflammatory (IL-4, IL-10) cytokines indicates a low level of immune system function, while the reduced IL-2, IL-12, IL-17 and IFN-γ synthesis reflects poor infection resistance. In this Study Group of children, the average number of diseases in one year was three times higher than in the control group. The respiratory diseases were also more severe and had a larger number of complications in the group with Down syndrome. The predisposition to IgE-dependent humoral immune responses and allergic reactions is consistent with the IgE indicators. Children with Down syndrome who underwent MNRI therapy displayed a correction of absolute cell counts in cellular immune responses, of activated T-cells, helper T-cells, cytotoxic T-cells (CD45/CD3, CD3/CD4, CD3/CD8) and B-lymphocytes (CD45/CD19), of immunoglobulins (IgM, IgG), and of indicators of the cytokine status (IL-6, TNF-α, IL-4, IL-10, IL-2, IL-12, IL-17, IFN-γ) in peripheral blood; the number of exacerbations of respiratory disorders was also reduced. A statistically significant increase in the number of cells expressing differentiation antigens and natural killer cells (CD16) after MNRI was noted. Natural killer cells are the key effectors of innate immunity; they have an important biological role in the mechanisms of immune surveillance (the targeting of tumor cells), in the destruction of virus- and parasite-infected cells, and in the regulation and differentiation of bone marrow cells (they eliminate rapidly proliferating hemopoietic cells) in people with graft-versus-host reaction [27]. However, the question concerning the ability of individual cell populations to produce cytokines in children with Down syndrome is a separate topic and a target for future research; this paper is focused on the study of cytokine production by the common pool of peripheral blood mononuclear cells (PBMC). The results of the research demonstrate that MNRI therapy regulates the production of pro- and anti-inflammatory cytokines and of the regulatory cytokines IL-12 and IFN-γ, and thus positively affects the interaction of the immune, endocrine, and nervous systems and, ultimately, homeostasis. We cannot exclude a direct effect of MNRI on the circulation and the lymphatic system, because our results revealed a significant decrease in muscle tension, hydropic symptoms, vessel spasms, and tissue inflammation after MNRI therapy. We suggest that adding MNRI to the treatment of children with Down syndrome can correct impaired immune system mechanisms, contribute to the resolution of chronic respiratory disease, and enable a longer remission from recurrent disease. However, additional studies of the effects of MNRI therapy on the mechanisms regulating immune, endocrine, and nervous system function in children with Down syndrome are of special scientific interest.
In summary, poor immunological function in children with Down syndrome is one of the symptoms of the syndrome and should be considered when treating accompanying diseases. The Neurosensorimotor Reflex Integration therapy should be recommended as a rehabilitation method for this group of children.
2019-03-16T13:12:48.056Z
2017-01-13T00:00:00.000
{ "year": 2017, "sha1": "bdb9d264f4efd620048073eb4cd8fe5b0996a0ac", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2155-9899.1000483", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "13152759dd8e2231c9045f85c3168b5788e0acea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10781089
pes2o/s2orc
v3-fos-license
Soluble Urokinase‐Type Plasminogen Activator Receptor: A Useful Biomarker for Coronary Artery Disease and Clinical Outcomes?

While coronary artery disease (CAD) is a leading cause of death and disability worldwide, causal pathways for the progression of underlying atheromatous coronary plaque formation remain poorly understood. 1 Numerous clinical characteristics and associated laboratory findings (ie, elevated LDL-C levels) have been firmly linked with an increased likelihood of developing atherosclerosis. Some of these variables are modifiable, such as obesity, smoking, hypercholesterolemia, hypertension, and lack of exercise, while other demographic factors such as age, sex, and family history cannot be changed. 2,3 Risk assessment at the individual patient level, however, should not relate to simply accounting for the number of risk factors but rather should focus upon how to delineate the complex interplay among many established clinical and laboratory risk factors. With numerous recent advances, novel biomarkers have been identified from plasma samples of patients with possible or suspected CAD. One such biomarker that looks to be promising is the inflammatory protein soluble urokinase-type plasminogen activator receptor (suPAR). In the current issue of the Journal of the American Heart Association, Eapen et al 4 present the findings of a study that assessed the association of plasma suPAR with the presence and severity of CAD as well as the role of suPAR as a predictive marker for death and myocardial infarction (MI) (over a mean of 2 years) in 3367 patients undergoing cardiac catheterization. In this study, suPAR levels were associated with both the presence and severity of CAD, and with an increased risk of subsequent death or MI (hazard ratio [HR]: 1.9), cardiac death (HR: 2.62), and MI (HR: 3.20). The addition of suPAR levels to a prediction model that incorporated traditional risk factors modestly improved the discriminatory capabilities of the model (the C statistic changed from 0.72 to 0.74). Urokinase-type plasminogen activator (uPA) and its cell-surface receptor (uPAR) regulate cellular functions linked to adhesion and migration and are involved in tissue remodeling processes. 5 The soluble form (suPAR) is present in the serum and other bodily fluids, and the soluble receptor accounts for 10% to 20% of the total receptor in vascular endothelial and smooth muscle cells. Numerous observational studies have shown systemic levels of suPAR to be associated with an increased risk of cancer, various infectious and inflammatory diseases, rheumatoid arthritis, and hepatic fibrosis. 6 Furthermore, elevated levels of suPAR have been shown to have prognostic value for patients with neoplasms, systemic inflammatory diseases, and those with various infectious diseases. 6 In a Danish population-based cohort (n=2602), elevated baseline suPAR levels were independently associated with an increased likelihood of cardiovascular disease, as well as diabetes, cancer, and all-cause mortality. 7 In this study, elevated suPAR levels appeared to be more strongly related to these outcomes in men compared with women, and in younger compared with older participants. Sehestedt et al showed in a population-based study of patients without a history of cardiovascular disease (n=2038) that elevated suPAR levels were associated with subclinical organ damage as well as cardiovascular events (a composite of cardiovascular death, MI, and stroke) during a median follow-up of more than 10 years. 8 In another population-based study, the prognostic implications of elevated suPAR levels were assessed together with the Framingham risk score, and the study found that suPAR levels improve overall risk prediction when combined with hs-CRP (high-sensitivity C-reactive protein). 9 Besides the aforementioned population-based studies, data from experimental studies also indicate that suPAR from vascular cells is up-regulated by proatherogenic and pro-angiogenic growth factors and cytokines that accumulate in the vessel wall, which suggests a link with suPAR, atherosclerosis, and the subsequent development of symptomatic CAD.
5 These salient observations correlate with the results from an epidemiologic study of patients with acute ST-elevation myocardial infarction treated with primary percutaneous coronary intervention, which demonstrated that elevated suPAR levels were significantly associated with the risk of death or re-infarction. 10 Collectively, these studies have demonstrated the potential of elevated suPAR levels for improving the risk prediction of patients with an increased risk for developing CAD and for those with established, symptomatic CAD. Within this context, the findings of the current work by Eapen et al 4 add further to the developing body of evidence that supports elevated suPAR levels as a novel risk factor for CAD. The aforementioned studies with suPAR levels have either been based on healthy community-based populations with relatively low incidence of CAD, or used a very specific study population of symptomatic CAD patients such as those with STEMI. The findings of the study by Eapen et al can be applied to a much broader population of patients undergoing cardiac catheterization, and as such are representative for a wide spectrum of patients with clinical indications for diagnostic cardiac catheterization. Interestingly, the predictive capabilities of suPAR levels were similar for patients with versus without an acute MI as the indication for catheterization. Despite these findings, there are a number of concerns with the present analysis. First, only baseline suPAR levels were measured and assessed so the prognostic implications of dynamic changes in serial suPAR levels could not be ascertained. Second, while the sample size of the study cohort undergoing cardiac catheterization is large (>3000), the estimated volume of patients undergoing this procedure over a 6-year period across a number of large hospitals would be expected to be much higher. Third, the ascertainment, collection, and verification of non-fatal MI events were not described in great detail, so the internal and external validity of this endpoint is uncertain without such information. Finally, the rationale for including patients with insignificant CAD identified during cardiac catheterization in the study was not discussed, which is puzzling especially given the differential association of suPAR levels with death or MI in those with versus without significant CAD (higher risk with insignificant CAD). As the search for more accurate and reproducible methods of risk stratification for patients with suspected or confirmed CAD continues, the accumulated data on suPAR levels suggest that this laboratory-based biomarker may provide modest, additive benefit for predicting the risk of future cardiovascular events. The next phase in the journey for improved risk stratification will involve integration of this promising biomarker with many other biomarkers and clinical characteristics to develop improved, dynamic models that can delineate risk at multiple time points along the decades-long pathway of disease progression for CAD.
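To make the C statistic comparison discussed above concrete, the hedged sketch below (synthetic data, not the study's, and scikit-learn rather than whatever software the original investigators used) fits a logistic risk model with and without an extra biomarker and compares the resulting in-sample discrimination.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 3000

# Synthetic traditional risk factors and a synthetic "suPAR-like" biomarker.
age = rng.normal(60, 10, n)
ldl = rng.normal(130, 30, n)
biomarker = rng.lognormal(mean=1.0, sigma=0.4, size=n)

# Synthetic binary outcome loosely driven by all three variables.
logit = -9.0 + 0.05 * age + 0.01 * ldl + 0.8 * biomarker
event = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

base_model = np.column_stack([age, ldl])
full_model = np.column_stack([age, ldl, biomarker])

def c_statistic(X, y):
    """In-sample ROC AUC of a logistic model (illustration only)."""
    probs = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    return roc_auc_score(y, probs)

print(f"C statistic, traditional factors only: {c_statistic(base_model, event):.3f}")
print(f"C statistic, plus biomarker:           {c_statistic(full_model, event):.3f}")
```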
Hydroxyapatite nanoparticles derived from mussel shells for in vitro cytotoxicity test and cell viability Hydroxyapatite (HA) nanoparticles derived from mussel shells were prepared using the wet precipitation method and were tested on human mesenchymal and epithelial cells. Shells and HA powder were characterized via X-ray diffraction analysis (XRD) and scanning electron microscopy along with energy dispersive X-ray spectroscopy (SEM/EDX), high resolution transmission electron microscopy (HR-TEM) and Fourier transform infrared spectroscopy (FTIR). The in vitro cytotoxic properties of HA and mussel shells were determined using sulphorhodamine B (SRB) assays for MCF-7 cells (HepG2) and colon (Caco-2) cells. Cell viability tests confirmed the nontoxic effects of synthesized HA and mussel shells on human mesenchymal stem cells (h-MSCs) and epithelial cells. Toxicity values were less than 50% of the cell's validity ratio based on analyses using different concentrations (from 0.01 to 1,000 μg). The results indicate that MSC and epithelial cell attachment and proliferation in the presence of both HA and shell occurred. The proliferation capability was established after 3 and 7 days. SEM images revealed that stem cells and epithelial cells attached to the scaffold indicated full and complete integration between the cells and the material. It seems that due to the ion exchange between bovine serum albumin solutions (BSA) and HA, the FTIR data confirmed an increase in the amide I and amide II bands, which indicates the compatibility of the BSA helix structure. This study sheds light on the importance of merging stem cells and nanomaterials that may lead to improvements in tissue engineering to develop novel treatments for various diseases. Introduction In the medical field, the use of nanotechnology has advanced. The need for inventing new therapeutic biomaterials that can be employed as substrates for cell reproduction, bonding, multiplication and development has recently become essential [1]. For this purpose, hydroxyapatite (n-HA) nanoparticles were created using the wet chemical precipitation method from calcium-rich biowastes (mussel shells) [2,3]. Hydroxyapatite is a biomineral with excellent biocompatibility and osteoconductivity, which makes it a commonly used material for drug delivery, orthopedic intervention, tissue engineering and dental implant applications [4,5]. HA demonstrates good bioactivity and porosity and is an excellent candidate for bone repair and substitution. Similarly, due to its surface properties of ion exchange, low solubility and high water stability, HA has attracted attention as a vital absorbent for removing heavy metals from polluted water [6]. Tissue engineering consists of in vitro construction of tissues for implantation into the body to preserve or expand the forms and/or functions of particular tissues [7]. Presently, with the development of molecular biology and cell culture techniques, the growth behavior of cells seeded on materials has received attention. For cell cultures, human mesenchymal stem cells (h-MSCs) are commonly used when investigating cell viability for bone tissue engineering purposes [8]. In different biotechnological and medical applications, the protein adsorption method is the primary step after the contact of body fluids with solid surfaces. Protein adsorption of implant materials is of great importance for bond formation. 
This adsorption can proceed due to electrostatic and hydrophobic interactions between the protein and material surfaces [16]. Hydroxyapatite works as a hard tissue-implant material that enhances bone growth within the bone tissue. The bone implant interface includes a so-called retention zone, which consists of a protein matrix rich in calcium and phosphorous [17]. Based on the essential role of protein adsorption in tissue engineering, the present work deals with testing mussel shell and HA nanoparticle powder for their in vitro cytotoxic properties using sulforhodamine B (SRB) assays against MCF-7, liver (HepG2) and colon (Caco-2) cells. Hydroxyapatite synthesis A variety of techniques for synthesizing hydroxyapatite have been established. In this study, we used the precipitation method to produce hydroxyapatite from mussel shells. This procedure is a low cost, simple process with high yield and is therefore appropriate for large-scale production. In the present work, we created hydroxyapatite nanoparticles based on mussel shells, which have a high calcium carbonate content (CaCO 3 ) of nearly 98.62% [18]. The mussel shells were ultrasonically cleaned of organic matter using tap water before being dehydrated at room temperature. Washed, dehydrated shells were ground into fine particles using an agate ball mill. Calcium carbonate (29.36 g of powder) was transformed into a calcium nitrate Ca(NO 3 ) 2 solution using concentrated nitric acid (HNO 3 ) while stirring vigorously, and this resulted in the discharge of carbon dioxide (CO 2 ) gas. A stoichiometric quantity of ammonium dihydrogen-phosphate (NH 4 H 2 PO 4 ) solution was gradually added to the Ca(NO 3 ) 2 solution while stirring [19]. The pH of the mixture reached 9 by the addition of 3 M 25% ammonium hydroxide (NH 4 OH). Next, the mixture was stirred magnetically and the precipitate was then dehydrated at 70 C for a few days and was then crushed in a mortar. Finally, the resultant HA powder was formed after calcination at 900 C. Characterization of materials The chemical composition of the mussel shells was determined using Xray fluorescence (XRF) with a modern wavelength dispersive spectrometer (Axios PAN analytical 2005, Netherlands). A thermogravimetric (TGA) analyzer (TGA-50, Shimadzu, Japan) was used to examine the synthetized HA and shells. The TGA curve increased from room temperature to 1,000 C with a warming rate of 10 C/min. The shells and synthesized HA powders (calcined at 900 C) were analyzed by X-ray diffraction (XRD) using a D8 Advanced CuO target (Bruker, Germany)-based generator X-ray diffractometer using CuKα radiation (λ ¼ 1.542 Å). X ray diffraction graphs were recorded in the range of 2θ ¼ 10-70 at a scan speed of 2 C/min. The fresh shell surfaces and synthesized HA were sputtered with a thin gold layer before being examined by scanning electron microscopy (SEM/EDX, model FEJ Quanta 250 Fei, Holland) and the material configurations were chemically analyzed via attached energy-dispersive X-ray spectroscopy (EDX). Likewise, for examining the HA particles, a dilute suspension of HA particles was prepared and dropped onto copper grids sustained with a carbon film. The HA particle shapes and sizes were determined by phasecontrast imaging using high-resolution transmission electron microscopy (HR-TEM, JEOL JXA-840 A, Electron probe microanalyzer, Japan). Infrared spectra were obtained with a Fourier transform infrared (FTIR) spectrophotometer (model FT/IR-6100 type A, USA). 
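As an aside on the synthesis route described above, the "stoichiometric quantity" of ammonium dihydrogen phosphate can be estimated from the CaCO3 charge and the target Ca/P ratio of 1.67. The short calculation below is a back-of-the-envelope sketch using standard molar masses; it is not a figure reported by the authors.

```python
# Rough estimate of the NH4H2PO4 needed for a Ca/P ratio of 1.67 (hydroxyapatite),
# starting from the 29.36 g of CaCO3 mentioned in the synthesis. Illustrative only.
M_CaCO3 = 100.09      # g/mol
M_NH4H2PO4 = 115.03   # g/mol
target_Ca_P = 10 / 6  # stoichiometric Ca/P of Ca10(PO4)6(OH)2, about 1.67

mol_Ca = 29.36 / M_CaCO3          # one Ca per CaCO3
mol_P = mol_Ca / target_Ca_P      # phosphorus required
mass_NH4H2PO4 = mol_P * M_NH4H2PO4

print(f"Ca: {mol_Ca:.3f} mol, P needed: {mol_P:.3f} mol, "
      f"NH4H2PO4: {mass_NH4H2PO4:.1f} g")   # roughly 20 g
```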
FTIR spectra were obtained in the range of 2,200-400 cm À1 using the KBr technique. Cell cultures HA and mussel shell powders were subjected to cellular cytotoxicity evaluations on both normal and cancer cell lines. Doxorubicin (Dox), which is a commonly used chemotherapy medicine used to treat different cancers, was used as a positive control [20]. Dox is an influential iron-chelator that directly binds to DNA via intercalation between the base pairs on the DNA helix [21]. The excessive oxidative stress caused by Dox changes the diversity of cellular molecules [22]. Samples were dissolved in 20% dimethyl sulfoxide (DMSO). The DMSO was diluted so that its final concentration was 1 mg/mL to mitigate its cytotoxic effects at high concentrations [23]. Sulforhodamine B (SRB) assays were used to resolve the in vitro cytotoxic activities of the HA and mussel shells. A mammary gland breast cancer cell line (MCF-7), human hepatocellular carcinoma cell line (HepG-2), and colon carcinoma cell line (Caco-2) were all maintained at the Cell Culture Lab, Egyptian Organization for Biological Products and Vaccines (VACSERA Holding Company) Cairo, Egypt. SRB assays were based on the uptake of the negatively charged pink amino-xanthine dye [24]. Similarly, cytotoxic evaluations of HA and shells for two types of normal cells (e.g., h-MSC and epithelial cells) were carried out. These cell lines were obtained from the VACSERA-Cell Culture Unit, Cairo, Egypt. The reagents RPMI-1640 medium, SRB and DMSO were purchased from the Sigma Company St. Louis, USA. Fetal bovine serum was obtained from GIBCO, UK. The cells were cultured in RPMI-1640 medium with 10% v/v fetal bovine serum. Two types of antibiotics (e.g., penicillin 100 units/mL and streptomycin 100 mg/mL) were used throughout the experiment. The cells were grown in a moistened incubator with a CO 2 atmosphere (5% v/v) at 37 C and were seeded at a density of 1.0 Â 10 4 cells/well in a 96-well plate at 37 C for 48 h in 5% CO2. After incubation, the cells were treated with different concentrations of compounds and were incubated for 3 and 7 days and then compared with untreated control cells. For each individual dose, triplicate wells were prepared. The medium was discarded and fixation was accomplished using 10% trichloroacetic acid (TCA) at 150 mL/well for 1 h at 4 C. The cells were washed three times using distilled water (TCA reduced the SRB binding of protein). The wells were discolored using SRB 70 mL/well for 10 min at room temperature with 0.4% 70 mL/well (kept in a dim place). After staining, washing was accomplished using 1% glacial acetic acid to eliminate the unbound dye (until clear drainage was reached). The plates were air dehydrated for 24 h and the dye was solubilized with 50 mL/well of 10 mM tris base of pH 7.4 for 5 min on a shaker at 1,600 rpm. The optical density of each well was determined at a wavelength 570 nm using an ELISA microplate reader (EXL800 USA)) Center of Genetic Engineering, Faculty of Science, Al-Azhar University, Cairo, Egypt). The relative cell viability percentage was calculated using the following formula: [(A 570 of treated sample/A 570 of untreated sample) 100] Sigma Plot software ver. 12.0 (Systat Software, Inc) was used for calculation of the IC50 values [25]. Protein adsorption Protein adsorption levels on the tested surfaces were determined. Bovine serum albumin (BSA) was used as a representative protein. 
Approximately 0.2 mg of BSA was added to 200 ml of phosphate buffer solution (PBS) (dissolve 8 g of NaCl, 0.2 g of KCl, 1.805 g Na 2 HPO 4 .2H 2 O and 0.30717 g K 2 HPO 4 in 800 ml distilled H 2 O. Adjust pH to 7.4 using HCl. Complete volume to 1 L using distilled H 2 O) at pH 7.4 and 37 C. Four milligrams of each sample was added to 40 ml of the previous mixture. Adsorption was allowed to proceed in an incubator for 1 h at 37 C. Upon adsorption, the samples were carefully rinsed with (PBS) 3 times and with water to remove unbound proteins (non-adsorbed) and salt residues and were then dried at 37 C. Protein adsorption on the surface of the samples was determined by means of FTIR. Characterization of samples The mineral composition of mussel shells was determined using the Xray fluorescence (XRF) technique. It showed that the primary elements were Ca: 55.317% and C: 43.300% and the minor elements were Na: 0.465%, Si: 0.193% and Sr: 0.208%. The results of thermogravimetric analysis (TGA) of HA and mussel shell powders are shown in Figure 1. The mussel shell samples exhibited early dehydration due to moisture followed by decarbonization at~850 C. The total weight loss was 43.819%. The prepared hydroxyapatite tracked the following steps: Drying of the solution at 100 C for a long duration (12 h) removed water; however, the precipitated hydroxyapatite [Ca 10 (PO 4 )6(OH) 2 ] was accompanied by the emission of CO 2 , residual H 2 O and ammonium nitrate (NH 4 NO 3 ). TGA analysis of the HA sample showed dehydration (up to 177.79 C) followed by decarbonation (-CO 2, between 700-900 C) and denitration and dehydroxylation [-NH 4 NO 3 up to 845 C] (Figure 1) [26,27]. The total weight loss was 44.313%. The X-ray diffraction patterns of the shell powder demonstrated the presence of aragonite and a small amount of calcite as shown in Figure (2). X-ray analysis of the prepared HA (sintered at 900 C) showed the formation of pure hydroxyapatite. Furthermore, in the HA diffractogram, sharp peaks and a straightforward baseline indicated that the synthetized HA was finely crystallized [3]. The SEM micrograph of HA, which was sintered at 900 C/2 h, displayed an agglomerate of ultrafine grain shapes of uniform size and solid structure. The HA clusters contained nanocrystals with diameters from 80 to 200 nm ( Figure 3A). TEM micrographs revealed additional details of the nanohydroxyapatite clusters and clear nanoparticles between 10 and 40 nm ( Figure 3B). The results of EDX microanalysis of mussel shell and HA powders sintered at 900 C are shown in Figure 4. EDX microanalysis of the mussel shell powder demonstrated that the constituent was CaCO 3 (i.e., Ca, C and O), which agrees with the XRF results. EDX for HA showed that the constituent was HA (i.e., Ca, P and O) and that the Ca/P atomic ratio was 1.68, which is almost identical to the stoichiometric ratio of 1.67 for pure HA. Ramesh et al., reported that the Ca/P ratio is a significant parameter that determines the properties of hydroxyapatite bioceramics [28]. For calcium-deficient HA, the Ca/P ratios are less than 1.67, while for calcium-rich HA, the Ca/P ratios are greater than 1.67. All characteristic bands of HA were observed in the FTIR spectrum, as presented in Figure 5. The spectrum showed absorption bands at 1,000-1,100 (υ3-asymmetric stretching vibration) and 577-603 (υ4asymmetric bending vibration), which were attributed to phosphate (PO 4 3À ) absorption. 
The absorption band at 1,390 cm À1 resulted from the vibration of the CO 3 2À group and indicated the presence of carbonated hydroxyapatite (c-HA). The absorption band at 3,570 cm À1 was assigned to the stretching mode of hydrogen-bonded OHions [29]. The in vitro cytotoxic activities of the HA and shells toward human cancer cells SRB assays have been used to explore cytotoxicity in cell-based studies. The SRB assays evaluated the cytotoxicity of HA and shells for three cancer-cell lines (e.g., MCF-7, HepG2 and Caco-2) over a Table 1 and Figure 7. Figure 6 shows the morphological observations of cells that were obtained after the samples were exposed to two types of normal human cells and three types of cancer cells. No dead cells were detected in the well plates containing the HA or shells, which confirmed that both samples had no negative effects in the natural cell environment. After 3 days of incubation, the growth of normal and cancer cells was estimated and the number of cells increased for seven days (Table 1). Cell viability assays revealed that IC50 (e.g., minimum concentration required to induce 50% of cell death after exposure to samples) could not be attained. For different concentrations of HA and shells, the toxicity values were less than 50% of the cells' validity ratio. These results may be attributed to the nontoxic properties of the tested materials ( Figure 7). Moreover, two types of normal cells (e.g., MSCs and epithelial), which were seeded on tissue cultures, showed significant differences in cell numbers after 3 and 7 days of incubation Figure (8A and B). The number of listed MSCs cells after 3 days of incubation of HA and mussel shells powders increased to 1.9Â10 3 AE 2.3 and 1.86Â10 3 AE 2.7, respectively, compared to the control, which exhibited 2.83Â10 3 AE 1.9 cells. In contrast, the numbers of inserted epithelial cells after 3 days of incubation of HA nanoparticles and shells also increased to 2.7Â10 3 AE 2.7 and 2.5Â10 3 AE 3.0, respectively, compared to the control, which exhibited 3.82Â10 3 AE 2.5 cells. The number of registered MSCs cells after 7 days of incubation of HA and shells increased to 2.82Â10 3 AE 2.7 and 2.765Â10 3 AE 1.8, respectively, compared to the control, which exhibited 2.98Â10 3 AE 3.1 cells; the number of recorded epithelial cells after 7 days of incubation of samples HA and shells increased to 3.6Â10 3 AE 2.9 and 3.4Â10 3 AE 3.4, respectively, compared to the control, which exhibited 4.1Â10 3 AE 2.6 cells ( Table 2). These results suggest that attachment and proliferation of MSCs and epithelial cells in the presence of both HA and shell occurred. The scanning electron microscopy (SEM) images show that the stem cells and epithelial cells require the HA and mussel shell, indicating complete association between the cells and tested materials ( Figure 9). Interestingly, the improvement of stem cell biology and nanotechnology has enhanced the opportunities for tissue engineering by increasing the attachment, proliferation and differentiation of stem cells in vitro. Accordingly, the modified nanomaterials require further in vivo studies to aid in the improvements of organ transplantation [30]. HA samples can be a profoundly osteoconductive biomaterial of clinical importance. HA stimulates the expansion of bone arrangement on the embedded surface, and newly synthesized bone is found in coordinated contact with the HA layer [31,32,33,34]. 
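As a rough illustration of how the relative-viability percentages discussed above are obtained from the SRB assay readings, the snippet below applies the formula given in the methods (A570 of treated wells divided by A570 of untreated wells, times 100) across a concentration series and checks whether an IC50 (viability falling below 50%) is reached. The absorbance values are invented for illustration and are not the study's measurements.

```python
# Illustrative SRB viability calculation: viability (%) = 100 * A570_treated / A570_control.
# The absorbance values below are made up; the study fitted IC50 values with SigmaPlot.
import numpy as np

concentrations_ug = np.array([0.01, 0.1, 1, 10, 100, 1000])
a570_control = 0.82                                             # untreated wells (hypothetical mean)
a570_treated = np.array([0.81, 0.80, 0.79, 0.78, 0.76, 0.74])   # hypothetical means per dose

viability_pct = 100 * a570_treated / a570_control
for c, v in zip(concentrations_ug, viability_pct):
    print(f"{c:>8} ug/mL: {v:5.1f} % viability")

# IC50 is only defined if some concentration pushes viability below 50%.
if (viability_pct < 50).any():
    print("IC50 reachable within the tested range")
else:
    print("IC50 not reached (consistent with a non-toxic material)")
```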
MSCs can differentiate into osteoblasts, and they can differentiate to a variety of other cells such as chondrocytes, osteoblasts and adipocytes [35]. The aim of the present study was to determine whether clusters of HA nanoparticles and mussel shell would improve adhesion of MSCs and epithelial over the two different time periods (e.g., 3 and 7 days) and to compare the results to the two immersion periods. It was found that MSCs and epithelial cells bound more to HA and mussel shell during an immersion time of 7 days when compared with a 3 day immersion time, and these results agree with those of other investigators. However, we observed that HA and mussel shells are not sufficient to promote full cell spreading, as is shown in Figure (8). Growth of mesenchymal stem cells and epithelial cells also appeared naturally without the addition of any external growth factors (GF). Figure 6. Bar chart representing the differences in IC 50 for different tumor and normal cells (e.g., MCF-7, HePG2, Caco-2, MSCs and epithelial cells) after incubation with HA (calcined at 900 C/2 h) sample and mussel shells. FT-IR analysis of adsorbed BSA FT-IR is sensitive to secondary protein structures. Proteins can bond with HA by electrostatic forces between the calcium ions and carboxyl groups and phosphate ions and amino groups [36]. The Amide I band is related to the C¼O stretching of the peptide bonds [37]. The process of protein adsorption onto HA began with the formation of an anion layer such as H 2 PO 4 À3 and OH À on the HA surface, which was followed by a dispersive double electrical layer around the surface [38]. At that point, protein atoms were adsorbed through specific electrostatic interactions between charged groups of proteins and Ca 2þ and hydrogen holding (irregular interactions), which may occur between neighboring protein atoms with polar surfaces [39]. For acidic proteins such as BSA, the carboxyl group is adsorbed to Ca 2þ through the displacement of PO 4 À3 . At the HA surface, the modification of BSA would be changed by NH 3 , which may form hydrogen bonds with the phosphate group in HA [40]. The adsorption action of BSA on HA nanoparticles is mainly due to electrostatic interactions between the Ca 2þ cations and PO -3 4 anions of HA nanoparticles with COOanions and NH 4 þ cations of the BSA protein [39,41]. The characteristic vibrational groups of BSA adsorbed on HA nanoparticles are shown in Figure 10. The characteristic PO 4 3band appeared in a range from 1,000 to 1,100 cm À1 . The bands at 1,636 cm À1 and 1,474 cm À1 were attributed to the C¼O stretching vibration of amide I and bending vibration of N-H of amide II, respectively. After protein adsorption, splitting was detected at 1,049 and 925 cm À1 , which could be attributed to the P¼O and P-O stretching bands of the PO 4 3group, respectively [42]. It seems that, due to the ion exchange between BSA and HA, there was an increase in the amide I band recorded at 1,636 cm À1 and an increase in the amide II band (1, 474 cm À1 ), which prove the compatibility of the helix structure of BSA [43]. The typical band of the amide group indicated the bonding behavior of BSA and HA nanoparticles. This bonding might be due to electrostatic interactions, which are dominant compared to the van der Waals forces related to the hydrophilic behavior. Hereafter, this distinction supports the hydrophobicity of the protein molecules [44]. Conclusions Hydroxyapatite nanoparticles were synthesized beginning with mussel shells. 
The samples were characterized by XRD, SEM/EDX, TEM and FTIR analyses, which demonstrated the presence of hydroxyapatite as nanoparticle-sized clusters. The tested materials showed only weak cytotoxicity toward all of the solid tumor cell lines, and HA and mussel shells were safe for human mesenchymal stem cells (h-MSCs) and epithelial cells. MSCs and epithelial cells bound to HA and mussel shells to a greater extent after an immersion time of 7 days than after 3 days. The protein adsorption results showed that BSA had a strong binding ability to the HA surface. The development of stem cell biology and nanotechnology has improved tissue engineering by increasing the attachment, proliferation and differentiation of stem cells in vitro; hence, modified nanomaterials require additional in vivo studies before they can be used to improve organ transplantation.

Author contribution statement
Gehan T. El-Bassyouni, Samah S. Eldera, Sayed H. Kenawy, Esmat M.A. Hamzawy: Conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper.

Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Figure 10. FTIR spectra of HA (calcined at 900 °C/2 h) samples before and after protein adsorption.
Association polymorphism of guanine nucleotide–binding protein β3 subunit (GNB3) C825T and insertion/deletion of the angiotensin-converting enzyme (ACE) gene with peripartum cardiomyopathy Introduction Peripartum cardiomyopathy (PPCM) is a potentially life-threatening pregnancy-related heart disease. Genetic roles such as gene polymorphisms may relate to the etiology of PPCM. This study analyzes the association between single nucleotide gene polymorphism (SNP) guanine nucleotide–binding protein beta-3 subunit (GNB3) C825T and insertion/deletion (I/D) of the angiotensin-converting enzyme (ACE) gene with the incidence of PPCM. Methods An analytic observational study with a case–control design was conducted at the Integrated Cardiac Service Center of Dr. Soetomo General Hospital, Surabaya, Indonesia. PPCM patients of the case and control groups were enrolled. Baseline characteristic data were collected and blood samples were analyzed for SNP in the GNB3 C825T gene and for I/D in the ACE gene by using the polymerase chain reaction, restriction fragment length polymorphism, and Sanger sequencing. We also assessed ACE levels among different ACE genotypes using a sandwich-ELISA test. Results A total of 100 patients were included in this study, with 34 PPCM cases and 66 controls. There were significant differences in GNB3 TT and TC genotypes in the case group compared with that in the control group (TT: 35.3% vs. 10.6%, p = 0.003; TC: 41.2% vs. 62.5%, p = 0.022). The TT genotype increased the risk of PPCM by 4.6-fold. There was also a significant difference in the ACE DD genotype in the case group compared with that in the control group (26.5% vs. 9.1%, p = 0.021). DD genotypes increased the risk of PPCM by 3.6-fold. ACE levels were significantly higher in the DD genotype group than in the ID and II genotype groups (4,356.88 ± 232.44 pg/mL vs. 3,980.91 ± 77.79 pg/mL vs. 3,679.94 ± 325.77 pg/mL, p < 0.001). Conclusion The TT genotype of GNB3 and the DD genotype of the ACE are likely to increase the risk of PPCM. Therefore, these polymorphisms may be predisposing risk factors for PPCM incidence. ACE levels were significantly higher in the DD genotype group, which certainly had clinical implications for the management of PPCM patients in the administration of ACE inhibitors as one of the therapy options. Association polymorphism of guanine nucleotide-binding protein β3 subunit (GNB3) C825T and insertion/deletion of the angiotensin-converting enzyme (ACE) gene with peripartum cardiomyopathy 1 . Introduction Maternal mortality ratio (MMR) is an indicator that describes national maternal health and welfare. Global MMR reached 214 per 100,000 live births in 2016 (1). In developing countries, MMR is 20 times higher than in developed countries (1). In 2012, Indonesia's MMR was 359 per 100,000 live births (2). An evaluation of the 2015 Millennium Development Goals revealed that 38 mothers in Indonesia died from diseases or complications related to pregnancy and childbirth every day (MMR: 305 per 100,000 live births). The causes of maternal death are mainly bleeding, infection, and cardiovascular disease, including hypertension during pregnancy and heart failure (3). Peripartum cardiomyopathy (PPCM) is a potentially lifethreatening pregnancy-related disease (4). PPCM is characterized by left ventricle (LV) dysfunction in the late peripartum period or in the first months of postpartum without a known history of heart disease (5). 
To date, there are many hypotheses about the etiology of PPCM, but none is considered as the primary explanation for all cases. PPCM is known to have a pathogenesis that involves many factors such as maternal autoimmune response, inflammation, oxidative stress, imbalance of cardiac proapoptotic factors and anti-angiogenic factors, micronutrient deficiencies, and genetic causes (6). Due to the complexity of the etiology, genetic factor, especially gene polymorphism, may play an essential role (7). Two major PPCM registries, Investigation of Pregnancy Associated Cardiomyopathy (IPAC) (8) and EURObservational Research Programme (EORP) (9), reported various incidence rates of PPCM among countries in different regions, which may be related to genetic predisposition in different races. The guanine nucleotide-binding protein subunit β3 (GNB3) gene encodes the β3 subunit of G protein (Gβ3) located on chromosome 12p13 that consists of 11 exons and 10 introns. The single nucleotide polymorphism (SNP) of GNB3 at exon 10, C825T, is associated with an increased prevalence and poor outcome of PPCM in individuals of African progeny (10). T allele polymorphisms in the GNB3 gene are associated with increased intracellular signaling, increased risk of hypertension, low plasma renin, and cardiac remodeling (10). To date, there are no studies on GNB3 C825T gene polymorphism, especially in Asian populations. The role of the insertion/deletion (I/D) 287-bp sequence inside intron 16 of the angiotensin-converting enzyme (ACE) gene and ACE activity in the etiology, pathogenesis, prognosis, and clinical implications of the cardiovascular system has been extensively studied. The deletion polymorphism of the ACE allele is associated with increased levels of ACE (11). In addition, the ACE DD genotype is positively correlated with specific cardiomyopathy such as ischemic cardiomyopathy (ICM), hypertrophic cardiomyopathy (HCM), alcoholic cardiomyopathy, and idiopathic dilated cardiomyopathy (IDCM) (12, 13). IDCM with low ejection fraction (EF) has a phenotype similar to PPCM, suggesting that there may be an association between the I/D of the ACE gene and PPCM. This study aims to determine the association between the SNP of the GNB3 C825T gene and the I/D of the ACE gene in women with PPCM. Study design An analytic observational study with a case-control study was conducted at the Integrated Cardiac Service Center, at Dr. Soetomo General Hospital and Institute of Tropical Diseases (ITD) Laboratory of Airlangga University, Surabaya, Indonesia from January 2021 to June 2022. The case group consisted of all women diagnosed with PPCM, while the control group comprised women without PPCM or a history of PPCM. The study was approved by the Dr. Soetomo General Hospital Surabaya Ethics Committee (0151/KEPK/II/2021). All procedures were approved by the relevant ethics committees and written informed consent was obtained from all study participants. Patients and controls All women who were 18-40 years' old and who underwent examination and treatment at the Polyclinic Integrated Cardiac Service Center of Dr. Soetomo General Hospital Surabaya were included. PPCM was diagnosed according to the criteria of the European Society of Cardiology (ESC) Working Group on Peripartum Cardiology in 2010 (14). 
The criteria were: (1) Heart failure symptoms that appeared in the last 1 month of pregnancy to 5 months' postpartum; (2) No history and other identifiable causes of heart failure; and (3) An left ventricular ejection fraction (LVEF) <45% based on echocardiography. All PPCM patients with previous history of heart failure, a history of coronavirus disease 2019 (COVID-19) infection complicated with any heart problems, and incomplete data were excluded. Controls were women with a history of pregnancy who had never been diagnosed with PPCM. Detection of GNB3 C825T gene polymorphisms and ACE gene I/D Patients selected on the basis of inclusion and exclusion criteria and signed a letter of informed consent to participate in the study. A 5 mL sample of cubital venous blood was collected in an ethylenediaminetetraacetic acid (EDTA) tube and rested for about 30 min. The tube was then centrifuged at 300 rpm for 10 min to separate the plasma. DNA extraction was carried out using the QiaAMP DNA Blood Mini Kit (Qiagen, Hilden, Germany) and stored at −20°C. DNA content was quantified by spectrophotometric absorption (Nanodrop Spectrophotometer, BioLab, Scoresby, VIC, Australia). All DNA samples were blindtested. The GNB3 C825T polymorphism was examined according to the procedure stipulated by Siffert et al. (15). We used 5′ TGACCCACTTGCCACCCGTGC 3′ as a sense primer and 5′ GCAGCAGCCAGGGCTGGC 3′ as an antisense primer. The polymerase chain reaction (PCR) was run using a Promega Germany), at 37°C in a water bath for 3 h and 80°C for 5 min. DNA fragments were obtained after the restriction enzyme was electrophoresed on a 2.5% agarose gel and stained with ethidium bromide and the BenchTop 1,000 bp DNA Ladder (Promega, Madison, WI, United States). The DNA fragments were imaged under ultraviolet (UV). The T allele was not digested by using the restriction endonuclease enzyme. It corresponded to the cDNA fragments of 256 bp (TT genotype), whereas the C allele corresponded to 152 bp and 104 bp (CC genotype). Thus, the CT genotype produced three bands, 256 bp, 152 bp, and 104 bp ( Figure 1A). Three representative samples of each genotype (TT, TC, and CC) were confirmed with DNA sequencing using the Sanger method. DNA sequencing for the GNB3 C825T polymorphism was performed by using the ABI Prism 24-capillary 3,500xL Genetic Analyzer to confirm the PCR result. The sequence analysis of the DNA is shown in Figure 1B. The results were compared with the reference strains of the sequences that were published in GenBank using the Clone Manager Professional version 9.0. The I/D ACE polymorphism was examined as described by Rigat et al. (16). To amplify the ACE, a pair of primers 5′ CTGGAGACCACTCCCATCCTTTCT 3′ and an antisense primer 5′ GATGTGGCCATCTTCGTCAGA 3′ were used. The PCR amplification was processed as described for GNB3. The PCR product is a 490 bp fragment in the presence of the insertion (I) allele and a 190 bp fragment in the presence of the deletion (D) allele. Thus, each DNA sample revealed one of three possible patterns after electrophoresis: a 490 bp band (genotype II), a 190 bp band (genotype DD), or both 490 bp and 190 bp bands (genotype ID) ( Figure 1C). ACE ELISA test Plasma from the PPCM group sample was separated after centrifugation and stored at −20°C until analysis. To determine the level of the ACE between different alleles and the genotype of ACE I/D, a sandwich-ELISA test was conducted using a Human Angiotensin-Converting Enzyme 1 ELISA (Elabscience, Hubei, China). 
The resulting optical density was read by using the BioRad ELISA Reader at 450 nm. Statistical analysis The data obtained were processed using SPSS (IBM Statistics 20.0) for Windows. The Hardy-Weinberg Equilibrium (HWE) was used to estimate the number of heterozygous and homozygous variant carriers in non-evolving populations on the basis of allele frequency. The χ 2 test for the degree of freedom (dF) = 1 and a p-value = 0.05 were used to determine whether the observed genotypic distribution for GNB3 and ACE agreed with the HWE. The genotypes and alleles of GNB3 and ACE between the PPCM and the control groups were assessed using the χ 2 test or Fisher's exact test according to the obtained data. Odds ratios (ORs) with a 95% confidence interval (95% CI) were determined to find the association of gene polymorphism intensity with disease. The normality of data was assessed using the Kolmogorov-Smirnov test. An independent Student's t-test or a Mann-Whitney test was used for numerical data analysis of the two groups. For numerical data with >2 groups, analysis was performed using one-way ANOVA or the Kruskal-Wallis test as appropriate. Univariate and multivariate logistic regression analyses were done to determine whether gene polymorphism was the independent predictor of PPCM. Differences with pvalues <0.05 were considered statistically significant. Characteristics of patients A total of 100 patients were included in the study, of which 34 were PPCM patients and 66 controls, and the characteristics of the case and control group patients are presented in Table 1. The mean BMI was higher in the PPCM group than in the control group (29.02 vs. 26.96, p = 0.037). The number of patients who had preeclampsia or eclampsia was significantly higher in the PPCM group than in the control group (44.1% vs. 9.1%, p < 0.001). The PPCM group had a higher mean systolic blood pressure than the control group (143.26 mmHg vs. 131.09 mmHg, p = 0.007). A total of 91.2% of patients with PPCM were diagnosed antepartum ( Table 1). We did not find any deviations from the HWE in our population study. GNB3 and ACE gene polymorphisms and the risk of PPCM Of the total number of samples, the genotypes of GNB3 were mostly TC (n = 57, 57%), followed by CC (n = 24, 24%) and TT (n = 19, 19%). There were significant differences in the frequency of TT and TC genotypes of the GNB3 gene between the PPCM and the control groups ( Table 2). Individuals with the TT genotype had a higher odds ratio of approximately 4.59 to have PPCM compared with those with the TC and CC genotypes (OR: 4.59; 95% CI: 1.60-13.17, p = 0.003) ( Table 2). Although the frequency of T allele was higher in the PPCM group, the difference was not statistically significant compared with that in the control group (55.9% vs. 43.2%, p = 0.088). Our data indicated that 15 subjects had the DD genotype, 27 had the ID genotype, and 58 the II genotype of the ACE. The DD genotype was significantly higher in the PPCM group than in the control group (26.5% vs. 9.1%), and the presence of the DD genotype was associated with a higher risk of PPCM compared with individuals with the ID and II genotypes (OR: 3.60; 95% CI: 1.15-11.18, p = 0.021) ( Table 2). The frequency of D allele was higher in the PPCM group, but the difference was not statistically significant compared with that in the control group (36.8% vs. 24.2%, p = 0.063) ( Table 2). Univariate and multivariate logistic linear regression analyses were done on various variables, as presented in Table 3. 
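To make the genotype-association statistics concrete, the sketch below reproduces the type of 2×2 analysis described in the statistical-analysis section for the GNB3 TT genotype (TT versus TC/CC in cases versus controls). The cell counts are reconstructed from the percentages reported in the text (about 12 of 34 cases and 7 of 66 controls carrying TT) and are therefore approximate; the calculation itself is a standard chi-square test and a log-odds-ratio confidence interval, not the authors' exact SPSS output.

```python
# 2x2 association test for GNB3 TT vs non-TT in PPCM cases vs controls.
# Counts are approximated from the reported percentages (illustrative).
import numpy as np
from scipy.stats import chi2_contingency

#                  TT   non-TT
table = np.array([[12, 22],    # PPCM cases (n = 34)
                  [ 7, 59]])   # controls   (n = 66)

chi2, p, dof, _ = chi2_contingency(table, correction=False)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")  # about 4.6 (1.6-13.2)
```

The resulting odds ratio of roughly 4.6 with a 95% CI of about 1.6 to 13.2 and p of about 0.003 agrees with the values reported in Table 2, which suggests the reconstructed counts are close to the actual ones.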
The analysis from Table 3 showed that GNB3 TT and preeclampsia/eclampsia were independent predictors for PPCM. Subanalysis of the GNB3 and ACE genotypes in the PPCM group In PPCM patients, the frequency of the GNB3 genotype was significantly different and was based on BMI and left ventricular internal diameter in diastole (LVIDd) ( Table 4). The BMI was higher in the TT genotype of GNB3 than in the TC and CC genotypes (31.73 vs. 27.54 kg/m 2 , p = 0.018). The mean of LVIDd was also higher in the TT genotype group than in the TC and CC groups (5.39 ± 0.80 vs. 4.86 cm ± 0.64 cm, p = 0.041). Hypertension and a history of preeclampsia/eclampsia were more frequent among those with the ACE DD genotype than among those with the ID and II genotypes; 44.4% vs. 8.0%, p = 0.031 and 77.8% vs. 32.0%, p = 0.025, respectively ( Table 4). Comparison of ACE levels based on ACE genotypes among PPCM patients The ACE levels were measured among 30 of 34 PPCM patients because four subjects received ACE inhibitors that may cause bias. Our data revealed that the ACE levels in DD, ID, and II were 4,356.88, 3,980.91, and 3,679.94 pg/mL, respectively. The ACE levels were significantly higher in the DD genotype group than in the ID and II genotype groups, p < 0.001. The ACE levels in individuals with the ID genotype were also higher than in individuals with the II genotype (p = 0.020) ( Figure 2). Discussion Despite the growing recognition of genetic predispositions as a risk factor for the development of PPCM, little is known about the impact of genomic background on racial differences. Our study was the first one to determine whether there was an association of the SNP GNB3 C825T gene and the incidence of PPCM in an Asian population. Although the frequency of the TT genotype was relatively rare, this genotype increased the risk of PPCM 4.6 times compared with the other CC and TC genotypes. Multivariate analysis also showed that GNB3 TT appeared to be an independent predictor for PPCM. A study conducted in North America with 97 subjects (30% were Blacks, 65% were Caucasian, and 5% were others), which assessed the relationship between different GNB3 genotypic backgrounds and their impact on improvement in LV remodeling in PPCM, found that the GNB3 TT genotype was more common in Blacks (10). GNB3 TT was also associated with a much higher incidence of PPCM and lower LVEF recovery (10). Interestingly, our study found that the TC genotype (57%) was the most frequent genotype and may appear to afford protection from PPCM. However, there is limited evidence of the association between GNB3 TC polymorphism and PPCM incidence. The exact mechanism by which the GNB3 polymorphism contributes to the development of PPCM has not been fully understood, but it is thought to involve alterations in the G protein-coupled receptor (GPCR) signaling pathway (17). The GNB3 protein plays a key role in the signaling pathways through GPCR that control the contraction and relaxation of heart muscle cells (17,18). In addition, our study found that the most common genotypes of ACE were II (58%), followed by ID (27%) and DD (15%). The DD genotype increased the risk of PPCM 3.6 times compared with the other genotypes (II and ID). This result was similar to that of a study by Yaqoob et al., which found that the DD genotype was possibly a predisposing and independent risk factor for the pathophysiology of PPCM in the Kashmiri Indian population (13). 
The frequency of the DD genotype and D allele was also significantly higher in the PPCM population (13). The DD genotype was associated with poorer left ventricle systolic function in terms of ejection fraction, dimension, and left ventricle end-systolic and end-diastolic volumes (13). Preeclampsia and eclampsia appear to be independent predictors for PPCM, as revealed by multivariate analysis. The pathophysiology of preeclampsia and eclampsia related to PPCM is still poorly understood, but several hypotheses suggest that hemodynamic stress caused by preeclampsia can contribute to the worsening of this condition (19). The EORP study states that the global preeclampsia incidence rate as a comorbid PPCM is 25%. A further investigation reveals that the rate of incidence in the Asia Pacific population reaches 46% (9). A meta-analysis of 22 observational studies with a total of 979 samples also reveals that 22% of PPCM patients develop preeclampsia/eclampsia (20). No previous studies have reported an association of the GNB3 and ACE polymorphisms with hypertension and preeclampsia, specifically in the PPCM population. In our subanalysis, we found that the percentage of PPCM patients with hypertension (44.4% vs. 8%, p = 0.032) and a history of preeclampsia (77.8% vs. 32%, p = 0.025) was higher in the ACE DD genotype group than in the other genotype groups. A review study found that previous studies reported conflicting results, but the majority found that the DD genotype was associated with the incidence of hypertension and preeclampsia in pregnancy. A study of 121 pregnant women with a gestational age of 27-40 weeks reported a higher frequency of the DD genotype in the essential hypertension group than in the control group (21). A metaanalysis of 40 studies with a total of 3,977 cases and 7,065 controls concluded that the DD genotype increased the risk of preeclampsia compared with the DD and ID genotypes (52% vs. 17%), and D allele increased the risk of preeclampsia 1.29 times more than I allele (22). Obesity is a risk factor for PPCM. Hemodynamic alterations, apoptosis, and inflammation are three potential causes of pathogenesis. Obesity causes excessive levels of circulating fat to alter blood volume, which increases stroke volume and stresses the LV wall, which, in turn, cause eccentric LV hypertrophy and, eventually, LV dysfunction (23). However, no previous studies have reported an association of the GNB3 and ACE gene polymorphisms with obesity in the PPCM population. A study of Caucasian, Chinese, and Black populations reported that the TT genotype had a higher mean BMI than other genotypes (TC and CC) (24). In our study, similar results were obtained, where the Frontiers in Cardiovascular Medicine mean BMI in the TT genotype was significantly higher than that in the GNB3 TC and CC genotypes. Although the mean LVEF in the GNB3 TT and ACE DD genotype groups has been reported to be lower in previous studies (10,13), our data suggested no significant difference. Our results are in line with those of other studies, which showed no statistically significant difference in LVEF in the ACE DD genotype, although the mean LVEF was lower in the ACE DD genotype (13). However, the IPAC study reported that PPCM patients with the GNB3 TT genotype showed a lower LVEF at the initial stage of the study (10). After follow-up, LVEF was found to be significantly lower for GNB3 TT subjects at 6 months (p = 0.007) and 12 months (p < 0.001). 
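Several of the comparisons above (BMI, LVIDd, LVEF and the ACE levels across DD, ID and II carriers) are three-group comparisons of the kind handled with one-way ANOVA in the statistical-analysis section. The sketch below runs that test on simulated ACE-level data centered on the reported group means; the group sizes and individual values are invented, so only the structure of the analysis, not the exact F statistic, is meaningful.

```python
# One-way ANOVA across ACE I/D genotype groups on simulated ACE levels (pg/mL).
# Group means follow the reported values; group sizes and individual values are invented.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
ace_dd = rng.normal(4357, 250, size=8)    # DD carriers (hypothetical n and spread)
ace_id = rng.normal(3981, 250, size=9)    # ID carriers
ace_ii = rng.normal(3680, 250, size=13)   # II carriers

f_stat, p_value = f_oneway(ace_dd, ace_id, ace_ii)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value here mirrors the reported finding that ACE levels differ by genotype.
```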
The geometry and thickness of the heart wall, especially the LV, are associated with cardiovascular risk. Our study found that GNB3 genotypes were associated with LVIDd, while the ACE was not. In contrast to our finding, a previous study by Poch et al., reported a lower mean of LVIDd in the GNB3 CC genotype group than in the TT and CT genotype groups in the essential hypertension population (25). Another study by Mahmood et al., also reported that the GNB3 TT genotype had a strong association with the incidence of LV hypertrophy (26). Similar to our study, a previous study found no difference in mean LVIDd among different genotypes of the ACE gene (13). Comparison of ACE levels in the PPCM group A study found that the I/D polymorphism of the ACE influenced the level of serum ACE in a healthy population (16). Observations of genetic polymorphisms may explain the interindividual variability in plasma ACE. In our study, the highest mean ACE levels were found in the DD group, followed by ID and II. This is in line with a study that found that the ACE levels increased twice as high in the DD genotype group as in the ID genotype group (16). Another study of pregnant women with hypertension in India reported significantly higher ACE levels in the DD genotype group than in the ID and DD genotype groups (27). The I/D of the ACE gene affects not only plasma ACE levels but also tissue ACE (28). Higher ACE levels would increase angiotensin II, which affects various systems, including the cardiovascular system (29). In the PPCM group, elevated ACE levels in the ACE DD genotype may be associated with the incidence of hypertension. An awareness on the part of clinicians about the existence of differences in ACE levels in each genotype will certainly have implications for the management of PPCM patients, one of which is the administration of ACE inhibitors as a therapy option in PPCM patients, especially those with the ACE DD genotype. Limitations The synergistic relationship between the GNB3 and the ACE gene could not be assessed in this study. In the PPCM group, only two patients had polymorphisms of both genes, while the control group had none. In this study, we did not analyze the levels of improvement in LVEF function in PPCM patients, which could have provided a better understanding of the issue. The reason for this lack of analysis was that we could not ask these patients to visit the hospital for an echocardiography examination because of the restrictions imposed by the COVID-19 pandemic, which has thrown many facets of the healthcare system out of gear. Conclusion This is a study determining the association of GNB3 C825T and ACE gene polymorphisms and the incidence of PPCM in an Asian population. The presence of the GNB3 TT genotype increases the risk of PPCM 4.6 times, while the ACE DD genotype potentially increases the risk of PPCM by 3.6 times. A subanalysis on PPCM patients found that those with TT had a higher BMI and LVIDd and also that those with the DD genotype were more prone to have hypertension and preeclampsia/eclampsia. ACE levels were significantly higher in the DD genotype group than in the ID and II genotype groups. These findings highlight the importance of gene polymorphisms in PPCM and, therefore, might be used as predictors and management strategies in the future. Data availability statement The data analyzed in this study are subject to the following licenses/restrictions: The datasets used are available from the corresponding author on reasonable request. 
Requests to access these datasets should be directed to the corresponding author. Ethics statement The studies involving human participants were reviewed and approved by the Dr. Soetomo General Hospital Surabaya Ethics Committee. The patients/participants provided their written informed consent to participate in this study. to thank the staff at Dr. Soetomo Hospital for their cooperation in this study.
How Pregabalin (Lyrica) Administration as a Drug of Abuse Can Affect the Reproductive Health of Male Wistar Rats Drug addiction is a massive problem that persists in diverse populations all over the world. Our work aimed to illustrate the correlation between drug addiction and reproductive function in an animal model, and the potential impact of pregabalin (Lyrica) intake on the spermatozoa formation process and sex hormone levels. In this study, we used 14 healthy adult rats allocated into two groups (n = 7): a control group and a treated group orally administered pregabalin (23.7 mg/kg) for 30 consecutive days. Reproductive hormones, spermatozoa parameters (motility and morphology), lipid peroxidation (MDA), nitric oxide, total antioxidant activity, DNA damage and histopathology were assessed. The results revealed that pregabalin addiction had a harmful impact on the hypothalamus-pituitary-gonad axis of male rats: it hindered hormone secretion, raised reactive oxygen species, affected antioxidant enzymes, triggered DNA damage and distorted testicular histology. In conclusion, we found that Lyrica addiction adversely affects male reproductive health and subsequently fertility. INTRODUCTION The genitourinary organs can be influenced by drugs, so studies investigating the impacts of drugs on this system are of great significance. Several experimental works have shown that oral opium can diminish luteinizing hormone (LH), dihydrotestosterone, and follicle-stimulating hormone (FSH), and can cause hypogonadism in 89% of users. The prevalence of erectile dysfunction and diminished libido is also significantly higher in drug abusers 1. Pregabalin is an alkylated analogue of γ-aminobutyric acid (GABA) and is structurally related to gabapentin 2; in addition, GABA-mimetic properties have been demonstrated in rats 3. A study by Grosshans et al. found that illicit use of pregabalin was common among opioid-addicted persons 4,5. Lyrica is not a narcotic or an opioid; it belongs to a class of medicines named anticonvulsants. Various factors can disrupt the progression of spermatogenesis and reduce sperm quality and quantity. Effective spermatogenesis depends on a multifaceted interplay between endocrine, paracrine and autocrine components 6,7, and many inherited illnesses can impede the spermatozoa formation mechanism. Among the non-genetic causes of male infertility, oxidative stress (OS) resulting from exaggerated generation of reactive oxygen species (ROS) is perhaps the most widely recognized. ROS are required for capacitation, the acrosome reaction and, ultimately, fertilization; however, impaired removal and overproduction of ROS can initiate DNA damage and compromise the membrane integrity of spermatozoa, thereby reducing fertility capacity 7,8. Excessive generation of ROS can damage sperm cells. The spermatozoa plasma membrane contains large quantities of unsaturated fatty acids and is therefore vulnerable to peroxidative damage; lipid peroxidation destroys the lipid matrix of the spermatozoa membrane and impairs sperm motility 9,10. The present study aimed to investigate the potential harmful effect of pregabalin (Lyrica) addiction on male reproductive health and to illustrate the mechanism through which the drug can cause infertility and affect sperm function.
Animals 14 male Wistar with the weight range of 200-220 g was obtained from National organization for drug control and research (NODCAR). The animals were reserved in an animal house for one week under well-ordered laboratory settings at temperature 22°C ± 2°C, humidity (50%-60%) and 12 h in light and 12 h in dark with permitted admission to water and food. Experimental design The practical work was agreed by the Cairo University, Faculty of Science Institutional Animal Care and Use Committee (IACUC) (Egypt), (CU/I/S/3/20). The rats were allocated into two groups as follows: group one (control), received distilled water and group two (treated), and administrated orally with abuse dose of pregabalin (Lryrica) 23.7 mg/kg for one month according to World Health Organization 2018 11 for four weeks day after day. Sperm collection After one month, the rats were sacrificed using an intraperitoneal injection of sodium pentobarbital (100 mg/kg). Then, testes, epididymis and seminal vesicle were collected, washed with saline, dried and weighted. The cauda part of the epididymis was used to evaluate sperm parameters. In each animal, right testes fixed in neutral 10% formalin for histopathological analysis and left one freeze at -20 C for further investigation. Cauda epididymis were minced, and protected in a warm Petri dish containing 5 ml physiological saline solution (Ph 7.4) at 37°C. The spermatozoa were permitted to disperse into the buffer 12 . Sperm count For calculating the sperm, 500 μL of the sperm suspension was diluted (dilution of 1:10) with formaldehyde fixative (10% formalin in phosphate buffered saline). 10 μL from the diluted solution was placed into a hemocytometer. Hemocytometer was situated in a moist chamber for 7 min. Hemocytometer was placed on the microscope stage. Then, the sperms at the four corners of the central square were counted 13 . Sperm morphology Eosin/ nigrosin stain was operated to evaluate spermatozoa morphology. One drop of eosin/nigrosin was added to the suspension and slightly mixed. The slides were then examined under the light microscope at ×400. A total of 300 spermatozoa were evaluated on each slide to find the anomalies of the head and tail 14 . Sperm high motility The sperm motility was measured through the light microscope at ×400. One drop of sperm suspension was set on a glass slide. The number of the sperms with rapid progressive forward movement was computed and the percentages of high motile sperms were achieved 15 . Hormones profile Blood were harvested from the rats by cardiac puncture in clean tube, centrifuged at 3000 rpm for 15 min to obtain sera and then were kept at -20˚C for hormones analysis. Quantitative determination of Testosterone (T), folliclestimulating hormone (FSH) and luteinizing hormone (LH) level were measured using ELISA kits specific for rats (SunLong Biotech Co., LTD). Redox status Reactive oxygen species evaluated using reagent kits obtained from Bio Diagnostic (Egypt). Freeze testes minced in PBS (1:10 ml). The suspension was centrifuged at 3000 rpm for 20 minutes, and the supernatant was used to test MDA level by means of Satoh 16 , NO concentration based on Montgomery and Dymock 17 and total antioxidant capacity as designated by Koracevic et al. 18 . Comet assay DNA injury was estimated using the single-cell gel electrophoresis technique 19 . The DNA damage were calculated through comet score software, in which the DNA % in the tail, the tail length and the tail moment might be acquired. 
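For clarity on the comet-assay read-outs listed above (tail DNA %, tail length and tail moment), the snippet below shows one common way these are derived from the head and tail fluorescence intensities measured for each nucleus. The definitions follow widely used comet-scoring conventions, and the intensity values are invented for illustration rather than taken from the study.

```python
# Illustrative comet-assay metrics for a single nucleus (made-up intensities).
# Tail DNA % = tail intensity / (head + tail intensity) * 100
# (Extent) tail moment = tail length * tail DNA fraction  -- one common definition.
head_intensity = 8200.0   # arbitrary fluorescence units (hypothetical)
tail_intensity = 1800.0   # hypothetical
tail_length_um = 35.0     # hypothetical, in micrometres

tail_dna_pct = 100 * tail_intensity / (head_intensity + tail_intensity)
tail_moment = tail_length_um * (tail_dna_pct / 100)

print(f"Tail DNA: {tail_dna_pct:.1f} %")
print(f"Tail length: {tail_length_um:.1f} um")
print(f"Tail moment: {tail_moment:.1f}")
```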
Histological examination
For light microscopical investigation, the testes of both groups were fixed by immersion in 10% neutral formalin. All samples were transferred to 70% ethanol, dehydrated in an ascending ethanol series, cleared in xylol and embedded in paraffin. Sections 5 μm thick were cut using a rotary microtome. Histological staining was performed with Ehrlich's hematoxylin counterstained with aqueous eosin 20. Microscopical examination and photographing of the histological sections were carried out with an AmScope microscope.

Statistical analysis
Statistical analysis was performed with the independent t-test to obtain the mean and standard error of the mean of all examined parameters using the Statistical Package for the Social Sciences (SPSS). Differences between the control and treated groups were considered significant at the 0.05 level (P < 0.05).

Effect of Lyrica on tissue weights
The absolute and relative reproductive organ weights of the different groups are shown in Table 1. Lyrica exposure triggered a reduction in testis, epididymis and seminal vesicle weights.

Effect of Lyrica on spermatozoa
Lyrica exposure resulted in a non-significant decrease in sperm motility with a significant rise in total abnormalities, such as banana head, abnormal tail and hookless sperm, compared with the control group (Table 4 & Fig. 2). Values are expressed as mean ± SEM. The statistical differences were analyzed by independent samples t-test; * = P < 0.05 compared with control.

Effect of Lyrica on reproductive hormones
Lyrica administration produced a significant imbalance in reproductive hormone levels in comparison with the control group. Testosterone, LH and FSH levels were decreased in the treated rats compared with the control group. The treated group also showed a non-significant increase in acid phosphatase (ACP) in comparison with the control group (Table 3). Values are expressed as mean ± SEM. The statistical differences were analyzed by independent samples t-test; * = P < 0.05 compared with control.

Effect of Lyrica on redox status
Lyrica exposure increased the malondialdehyde (MDA) level non-significantly in comparison with the control group, whereas Lyrica addiction caused a significant rise in nitric oxide (NO) content. Lyrica intake also elevated total antioxidant activity non-significantly compared with the control group in all examined tissues (Table 4). Values are expressed as mean ± SEM. The statistical differences were analyzed by independent samples t-test; * = P < 0.05 compared with control.

Effect of Lyrica on DNA damage
We used the comet assay (single-cell gel electrophoresis) to inspect the influence of Lyrica on DNA by evaluating the tail DNA %, tail length and tail moment in the examined tissue (Fig. 3 and Table 5). The treated group exposed to Lyrica displayed significant changes in the comet parameters compared with the control rats, except for tail DNA %, which showed a non-significant increase compared with the control group.

Histopathological results
Histological investigation of the testis of a control rat stained with H&E presented the normal form of seminiferous tubules and interstitial tissue. Each tubule was lined with stratified epithelium (germinal cells) and supportive Sertoli cells. The germinal cells were organized in several layers (spermatogenic cells). Leydig cells were found within the stromal connective tissue.
Animals given Lyrica displayed various histopathological alterations. The seminiferous tubules had irregular outlines, and degenerated tubules were noticed with an absence of spermatozoa. Most of the seminiferous tubules revealed injured and disordered spermatogenic cells, and the impaired germ cells were exfoliated into the lumen. Furthermore, the absence of spermatozoa was clearly documented. Many spermatocytes appeared with pyknotic nuclei, and numerous vacuoles were observed in the seminiferous tubules. Some seminiferous tubules appeared ruptured (Fig. 4).
DISCUSSION
In this study, we investigated the relationship between drug abuse and sexual performance, as well as the potential influence of pregabalin (Lyrica) intake on spermatozoa formation and reproductive hormone activity. Drug addiction is considered a massive problem that persists in diverse populations all over the world. Lyrica is not a narcotic or an opioid; it belongs to a class of medicines called anticonvulsants. The results showed that pregabalin (Lyrica) reduced sperm motility and normal sperm morphology, increased testicular DNA damage, and induced histopathological alterations in testicular tissue. These harmful impacts were accompanied by provoked oxidative stress in testicular tissue and changes in the serum levels of hormones that play a part in the progression of spermatogenesis. Some findings have implied that drug misuse destructively influences male fertility, with effects on the hypothalamus-pituitary-gonadal axis, spermatogenesis, sperm function, Leydig cells, Sertoli cells and testicular tissue 7,21. Organ weights are sensitive indicators of toxicity after chemical exposure 22, and tissue weight changes mirror disturbances of reproductive system functions 23. In our study, significant decreases in absolute and relative testis and epididymis weights after pregabalin (Lyrica) intake were not detected. Testicular tissue weight is related to the number of Sertoli cells and to spermatozoa formation; consequently, testis size is an indication of the number of germinal cells in the testis 12. This might be a result of the creation of free radicals and ROS by pregabalin (Lyrica) and the effect of these detrimental elements on susceptible testicular cells. Spermatozoa motility and morphology are pointers used to estimate semen quality and testicular function and to verify reproductive toxicity 23. A decline in sperm motility and abnormalities in sperm morphology are significant indicators of chemically induced infertility 24,25. Regulatory authorities such as the EPA, FDA, OECD, WHO, and ICH underline the significance of sperm head, midpiece and tail abnormalities; in particular, twisted and bent/spiral tails are related to infertility 25,26. FSH, LH, and testosterone play parts in the maintenance of male reproductive functions, and so the determination of hormone levels is essential in reproductive toxicity reports 25. It is established that LH and FSH are released from the anterior pituitary under the control of hypothalamic gonadotropin-releasing hormone. LH stimulates testosterone secretion from Leydig cells, and testosterone is required for secondary sexual characteristics and spermatogenesis. FSH controls spermatozoa production in Sertoli cells 25,27. The hypothalamic-pituitary-gonadal axis can be influenced by several agents; chemicals, including drugs, can lessen fertility and cause infertility by disrupting the normal function of this axis 28.
It has been shown that antiepileptic drugs affect the hypothalamic-pituitary-gonadal axis and trigger reproductive malfunction 29,30. In our study, decreased serum FSH, LH, and testosterone levels were detected after drug administration, whereas previous studies reported that LH and FSH levels were not changed 25,31. In addition, a reduction in testosterone level is associated with spermatogenic cell damage and disruption of spermatogenesis, and subsequently leads to disordered reproductive capacity 25,27,28. Opioids act on the hypothalamic-pituitary axis by hindering GnRH discharge, which suppresses FSH and LH release, thereby causing spermatogenesis impairment and declining testosterone concentrations 32. Vuong et al. 33 reported opioid-induced hypogonadism. Other articles indicate that sperm concentration and quality are impaired in opioid users: increased DNA fragmentation levels and lowered catalase (CAT) and superoxide dismutase (SOD) activity were seen in addicted men compared with healthy persons 7,34. Pregabalin decreases serotonin discharge in the synaptic cleft; since serotonin is the required mediator in melatonin production, this causes a reduction in melatonin. Melatonin is an antioxidant and has an important role in defending testicular tissue against damage induced by ROS. Low melatonin activity causes a reduction in testosterone synthesis and secretion by lowering glutathione peroxidase (GPx) enzyme activity 35,36. A prior paper reported that pituitary gonadotropins and serum FSH, LH, and PRL hormone levels were reduced with two doses of pregabalin; these observations are in agreement with our study and may be due to an effect of the antiepileptics (AEDs) at the gonadal level 37,38. Pregabalin (PGB) has the ability to inhibit central nervous system activity, which controls physiological and behavioral outcomes correlated with normal reproductive performance through hormonal signals 38,39. Harden and Pennell 40 indicated that AEDs may interact with the gonads. Since PGB blocks calcium channels, analogous neurochemical mechanisms may be involved in the interaction of these drugs with the synthesis of hypothalamic neurohormones such as gonadotropin-releasing hormone (GnRH). PGB may therefore affect the hypothalamus, because GnRH discharge from neurons depends on the depolarization-induced influx of extracellular calcium; by blocking or inhibiting Ca2+ influx, PGB may alter GnRH pulsations in the hypothalamus and trigger a reduction in LH and FSH levels 38,41. Findings have demonstrated that testosterone clearly affects the Sertoli cells. Sertoli cells provide nutrients for dividing spermatogenic cells and produce many growth factors and transport proteins that have a critical role in cell division and spermatozoa formation 36. Concerning the role of testosterone in spermatogenesis, a reduction in the secretion of this hormone induces a decrease in sperm density. Mammalian spermatozoa contain a high quantity of unsaturated fatty acids, which are the main substrates for oxidation. Under normal conditions, antioxidant mechanisms operate in reproductive tissues and inhibit oxidative injury in different gonadal cells and developing sperm 42. Previous reports revealed that free radical generation directly disturbs sperm proliferation, activity and fertility 36. The hypothalamus releases luteinizing hormone-releasing hormone (LHRH), which initiates the secretion of LH from the pituitary gland and then the release of testosterone from the testes.
We did not identify whether the decreased LH was due to pituitary malfunction or to decreased LHRH release in the treated group; it is probable that LHRH, LH, and testosterone were all disturbed by PGB 1,43. Oxidative stress-induced lipid peroxidation affects semen quality and induces male infertility 25,44,45. Oxidative stress causes a reduction in intracellular ATP levels and the generation of apoptotic factors, resulting in mitochondrial membrane disturbance, disordered protein phosphorylation, a rise in membrane permeability, the formation of spermicidal molecules and impairment of the acrosome membrane, and so reduces semen quality, concentration, motility and morphology 44. Moreover, a deficient antioxidant defense mechanism contributes to sperm vulnerability 25,46. In our study, reduced total antioxidant activity and increased MDA and NO levels were observed following PGB administration, signifying that PGB induced oxidative stress in testicular tissue. Nitric oxide has an essential role in sperm physiology and has many undesirable effects on the hypothalamic-pituitary-testicular axis 47. There is a correlation between nitric oxide and the sperm acrosome and tail. Nitric oxide can reduce sperm motility by decreasing the ATP level 48, and it can damage the sperm mitochondrial membrane, thus releasing cytochrome c, initiating the caspase cascade and promoting apoptosis 12,15. In the study of Daniel et al., severe morphologic variations of the sperm were seen on microscopic analysis of the semen of addicts, which supports the outcomes of the present study 12,49. Our results showed significant initiation of oxidative stress after pregabalin exposure. This is in accordance with the results of Kamel 50, who explored the effect of chronic oral pregabalin administration for 90 days on rat brains and reported a significant decline in SOD and CAT in the pregabalin-administered groups 51. Sperm DNA integrity is a clue to sperm reproductive power 52, so the structural integrity of the DNA was examined to estimate sperm function. The neutral comet assay is a simple, sensitive and accurate method for clarifying double-strand-break DNA damage in human sperm 25,53. In this study, tail length, DNA percentage in the tail and olive tail moment were recorded to evaluate the genotoxic impact of PGB. These parameters are particularly crucial for determining the severity of DNA damage after exposure to a genotoxic environmental agent 54, and many studies have stated that the tail moment is the clearest value for evaluating DNA damage 25,55. Thus, the PGB-generated DNA damage seen in our study may be due to oxidative stress promoting DNA and histone modification. Additionally, sperm head morphology is an indirect mark of mutagenic effects induced by chemical exposure. A preceding report found a clear association between sperm head abnormalities and DNA damage, and it was argued that imperfections in sperm head morphology develop from damage to the genetic material 25,56. Here, the sperm head abnormalities boosted by PGB intake reflected DNA damage.
CONCLUSION
Based on the preceding findings, we suggest that Lyrica may create an imbalance in redox status by producing reactive oxygen and nitrogen species with low antioxidant power, initiating cell destruction through interaction with the lipids of cell membranes, nucleic acids and proteins. This influences the cell signaling pathways that control programmed cell death (apoptosis and necrosis) and cell proliferation, finally causing alterations in reproductive hormone levels and histopathology.
2021-10-18T15:07:34.080Z
2021-09-15T00:00:00.000
{ "year": 2021, "sha1": "656c864604b18d8b20333e1eb74ea5f9e479d4fc", "oa_license": null, "oa_url": "https://doi.org/10.47583/ijpsrr.2021.v70i01.003", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "1f02cd1445c3c01bbd768857558d233824f6e44f", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261987250
pes2o/s2orc
v3-fos-license
Utilizing of the Statistical Analysis for Evaluation of the Properties of Green Sand Mould A statistical approach was used to investigate the effect of the independent factors of mixing time, compactability and bentonite percentage on the dependent sand mould properties of permeability, compressive strength and tensile strength. Using a statistical method saves time in estimating the dependent variables that characterize the moulding properties of green sand and in identifying the optimal levels of each factor that produce the desired results. The results indicate that these factors and their interactions affect the different properties of green sand to varying degrees. The experiments yielded a range of permeability values, with the highest and lowest numbers being 125 and 84. The sand exhibited maximum tensile and compressive strengths of 0.33 N/cm2 and 17.67 N/cm2, and minimum tensile and compressive strengths of 0.14 N/cm2 and 9.32 N/cm2. These results suggest that the moulding factors and their interactions play an important role in determining the properties of the green sand. ANOVA was used to assess the effect of the various factors on the different properties of the green sand. The results suggest that the compactability factor has a significant effect on permeability, the mixing time and bentonite factors have a significant effect on the compressive strength, and the mixing time and compactability factors have a significant impact on the tensile strength, at a significance level lower than 5%. Neither the mixing time nor the amount of bentonite used in the green sand mix has a significant impact on its permeability, the compactability of the green sand does not have a significant effect on the compressive strength, and the bentonite used in the green sand mix does not have a significant impact on its tensile strength.
Introduction Sand moulds characterized by ease of the mould making process, and ability of recycle of the moulding sand [1,2].Silica sands is the most refractory material used as a moulding mixture for casting production, because materials and patterns are relatively cheap [3,4].The natural sand and the synthetic sands are two kinds of sands [5].Sand moulding process involves mixing of sand, bentonite, and water to make green sand moulds [6].Water content affects the properties of green sand such as permeability, bulk density, shear strength, dry and green compression strength [7].The grain size and shape of sand, type, water content, the mixing process efficiency, cohesive forces of bentonite binder, and adhesive forces between, pattern and moulding sand are necessary parameters that determine green sand properties [8,9].Mixing process of the green sand improves density of the sand mould by connecting of sand and bentonite particles together [10].Variation of water content and bentonite content has played an important role in determining the properties of bentonite-bonded green sand.Green Compression strength of the sand mould is an important property in mould production, and depends on sand grain size, shape and distribution, water content, kind and bentonite content [11].Increase of water content enhance compression strength of the green sand until constant value, afterward decrease in this property as a result of increase in water content.Fine grain size of the sand and high value of bentonite content support increases green compression strength, while coarse grains contribute in reduce of green compression strength of the sand [12,13].Ability of the green sand to be mould require at least 10.34 kpa of the compression strength [6].Design Of Experiments (DOE) is important tool used for determining the responses depend on inserted input-output data.Response Surface Methodology (RSM) is an experimental design and statistical tool, during which dependent properties respond into change in one or more independent variables [14,15].Abdulamer [16] used Taguchi method to determine the effective moulding parameters for improving green sand mould properties.The study was conducted through an experimental design to investigate the effects of several molding variables on the compression strength, tensile strength, and permeability of green sand.The molding variables considered in this study are compactability percentage, mixing time, and bentonite content. The study utilized a factorial design of experiments, which is a statistical approach that allows for the efficient exploration of multiple factors and their interactions.In this design, the molding variables are varied at different levels, and the response variables are measured for each combination of factor levels. Design of Experiments The sand samples were prepared using a mixing machine to combine sand, bentonite, and water in the appropriate ratios based on a design of experiments that mentioned in tables 1 and 2. 
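As a sketch of how such a factorial design can be laid out programmatically, the Python snippet below builds the full grid of factor-level combinations. The factor names follow the study (mixing time, compactability, bentonite), but the specific level values are placeholders, since the level settings of tables 1 and 2 are not reproduced here.

# Sketch: enumerate a full-factorial design of experiments (factor levels are illustrative placeholders).
from itertools import product

factors = {
    "mixing_time_s":      [2, 4, 6, 8],       # assumed four levels
    "compactability_pct": [34, 39, 44, 49],   # assumed four levels
    "bentonite_pct":      [5, 7, 9, 11],      # assumed four levels
}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))   # 4 x 4 x 4 = 64 candidate runs; a fractional subset would normally be tested in practice
print(runs[0])     # {'mixing_time_s': 2, 'compactability_pct': 34, 'bentonite_pct': 5}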
It is important to ensure that the sand samples are prepared accurately and consistently in order to obtain reliable results from the testing processes.This may involve careful measurement and monitoring of the mixing process, as well as ensuring that the sand, bentonite, and water are of a consistent quantity.Once the sand samples have been prepared, they can be subjected to various testing processes to evaluate their properties in relation to different moulding factors.The tests are permeability, compressive strength, and tensile strength that can affect the quality and performance of sand moulds in different applications. A standard test procedure used in foundries for measuring the properties of green sand.The first step of the procedure involves filling a tube with a measured mass of prepared mixed sand.The tube has a diameter of 50mm and a height of 100mm.Next, the sand is compacted by subjecting it to three strikes from a ramming machine.This compaction process is important because it ensures that the sand is of a consistent density and will produce reliable test results.After the sand has been compacted, it is removed from the tube and formed into standard sand samples.These samples have a diameter of 50mm and a height of 50mm.These sand samples are then used to measure permeability and mechanical properties of green sand.The permeability is measured using a permeability gauge as shown in figure which determines how easily air can pass through the sand.The mechanical properties are measured using a Universal Sand Strength Testing Machine (USSM) as shown in figure which measures the strength and deformation characteristics of the sand.This test procedure is an important tool for ensuring that green sand used in foundries is of a consistent quality and will produce reliable castings. Results and Discussion Design of experiments (DOE) is a statistical method used to evaluate the effect of mixing time, compactability and bentonite percentage on the properties of the green sand to identify the optimal levels of each factor to achieve the desired sand properties.Figures 3-5 show impact of different molding parameters on certain properties of a moulding material.Several experiments were conducted, and the properties being studied are permeability number, compressive strength, and tensile strength respectively.The results obtained from the experiments showed that the moulding factors and their interactions with each other have a varying impact on the properties of the sand mould. The second experiment resulted in the lowest permeability value, which means that the green sand was least permeable in that experiment compared to the others.On the other hand, the maximum permeability property was observed in experiment number 13, indicating that the green sand was most permeable in that experiment compared to the others. The compressive strength property was measured for each experiment, experiment number 7 and 16 resulted in the optimum values of compressive strength, indicating that the green sand had the highest compressive strength in those experiments compared to the others.Conversely, experiment number 1 had the lowest value of compressive strength, suggesting that the green sand had the weakest compressive strength in that experiment compared to the others. 
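The text above describes how permeability is read from a gauge on the standard 50 mm × 50 mm rammed specimen. A commonly used AFS-style relation for converting such a reading into a permeability number is sketched below; both the formula and its constants should be read as assumptions quoted for illustration, not as the exact expression implemented in the authors' instrument.

# Sketch: AFS-type permeability number for a standard 50 mm x 50 mm rammed specimen (assumed relation).
def permeability_number(air_volume_cm3=2000.0, height_cm=5.08, area_cm2=19.635,
                        pressure_g_per_cm2=10.0, time_min=1.0):
    """P = V * h / (A * p * t); the 2000 cm3 air volume and specimen geometry are assumed defaults."""
    return (air_volume_cm3 * height_cm) / (area_cm2 * pressure_g_per_cm2 * time_min)

# Hypothetical reading: 2000 cm3 of air passing in 0.5 min at 10 g/cm2 back-pressure.
print(round(permeability_number(pressure_g_per_cm2=10.0, time_min=0.5)))   # ~103, illustrative only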
The tensile strength of green sand was tested in a series of experiments, and that the highest and lowest values were obtained in experiments 16 and 1, respectively.The factors being analysed are compactability and mixing time, and the response variable is permeability of green sand.The interaction plot is shown in figure 6, displays how the effect of compactability on permeability changes at different levels of mixing time, and vice versa. It is found that combination of 4 th level of the compactability factor, and 1 st level of the mixing time factor resulted in the highest values of permeability.The first level of compactability factor in combination with the second level of mixing time factor resulted in the lowest permeability value.This suggests that a specific combination of these two factors can lead to the most tightly packed and well-mixed green sand, which in turn reduces its permeability.On the other hand, the fourth and second levels of compactability factor show that permeability of the green sand decreases with an increase in mixing time levels.This could be due to the fact that increasing mixing time can lead to better distribution of the sand particles and binder, resulting in a more homogenous mixture that is less permeable.While, the first and third levels of compactability factor improved the permeability property with an increase in mixing time levels.This suggests that there may be an optimal range of mixing time for each level of compactability factor that results in the best permeability properties.The interaction plot shown in figure 7 between bentonite and mixing time may provide insight into how these two factors interact to affect permeability.The 4th level of bentonite and the 1st level of mixing time had the greatest impact on permeability, resulting in the highest permeability.In contrast, the 2nd levels of bentonite and mixing time had the lowest permeability values.It is also suggested that the other three levels of bentonite had a changing effect on permeability as the mixing time increased, except for the 1st level of bentonite which did not change with increasing mixing time.Overall, it seems that the level of bentonite and mixing time are both important factors that can influence green sand permeability.The specific levels of each factor can have a significant impact on the permeability value obtained.The results suggest that the 1 st and 4 th levels of bentonite had a consistent effect across the 2 nd and 3 rd levels of mixing time.The 3 rd level of bentonite had a constant effect across the 1 st and 2 nd levels of mixing time, while the 2 nd level of bentonite had a constant effect across the 3 rd and 4 th levels of mixing time. 
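For readers who want to reproduce this kind of interaction analysis, the sketch below shows one way to draw an interaction plot with standard Python tooling; the data frame columns mirror the factors discussed above, but the values themselves are placeholders rather than the study's measurements.

# Sketch: interaction plot of compactability x mixing time for permeability (placeholder data).
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.factorplots import interaction_plot

df = pd.DataFrame({
    "compactability": [34, 34, 39, 39, 44, 44, 49, 49],       # assumed levels (%)
    "mixing_time":    [2, 8, 2, 8, 2, 8, 2, 8],               # assumed levels (seconds)
    "permeability":   [90, 84, 100, 95, 110, 105, 125, 112],  # hypothetical responses
})

fig = interaction_plot(x=df["mixing_time"], trace=df["compactability"],
                       response=df["permeability"])
plt.xlabel("mixing time (s)")
plt.ylabel("permeability number")
plt.savefig("interaction_permeability.png")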
Analysis of Variance
The study investigated the impact of the different moulding factors on the permeability of the green sand, which is used in foundry applications. Table 3 presents the results of the Analysis Of Variance (ANOVA) performed on the data collected during the study. ANOVA is a statistical tool used to compare the means of two or more groups and determine whether there are statistically significant differences between them. In this case, the ANOVA results for the different moulding factors (compactability, mixing time, and bentonite) indicate that there is a statistically significant variance in permeability for the compactability factor (P<0.05), but no significant differences for mixing time and bentonite (P>0.05), whose means are considered equal. Equation 1 presents a regression model that was developed to predict the permeability of green sand based on the different moulding factors. Regression analysis is a statistical technique used to identify the relationship between the dependent variable (permeability) and one or more independent moulding factors. Table 4 compares the permeability values obtained through the regression equation, the values expected by the design of experiments, and those measured in practice.
Fig. 6. Plot of interaction of compactability and mixing time for Permeability
Fig. 7. Plot of interaction of bentonite and mixing time for Permeability
It is important to consider the design of the experiment that produced the plot and the statistical methods used to analyse the data; the interpretation of the interaction plot may also be influenced by other factors that were not mentioned. Figure 8 shows the interaction plot of compactability and mixing time, indicating that the fourth levels of these two factors have a positive influence on the green compressive strength of the sand. The highest value of the compressive strength is gained when the compactability factor is at the 2nd level and the mixing time is at the 3rd level, while the lowest value of compressive strength occurs when the compactability and mixing time are at their 1st levels. The plot shown in figure 9 examines the relationship between bentonite and mixing time and their effects on the compressive strength of the green sand. Excluding the 1st level, the other three levels of the bentonite and mixing time factors enhance the compressive strength of the green sand.
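A compact way to run the kind of ANOVA described in this section outside an SPSS-style package is sketched below with statsmodels; the model treats each moulding factor as categorical, as in the study, but the data frame values are hypothetical stand-ins rather than the measurements behind Table 3.

# Sketch: three-factor ANOVA for permeability using statsmodels (placeholder data, orthogonal layout).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "compactability": [34]*4 + [39]*4 + [44]*4 + [49]*4,            # assumed levels (%)
    "mixing_time":    [2, 4, 6, 8] * 4,                              # assumed levels (s)
    "bentonite":      [5, 7, 9, 11, 7, 9, 11, 5, 9, 11, 5, 7, 11, 5, 7, 9],  # Latin-square assignment
    "permeability":   [88, 90, 92, 91, 96, 99, 97, 100,
                       104, 108, 106, 109, 118, 121, 125, 122],      # hypothetical responses
})

model = ols("permeability ~ C(mixing_time) + C(compactability) + C(bentonite)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)   # factors with P < 0.05 would be judged significant, as in Table 3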
An analysis of variance (ANOVA) listed in table 5 for compressive strength of green sand shows that mixing time and bentonite have a significant effect on the strength, while compactability does not.This is indicated by the p-values, with those for mixing time and bentonite being less than 0.05 (which is the typical threshold for statistical significance), and the p-value for compactability being above 0.05 indicating no variances and equal means.Equation 2 listed the regression equation of compressive strength, and table 6 comparing values of property obtained through the regression equation, the expected value by the design of experiments, and in practice.Equation 2 has listed a regression equation that relates the compressive strength of the green sand to the moulding factors (mixing time, bentonite, and compactability), and table 6 compares the values of this property obtained through the regression equation, the expected value based on the design of experiments, and the actual values observed in practice.The observed values were found have a good match with the predicated and regression values.Figure 10, describes results of an experiment that investigate effects of compactability and the mixing time on tensile strength of the green sand.The results show that there is a significant interaction between compactability and mixing time, which affects tensile strength of the green sand.There are two levels of compactability (the 2 nd and 4 th levels) that have similar effects on tensile strength of the sand.In other words, changing the compactability from the 2 nd level to the 4 th level did not result in a significant change in the tensile strength.Additionally the mixing time has nonlinear effect on tensile strength of sand.Specifically, there was a decrease in tensile strength of green sand when shifting from 1 st level to 2 nd level of mixing time.However, after that, there was an increase in the tensile strength when shifting from the 2 nd level to the 3 rd and 4 th levels of mixing time.Overall, these findings suggest that the optimal combination of compactability and mixing time can lead to the highest tensile strength of green sand. There is a correlation between mixing time and compactability and tensile strength of green sand.It appears that 1 st level of compactability shows clear behavior resulted in the highest tensile strength of sand with increase of mixing time.It is found that 1 st level of factors of compactability and the mixing time gives negatively impact the strength of the sand.On the other hand, the 4th level of factors of compactability and mixing time resulted in the highest tensile strength, indicating that an optimal level of these factors was reached.The relationship between bentonite and the mixing time factors and their impacts on tensile strength of green sand was shown in figure 11. 
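Similarly, the regression comparison reported in Table 6 can be mimicked with an ordinary least-squares fit and a side-by-side listing of observed and predicted values; the coefficients printed below come from fitting the placeholder data, not from Equation 2.

# Sketch: linear regression of compressive strength on the moulding factors, with observed vs predicted values.
import numpy as np
import statsmodels.api as sm

# Hypothetical runs: [mixing_time_s, bentonite_pct, compactability_pct] and measured strength (N/cm2).
X = np.array([[2, 5, 34], [4, 7, 39], [6, 9, 44], [8, 11, 49],
              [2, 7, 44], [4, 5, 49], [6, 11, 34], [8, 9, 39]], dtype=float)
y = np.array([9.3, 11.8, 14.2, 17.7, 10.9, 12.4, 13.6, 16.1])   # hypothetical

model = sm.OLS(y, sm.add_constant(X)).fit()
predicted = model.predict(sm.add_constant(X))
for obs, pred in zip(y, predicted):
    print(f"observed = {obs:5.2f}   predicted = {pred:5.2f}")
print(model.params)   # intercept and one coefficient per factor, analogous in form to Equation 2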
The first level of bentonite (the lowest level used in the experiment) had the same effect of compactability as the fourth level (presumably the highest level used) on the tensile strength.It is found that these levels of these two factors does not significantly improve tensile strength of sand.The second, and third levels of bentonite had varying effects on tensile strength of green sand.This implies that there is an optimal level of bentonite that can enhance the tensile strength.The highest tensile strength was observed when using the first level of bentonite and the fourth level of mixing time.This indicates that longer mixing time may lead to better bonding between the sand grains, resulting in higher tensile strength.The lowest tensile strength occurred with the first level of bentonite and mixing time.This suggests that inadequate bentonite content and/or mixing time may result in poor bonding and lower tensile strength. Table 7 presents the results of analysis of variance (ANOVA) for the different parameters studied, on the tensile strength of the green sand.The significance P-values for mixing time and compactability were found to be less than 0.05, which suggests that there is a significant variance in the results for these factors.On the other hand, the significance value for bentonite was above 0.05, indicating that there was no significant variance and that the means were equal for this factor.Equation 3 presents a regression equation that was derived from the data obtained in the experiment, which can be used for prediction tensile strength based on the values of the different factors studied.Table 8 appears to compare the values of the tensile strength obtained through the regression equation, the expected values based on the design of the experiment, and the values obtained in practice.This comparison can help to evaluate the accuracy of the regression equation and the effectiveness of the experiment in predicting tensile strength. Conclusions The results suggest that the properties of green sand can be influenced by various factors such as compactability, bentonite content, and mixing time.By adjusting these factors, it may be possible to produce green sand with specific properties that are suitable for different applications.The study has identified specific combinations of moulding parameters that can have a significant impact on permeability, tensile strength and compressive strength of the green sand. A statistical analysis of data showed that there is a simultaneous increase in permeability, tensile strength and compressive strength of green sand when the moulding parameters are increased, and vice versa.The study found that the highest value of permeability was achieved when a combination of 49% compactability, 11% bentonite, and 2 seconds of mixing time was used.The study also found that a combination of 34% compactability, 7% bentonite, and 4 seconds of mixing time resulted in the lowest value of permeability for the green sand, with a value of 84.The interaction of 39% compactability, 11% of bentonite, and 6 seconds of mixing time gave the highest value of compressive strength, which was 17.67 N/cm 2 .Additionally, the combination of 34% compactability, 5% of bentonite, and 2 seconds of the mixing time resulted in the lowest value of compressive strength, which was 9.32 N/cm 2 . 
The highest tensile strength 0.33 N/cm 2 was achieved when the compactability was 49%, the bentonite content was 5%, and the mixing time was 8 seconds.Conversely, the lowest tensile strength 0.14 N/cm 2 was observed when the compactability was 34%, the bentonite content was 5%, and the mixing time was 2 seconds.There is good agreement between the experimental results, statistical predictions, and regression analysis.This suggests that the study was well-designed and that the results are reliable. Table 1 . Sand moulding factors & their levels Table 4 . Comparison between used methods for determining permeability Table 5 . ANOVA for factors dependent-compressive strength Table 6 . Comparison between used methods for determining compressive strength
2023-09-17T15:06:57.668Z
2023-09-15T00:00:00.000
{ "year": 2023, "sha1": "c2f11d5fcba4f87ddba1aaef729a8b9a4fa093b7", "oa_license": "CCBY", "oa_url": "http://journals.pan.pl/Content/128370/PDF-MASTER/AFE%203_2023_09.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "25146c8c44bb09b66e9c612e2cfb535240d782a7", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [] }
73572561
pes2o/s2orc
v3-fos-license
Cosmological constraints on dark matter models with velocity-dependent annihilation cross section We derive cosmological constraints on the annihilation cross section of dark matter with velocity-dependent structure, motivated by annihilating dark matter models through Sommerfeld or Breit-Wigner enhancement mechanisms. In models with annihilation cross section increasing with decreasing dark matter velocity, big-bang nucleosynthesis and cosmic microwave background give stringent constraints. Introduction In a weakly-interacting massive particle dark matter (WIMP DM) scenario, a DM particle with mass of O(100) GeV -O(1) TeV should have an (thermally-averaged) annihilation cross section of σv ≃ 3 ×10 −26 cm 3 /s in order to reproduce the observed DM abundance due to the thermal production. On the other hand, recently reported excesses of cosmicray positron [1] and electron fluxes [2,3,4] may be interpreted as signatures of annihilating dark matter with fairly large annihilation cross section of order of 10 −23 -10 −22 cm 3 /s depending on DM mass m, which is typically three orders of magnitude larger than the canonical value quoted above, although constraints from other observations, such as gamma-rays [5,6,7] and neutrinos [8,9,10,11] are also stringent and might have already excluded some parameter regions. One way to achieve the "boost factor" of O(10 3 ) is to make the DM annihilation cross section velocity-dependent. In this case the annihilation cross section in the early Universe is not same as that in the Galaxy or elsewhere, simply because typical velocity of the DM particle varies from place to place. Hence it is in principle possible that the DM has canonical annihilation cross section at the freezeout epoch in the early Universe reproducing the DM abundance observed by Wilkinson Microwave Anisotropy Probe (WMAP), while explaining the cosmic-ray positron/electron excesses. A common mechanism would be Sommerfeld enhancement of annihilation cross section [12,13,14]. If a DM interacts with a light particle through which it annihilates, non-perturbative effects enhance the annihilation cross section. The cross section is enhanced by inverse of the DM velocity, v −1 or v −2 , in this class of models. In the Breit-Wigner enhancement scenario on the other hand, DM annihilates through S-channel resonance where a particle in the intermediate state has a mass close to two times DM mass [15,16,17]. In this case the DM cross section can scale as v −4 at an earlier time or v −2 at a later time. In these models the annihilation cross section increases as the temperature decreases in the early Universe, and hence DM continues to inject high energy particles through the cosmic history. Therefore, it is quite non-trivial whether these models satisfy constraints from big-bang nucleosynthesis (BBN) and cosmic microwave background (CMB). In the case of velocity-independent annihilation cross section, bounds from BBN [18,19,20,21,22] and CMB [23,24,25] were derived in previous works. In this paper, we extend the analysis to the velocity-dependent annihilation cross section and derive general upper bound on the annihilation cross section. This paper is organized as follows. In Sec. 2 a simple prescription for treating the velocity-dependence of DM annihilation cross section is described. In Sec. 3 we present constraints from BBN and CMB and give implications on DM models. Sec. 4 is devoted to conclusions and discussion. 
2 Dark matter with velocity-dependent cross section 2.1 Models of velocity-dependent annihilation cross section Below we briefly give examples of DM with velocity-dependent annihilation cross section. After that we will explain our unified treatment for describing the cosmological effects from DM annihilation with velocity-dependent annihilation cross section. Sommerfeld enhancement A DM particle χ is assumed to have an interaction with φ, which may be a scalar or gauge boson with coupling constant α χ , whose mass is much lighter than the DM mass: m φ ≪ m. Let us consider the DM annihilation process mediated by φ exchanges. If the mass of φ is sufficiently small, the φ-mediated interaction can be regarded as a long-range force and such an annihilation cross section receives an enhancement S compared with tree-level perturbative expression [26], where v is the initial DM velocity in the center of mass frame. Thus the DM annihilation cross section is proportional to 1/v for v ≪ α χ . This 1/v enhancement saturates at v ∼ m φ /m. There is another interesting effect caused by the bound state formation, which resonantly enhances the DM annihilation rate for some specific DM mass [12,14]. It is known that the enhancement is proportional to v −2 near the zero-energy resonance, and this v −2 behavior also saturates due to the finite width of the bound state. Breit-Wigner enhancement In the Breit-Wigner enhancement scenario, DM particles annihilate through S-channel particle exchange (φ), where the mass of φ, m φ , is close to 2m. The square amplitude of this S-channel process is proportional to where δ and γ are defined as m 2 φ = 4m 2 (1 − δ) and γ = Γ φ /m φ (Γ φ is the decay width of φ), respectively. If δ and γ are much smaller than unity, we have σv ∝ v −4 in the limit v 2 ≫ max[δ, γ]. At smaller velocity, it becomes proportional to v −2 . Finally, in sufficiently small v the cross section saturates at a constant value. Energy injection from DM annihilation Some DM models have velocity-dependent annihilation cross section as described above. In order to treat the effects of velocity-dependence, we phenomenologically parametrize the annihilation cross section as where σv 0 is a constant, v is the (thermal-averaged) velocity of DM particle, and v 0 is the velocity at the freezeout of DM annihilation in the velocity-independent case. Typically, the freezeout temperature is given by T fo ∼ m/25 [27], which gives v 0 ∼ √ 3/5 ∼ 0.3; in our numerical calculations, we take v 0 = √ 3/5 independently of n although the freezeout epoch may deviate from T fo given above. In Eq. (3), ǫ is a dimensionless parameter which determines the cutoff below which the velocity-dependence disappears. Since we are interested in the case that ǫ ≪ 1, we recover σv ≃ σv 0 in the limit v → v 0 . The power law index n and the cutoff parameter ǫ depend on models. The Sommerfeld enhancement predicts n = 1 and ǫ ≃ m φ /m, while it also predicts n = 2 in the zero-energy resonance region. In the Breit-Wigner enhancement, the DM annihilation cross section reduces to the form (3) with n = 4 and ǫ = [δ 2 + γ 2 ]/v 4 0 in the limit v 2 ≫ max[δ, γ]. In another limit v 2 ≪ max[δ, γ], it is of the form with n = 2 and ǫ = [δ 2 + γ 2 ]/(2δv 2 0 ), after rescaling σv 0 → σv 0 v 2 0 /2δ. When the annihilation cross section with n = 4, the DM annihilation cross section is large enough to reduce the DM number density significantly even below the freezeout temperature. 
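As a numerical illustration of the 1/v behaviour described above, the sketch below evaluates the familiar s-wave Sommerfeld factor in the Coulomb (massless-mediator) limit, S(v) = (πα_χ/v)/(1 − e^(−πα_χ/v)); this closed form is the standard textbook result and should be read as an assumption for illustration rather than as the exact expression of Ref. [26].

# Sketch: s-wave Sommerfeld enhancement in the Coulomb (massless-mediator) limit -- an assumed standard form.
import math

def sommerfeld_factor(v, alpha_chi):
    """S(v) = (pi*alpha/v) / (1 - exp(-pi*alpha/v)); tends to 1 for v >> alpha and to pi*alpha/v for v << alpha."""
    x = math.pi * alpha_chi / v
    return x / (1.0 - math.exp(-x))

alpha_chi = 0.01
for v in (0.3, 1e-2, 1e-3, 1e-4):   # velocities in units of c
    print(f"v = {v:8.1e}   S = {sommerfeld_factor(v, alpha_chi):10.2f}")
# In a full model the growth saturates at v ~ m_phi/m, which this Coulomb-limit formula does not capture.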
Thus, we consider the cases of n = 1 and 2 in the following analysis, and it would give conservative bounds on the Breit-Wigner enhancement. With the annihilation cross section being given, the annihilation term in the Boltzmann equation, which governs the evolution of the number density of DM n DM , is given by In deriving constraints from BBN and CMB, spectra of injected energy per unit time are needed for all the daughter particles. For the particle species i, such a quantity is given by where dN i /dE is the energy spectrum of i from the pair annihilation of DM. The energy spectra of decay products depend on the property of DM; for a given decay process, we calculate dN i /dE by using PYTHIA package [28]. In order for a qualitative understanding of the effects of velocity-dependent cross section, it is instructive to consider the total energy injection ∆ρ in typical cosmic time, which is ∼ H −1 , with H being the expansion rate of the universe. For this purpose, let us define where E vis is the total release of visible energy in one pair-annihilation process of DM, and s is the entropy density. Because we consider the case that the cosmic expansion is (almost) unaffected by the DM annihilation, the quantity ∆ρ/s is approximately proportional to the amount of injected energy in a comoving volume per Hubble time. Numerically, we obtain where T is the cosmic temperature. In addition, for the convenience of the following discussion, we have introduced the enhancement factor We have set the present DM energy density to be consistent with the WMAP observation, Ω c h 2 ≃ 0.11 [29]. 1 In the case of velocity-independent cross section (where σv = σv 0 ), it is evident that the energy injection per comoving volume decreases as T decreases. This is natural since the DM number density decreases as the Universe expands. In the case of the velocity-dependent cross section, however, some non-trivial features appear. First note that the velocity is estimated as where T kd denotes the temperature at the kinetic decoupling. 2 Below this temperature, a DM particle cannot maintain kinetic equilibrium with thermal plasma, and hence it propagates freely and loses its momentum only adiabatically by the Hubble expansion. Notice that typical kinetic decoupling temperature for WIMP DM is much smaller than the freezeout temperature (we may say T kd ∼ keV -MeV for WIMP DM candidates) [32,33,34,35]. Thus it is found that, for T < T kd and n ≥ 1, the energy injection (7) is constant, or increases as T decreases as long as the velocity dependence is not saturated. In Fig. 1, we plot ∆ρ/s as a function of time for n = 1 (top) and n = 2 (bottom). Red solid lines correspond to T kd = 1 MeV, and green dashed lines correspond to T kd = 1 keV, for ǫ = 10 −3 -10 −9 from bottom to top. T kd = 1 keV approximately corresponds to a lower bound on the temperature of kinetic decoupling in order not to suppress the density fluctuation for a formation of the Lyman-α clouds [36]. We have taken σv 0 = 3 × 10 −26 cm 3 /s, m = 1 TeV and E vis = 2m. Then, even at T ∼ 0.1 MeV, we get an enhancement factor of the order of R e ∼ 10 3 (∼ 10 6 ) with n = 1 (n = 2). This implies that constraints become stronger than the case of usual DM without velocity dependence. In the following we perform detailed calculations of the effects on BBN and CMB, and derive constraints on the cross section with various choice of ǫ and T kd . 
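The temperature dependence sketched in Fig. 1 can be reproduced qualitatively with a few lines of code. The snippet below assumes the standard scalings v ≃ (3T/m)^(1/2) while the DM is kinetically coupled and v ∝ T below T_kd, and it encodes the velocity dependence with the simple form ⟨σv⟩ = σv_0/[(v/v_0)^n + ε]; that specific functional form is an assumption adopted here for illustration rather than a quotation of Eq. (3), but it reproduces the quoted saturation at σv_0/ε and the (T_kd/MeV)^(−1/2)(m/TeV)^(−1/2) scaling of R_e^(−1).

# Sketch: DM velocity vs temperature and the resulting enhancement factor R_e (assumed parametrization).
import math

def dm_velocity(T_MeV, m_MeV, Tkd_MeV):
    """v ~ sqrt(3T/m) while kinetically coupled; v ~ sqrt(3Tkd/m)*(T/Tkd) after kinetic decoupling."""
    if T_MeV >= Tkd_MeV:
        return math.sqrt(3.0 * T_MeV / m_MeV)
    return math.sqrt(3.0 * Tkd_MeV / m_MeV) * (T_MeV / Tkd_MeV)

def enhancement(v, n, eps, v0=math.sqrt(3.0 / 5.0)):
    """R_e = <sigma v>/<sigma v>_0 with the assumed form sigma*v = sigma_v0 / ((v/v0)**n + eps)."""
    return 1.0 / ((v / v0) ** n + eps)

m_MeV, Tkd_MeV = 1.0e6, 1.0          # 1 TeV DM, kinetic decoupling at 1 MeV (illustrative choices)
for T in (1.0, 1e-2, 1e-4):          # temperatures in MeV
    v = dm_velocity(T, m_MeV, Tkd_MeV)
    print(f"T = {T:7.0e} MeV   v = {v:9.2e}   R_e(n=1, eps=1e-6) = {enhancement(v, 1, 1e-6):10.3e}")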
Basic picture It has been known that injection of high-energy particles which are emitted through the annihilation of long-lived massive particles during/after the big-bang nucleosynthesis epoch (at a cosmic time t = 10 −2 -10 12 sec) significantly changes the light element abundances [18,19,20,21,22,37]. However, the effect of the injection highly depends on what particles are injected. We discuss two possibilities: (i) injection of electromagnetic particles and (ii) injection of hadronic particles in this section. The injection of high-energy electromagnetic particles such as photon and electron induces the electromagnetic cascade, which produces a lot of energetic photons. Those photons destroy the background 4 He and produce lighter elements such as deuterium (D), tritium (T), 3 He, and heavier elements such as 6 Li nonthermally at t 10 6 sec. In particular there is a striking feature that the 3 He to D ratio ( 3 He/D) tends to increase. By comparing to the observed value of 3 He/D, this gives us the most stringent bound on the annihilation cross section in case of the injection of electromagnetic particles [22]. This reaction occurs at the cosmic temperature of T ∼ 10 −4 MeV. It is notable that this constraint from BBN [38] is stronger than that on the µ-or y-distortion from the Planck distribution of CMB [39]. On the other hand, the injection scenario of high-energy hadrons such as pion, proton (p), neutron (n) and their antiparticles might be more complicated, but has been understood in detail [38,40]. The emitted high-energy neutron and proton destroy the background 4 He and produce D, T, 3 He or 6 Li. The charged pions, nn and pp pairs induce an extra-ordinal interconversion between the background proton and neutron, which makes the neutron to proton ratio (n/p) increase. Then this mechanism produces more 4 He. In terms of the annihilating dark matter, the overproduction of D or the increase of 3 He to deuterium ratio ( 3 He/D) gives us the most stringent constraint on the annihilation cross section [21,22]. In the following, we perform a detailed calculation of the light-element abundances; to take account of the injection of hadronic and electromagnetic particles, we follow the procedure given in [38]. Then, comparing the theoretical prediction with the updated observational constraints, we derive precise upper bounds on the annihilation cross section as a function of the DM mass. Observational light element abundances Next we discuss observational limits on D/H and 3 He/D which are adopted in this study. In the previous work [22], it was shown that these elements give us more stringent constraints than the others. The recent observation of the metal-poor QSO absorption line system QSO Q0913+072, together with the six previous measurements, leads to value of the primordial deuterium abundance with a sizable dispersion [41], Compared with data adopted in the previous analyses in Refs. [21,22], the error of (n D /n H ) p has been reduced by about 20 %. We adopt an upper limit on n3 He /n D which is recently observed in protosolar clouds [42], This value was also used in Ref. [22]. Constraints on electromagnetic particle injection Here we discuss the case of an electromagnetic annihilation modes into electron and/or photon. It is notable that the total amount of energies into electromagnetic modes approximately determines the bound, independently of the detail of each mode. In Fig. 
2 we plot the upper bounds on the annihilation cross section obtained from the observational limit on 3 He/D, with n = 1 (top) and n = 2 (bottom) for various values of ǫ = 10 −10 -10 −3 . Here the kinetic decoupling temperature is set to be 1 MeV. The dashed line denotes the canonical annihilation cross section (= 3 × 10 −26 cm 3 /sec ). In the top panel, we see that the bounds highly depend on the cutoff parameter ǫ when ǫ 10 −7 . This behavior can be understood from the fact that the production of 3 He becomes most efficient when T ∼ 10 −4 MeV; at such a temperature, the enhancement factor is estimated as R −1 e ∼ 5 × 10 −7 (T kd /MeV) −1/2 (m/TeV) −1/2 (T /10 −4 MeV), which becomes smaller than ∼ 10 −7 with the present choice of parameters. Then, when ǫ 10 −7 , the cross section is enhanced purely by the factor of ǫ −1 . To allow the canonical value of the annihilation cross section for a few TeV mass of dark matter, we need ǫ 10 −4.5 at least. In the case of n = 2 which is plotted in the bottom panel of Fig. 2, R −1 e is much smaller than ǫ everywhere in this parameter space. Therefore ǫ −1 determines the enhancement of the annihilation cross section, and there exists a simple scaling law for the line of the limits, which means that the upper bound is proportional to ǫ. This feature is slightly different in case of T kd = 1 keV. Because the inverse of the enhancement factor with n = 1 is the order of R −1 e ∼ 1 × 10 −5 , any constraints with ǫ 1 × 10 −5 is insensitive to ǫ. In case of n = 2, the constraint is same as the bottom panel of Fig. 2 because of the same reason. Constraints on hadron injection When we consider the injection of hadronic particles, the limit is completely different from that of the electromagnetic particles. The constraint on the overproduction of the Here DM is assumed to annihilate purely radiatively into electron and/or photon. The kinetic decoupling temperature is set to be 1 MeV. The dashed line denotes the canonical annihilation cross section (= 3 × 10 −26 cm 3 sec −1 ). Fig. 2, but for the kinetic decoupling temperature set to be 1 keV. The case of n = 2 is completely same as the bottom panel of Fig. 2. deuterium due to the 4 He destruction often gives the most stringent constraint [22]. To study the hadronic injection, hereafter, we assume DM annihilates into a W -boson pair as a typical hadronic DM annihilation channel; in such a case, significant amount of hadrons are produced by the subsequent decay of the W bosons produced by the DM annihilation. Constraints do not change much for other cases, such as DM annihilation into bb [22]. In Fig. 4 we plot the upper bound on the annihilation cross section obtained from the observational limit on D/H with n = 1 (top) and n = 2 (bottom). The kinetic decoupling temperature is set to be 1 MeV. First let us consider the case of n = 1. Because the hadrodissociation processes become most effective at T ∼ 10 −2 MeV, for which the enhancement factor is estimated as R −1 e ∼ 5 × 10 −5 (T kd /MeV) −1/2 (m/TeV) −1/2 (T /10 −2 MeV), the constraint is determined only by the value of ǫ if ǫ 10 −5 . To agree with the canonical annihilation cross section, we have to assume ǫ 10 −3 for a few TeV mass of dark matter. Figure 3: Same as If the kinetic decoupling occurs at around 1 keV, the enhancement factor behaves differently from the case of T kd = 1 MeV because the hadrodissociation occurs before the time of the kinetic decoupling. As is shown in Fig. 
5, then the enhancement factor is estimated to be R −1 e ∼ 5 × 10 −4 (T /10 −2 MeV) 1/2 (m/TeV) −1/2 at T = 10 −2 MeV, from which we easily find that the constraint is independent of ǫ for ǫ 10 −4 When we consider n = 2, the cutoff parameter determines the upper bound everywhere in the current parameter space for both T kd = 1 MeV and 1 keV. The result is shown in the bottom panel of Fig. 4 as a representative of both cases. Although so far we have discussed the limit from D/H, consideration of other light elements sometimes tighten the constraint. In particular the limit from 3 He/D by the photodissociation of 4 He could also give us stronger limits. Note that even in the annihilation into quarks and gluons, the photodissociation occurs because a sizable amount of the electromagnetic particles is also injected as decay products. For example, the electromagnetic energy corresponds to ∼ 47% of the total energy in case of the annihilation into a W -boson pair [22]. In Refs. [21,22,37], it has been shown that the upper bound from 3 He/D due to the photodissociation accompanied with the hadronic annihilation is much weaker than that from D/H when the annihilation cross section does not depend on v. The bound on ǫ from D/H was severer than the one from 3 He/D by three or four order of magnitude. On the other hand in case of the v-dependent cross section, the situation can be altered since the enhancement factor depends on the temperature. Because the constraint on 3 He/D is sensitive to the cosmic history at T ∼ 10 −4 MeV, the enhancement factor is the order of ∼ 10 7 for T kd = 1 MeV with n = 1. For a small cutoff parameter ǫ 10 −8 , then the constraint from 3 He/D can become stronger than that from D/H for m 1 TeV. This feature is shown in the top panel of Fig. 6. In this figure we plot the constraint only from 3 He/D, ignoring the D/H constraint. Notice that large amounts of D are produced in most parameter space as is seen from Fig. 4. Since hadro/photo-dissociations of 4 He also create 3 He, both D and 3 He are produced from the standard BBN and hadro/photo-dissociation processes. For sufficiently large σv 0 , both D and 3 He are produced by the hadrodissociation of 4 He at T ∼ 10 −2 MeV, and the constraint comes from the additional photodissociation effects at around T ∼ 10 −4 MeV. These mixed processes complicate the behavior of the lines in Fig. 6. There is no simple Figure 4: Upper bound on the annihilation cross section obtained from the observational D/H limit with n = 1 (top) and n = 2 (bottom) for various values of ǫ = 10 −10 -10 −3 . Here DM is assumed to annihilate into a W -boson pair. The kinetic decoupling temperature is set to be 1 MeV. The dashed line denotes the canonical annihilation cross section (= 3 × 10 −26 cm 3 /sec ) which gives the right amount of the dark-matter relic density. Figure 5: Same as Fig. 4, but for the kinetic decoupling temperature set to be 1 keV. The case of n = 2 is same as the bottom plot of Fig. 4. scaling law among lines with respect to the line of v-independent constraint. On the other hand, if we take n = 2 , the bound from 3 He/D is always weaker than that of D/H. This is clearly shown in the bottom panel of Fig. 6. We also consider the case of T kd = 1 keV. Results are shown in Fig. 7. As in the previous case, the constraint is mostly from the abundance of D. In some parameter region, however, 3 He/D gives the most stringent constraint. This fact is seen in the case of n = 1 for m 1 TeV and ǫ 10 −6 . 
In addition, for n = 2 (the bottom panel of Fig. 7), a simple scaling law (i.e., the proportionality of the upper bound on σv 0 to ǫ) breaks down once ǫ becomes smaller than 10 −8 ; for such a small value of ǫ, the constraint becomes significantly stringent. This feature can be understood analytically because, for ǫ 10 −8 , ∆ρ/s starts to increase as a function of t after t = 10 6 sec (T = 1 keV). Such a behavior is clearly seen as the dashed line in the bottom panel of Fig. 1. Before closing this subsection, we comment on the constraints from the Li abundances. Photo/hadro-dissociation processes also modify abundances of 6 Li and 7 Li. However, constraints from observations of 7 Li/H and 6 Li/H are weaker than those from D/H and/or 3 He/D for annihilating DM [22]. Constraints from CMB Energy injection around the recombination epoch affects the CMB anisotropy [45,23,24,25]. 3 This is because injected energy can ionize neutral hydrogens and modify the standard recombination history of the Universe. The effect is characterized by the quantity dχ (i) ion (E, z ′ , z), which represents the fraction of injected electron (photon) energy E for i = e(γ) at the redshift z ′ used for ionization of the hydrogen atom at the redshift between z and z + dz. The evolution equation of the ionization fraction of the hydrogen atom, x e , includes the following additional term, where E Ry = 13.6 eV is the Rydberg energy, n H is the number density of the hydrogen atom and Here F denotes the final state of the DM annihilation, e.g., F = e + e − , W + W − , etc., and dN (F ) e,γ /dE denotes the energy spectrum of the electron (photon) generated from the cascade decay of F . In the case of F = e + e − , we have dN (F ) e /dE = δ(E − m). For general final state F , it is evaluated by the PYTHIA code [28]. We follow the methods described in Refs. [25,46] to compute dχ ion (E, z ′ , z)/dz. This term is included in the RECFAST code [47] implemented in the CAMB code [48] for calculating the CMB anisotropy. Additional energy injections from DM annihilation around the recombination epoch cause ionization of neutral hydrogen atoms. Thus the effect is to slow down the recombination of the Universe. As a result, anisotropies in CMB are dumped at small scales due to the increase in thickness of the last scattering surface. Fig. 8 shows the T T power spectrum of the CMB temperature anisotropy, with/without DM annihilation effect. The solid line corresponds to the best-fit ΛCDM model without DM annihilation, and dotted line to DM annihilation cross section into e + e − with σv = 10 −24 cm 3 /sec and dashed line to DM annihilation cross section with σv = 10 −23 cm 3 /sec, while all cosmological parameters are fixed. Here we have taken m = 1TeV with velocity-independent annihilation cross section. It is seen that DM annihilation effects suppress the T T spectrum, reflecting the increase in thickness of the last scattering surface. It is not hard to imagine that this effect has a degeneracy with other cosmological parameters. In particular, the increase of the reionization optical depth causes similar effects. In order to derive conservative bounds on the DM annihilation cross section, we must take into account degeneracies between DM annihilation effect and other cosmological parameters. 
We have derived 2σ constraints using a profile likelihood function where the other cosmological parameters including the six standard ones (ω b , ω c , Ω Λ , n s , τ, ∆ 2 R in Figure 8: Power spectrum of the CMB anisotropy with no DM annihilation effect (solid), with σv = 10 −24 cm 3 /sec (dotted) and σv = 10 −23 cm 3 /sec (dashed) for m = 1TeV and assuming DM annihilation into e + e − with velocity-independent annihilation cross section. Also shown are data points from WMAP, QUaD, ACBAR and CBI. the notation of Ref. [49]) and the amplitude of the Sunyaev-Zel'dovich effect are marginalized so that the original likelihood function is maximized for given DM annihilation cross section and mass. The likelihood surface is scanned by using the CosmoMC code [50]; in our analysis, we have modified the CosmoMC code to take account of the above mentioned effect of energy injection. The used datasets include WMAP [49], ACBAR [51], CBI [52] and QUaD [53]. As opposed to BBN constraints, CMB constraint depends on the injected radiative energy, hence purely leptonic annihilation is more strongly constrained than the hadronic one. The result is presented in Fig. 9 where we plot upper bounds on the annihilation cross section obtained from CMB anisotropy data as a function of DM mass for ǫ = 10 −3 -10 −7 . DM is assumed to annihilate into e + e − pair in the top panel and W + W − in the bottom panel. Here we have taken n = 1 and T kd = 1 MeV. We have checked that the results do not change for n = 2 and/or T kd = 1 keV. This is because the CMB constraint is sensitive to the annihilation rate at around the recombination epoch, T 1 eV, and hence the annihilation cross section is already saturated for most interesting range of ǫ for both n = 1 and n = 2. Comparing them with BBN constraints, it is found that the CMB constraint is severer for the leptonic annihilation case independently of the parameters. In the case of hadronic annihilation with m a few TeV, the situation is not so simple. For n = 1 and T kd = 1MeV, CMB gives weaker constraint than BBN for ǫ 10 −4 , as seen from Fig. 4, but becomes tighter for ǫ 10 −4 . The situation is similar for T kd = 1keV. This is because the BBN constraint from the observation of D/H is sensitive to the annihilation at T ∼ 10 −2 MeV, and the annihilation cross section do not saturate at that epoch for small ǫ for n = 1. On the other hand, for n = 2, BBN gives tighter constraint than the CMB for parameter ranges shown in the figures. Therefore, CMB takes complementary role to BBN in constraining the DM annihilation with velocity-dependent annihilation cross section. Conclusions In this paper we have investigated effects of DM annihilation on BBN and CMB, and derived constraints on the DM annihilation rate, particularly focusing on the case where the annihilation cross section has a velocity-dependent structure. This is partly motivated by the observations of cosmic-ray positron/electron excesses and their explanations by the DM annihilation contribution. We phenomenologically parametrized the velocitydependence of the annihilation cross section and the critical velocity at which such an enhancement saturates, and derived general constraints on them. Our constraints are applicable to known velocity-dependent DM annihilation models, such as the Sommerfeld and Breit-Wigner enhancement scenarios. These results have been plotted in Figs. 4 -9 by changing the parameters and observation. 
Therefore readers can read off the allowed parameter regions from those figures according to their intended application. Some comments are in order. If the DM annihilation is helicity-suppressed, the p-wave process may be the dominant mode, as is often the case with Majorana fermion DM. In this case we obtain n = −2: σv ∝ v². Thus the annihilation cross section becomes smaller as the temperature decreases, until the s-wave process becomes efficient. For negative n, the BBN/CMB constraints are weaker than in the velocity-independent case. In the Sommerfeld enhancement scenario, it was pointed out that DM-DM scattering mediated by light-particle exchange causes observationally relevant effects [54], and this also gives a significant constraint [55,56].
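As a rough illustration of the quantities constrained above, the sketch below (illustrative Python written for this summary, not code from the paper) evaluates an annihilation cross section that scales as σv ∝ v^-n and saturates once the typical DM velocity drops below the critical value ǫ. The velocity-temperature scaling used before and after kinetic decoupling is the standard one (v ∝ sqrt(T/m) above T_kd, redshifting below it); all normalizations and numerical values are placeholders.

```python
import numpy as np

def sigma_v(v, sigma_v0, n, eps):
    """Velocity-dependent annihilation cross section (illustrative).

    sigma_v0 : cross section normalization at v = 1 (in units of c)
    n        : power of the velocity dependence, sigma*v ~ v**(-n)
    eps      : critical velocity (v/c) below which the enhancement saturates
    """
    v_eff = np.maximum(v, eps)          # enhancement stops growing below eps
    return sigma_v0 * v_eff ** (-n)

def typical_velocity(T, m, T_kd):
    """Typical DM velocity at photon temperature T (illustrative scaling).

    Above the kinetic-decoupling temperature T_kd the DM stays in kinetic
    equilibrium, v ~ sqrt(T/m); below T_kd the velocity redshifts as
    v ~ sqrt(T_kd/m) * (T/T_kd).  T, m and T_kd are in the same units (GeV here).
    """
    return np.where(T > T_kd,
                    np.sqrt(T / m),
                    np.sqrt(T_kd / m) * (T / T_kd))

# Example: n = 1 (Sommerfeld-like 1/v enhancement), m = 1 TeV, T_kd = 1 MeV
m, T_kd, eps, n = 1000.0, 1e-3, 1e-4, 1
for T in [1e-2, 1e-5, 1e-8, 3e-10]:     # from around BBN down to recombination
    v = float(typical_velocity(T, m, T_kd))
    print(f"T = {T:.0e} GeV  v = {v:.1e}  enhancement = {sigma_v(v, 1.0, n, eps):.1e}")
```

With these placeholder numbers the enhancement grows as the velocity drops and then freezes at 1/ǫ, which is the saturation behavior the BBN and CMB bounds probe at different epochs.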
The Current Status of Mistbelt Mixed Podocarpus Forest in Natal The known distribution and history of Mistbelt Mixed Podocarpus Forest in Natal, and its utilization and destruction, are discussed. It is suggested that there may be a general drying of the forest climate, which is supported by evidence from canopy tree growth and regeneration. That this generally drier period has contributed to the rapid rate of forest degradation is postulated, and the need for the immediate implementation of conservation measures to ensure the safety of a representative area of forest is stressed. Definition and Distribution Mistbelt Mixed Podocarpus Forest was the name given by Edwards (1967), in his survey of the vegetation of the Tugela Basin, for the climatic climax forest vegetation of the Natal Mistbelt. Previously this forest type had been variously called High Timber Forest (Fourcade, 1889), Yellow Wood Bush (Bews, 1912) and Temperate Forest (Pentz, 1945; Acocks, 1953). It generally occurs between about 3 500 ft (1 000 m) and 4 500 to 5 000 ft (1 300 to 1 500 m), on steep south-facing slopes. These slopes are subject to relatively frequent mist, particularly in summer (hence the name 'mistbelt'), and the rainfall is good (at least 1 000 mm a year), so the region is relatively moist. Temperatures are equable, with low maxima (about 37°C), high minima (about −4°C), and an annual mean of about 16°C (Weather Bureau, 1954). Moderately severe frosts occur on level ground, but probably not on the steep slopes on which the forest is situated. Snow does occur occasionally, and the rare heavy falls can cause great mechanical damage (Moll, 1965). As the name suggests, the most important tree species are, or were, Podocarpus spp. Many associated tree species occur, such as Ptaeroxylon obliquum, Celtis africana, Calodendrum capense, Olea capensis, Cussonia chartacea, Cryptocarya myrtifolia, Prunus africanus, Xymalos monospora, Kiggelaria africana and Combretum kraussii. Mistbelt Mixed Podocarpus Forest represents one of a series of three montane forest types found in Natal. Montane Podocarpus Forest occurs at higher altitudes; it is physiognomically and structurally reduced, and floristically depauperate (Moll, 1965; Edwards, 1967). The Inland Sub-tropical Forest types (Acocks, 1953) occur at lower altitudes and further north, and have greater floristic affinity with the Tropical Forests. In Natal, Mistbelt Mixed Podocarpus Forest occurs from Qudeni in the north to the Ingeli and Impetyne Forests in the south (Fig. 1).
Utilization We know from various historical accounts, summarized relatively recently by Rycroft (1942), Moll (1965) and Edwards (1967), that the Mistbelt Mixed Podocarpus Forests of Natal were intensively worked for timber until about 1940. The chief species cut were Podocarpus spp., Ocotea bullata, Ptaeroxylon obliquum and Olea capensis. In addition, poles, laths and saplings were taken out by the thousand for hut-building timber by the Bantu. Exploitation was not limited to severe tree cutting, the results of which are still exhibited by the irregular forest canopy, but extended to the utilisation of forest areas as winter grazing for cattle, a practice which has a definite detrimental effect on regeneration (Taylor, 1961, 1962). The practice of burning the grassland surrounding the forest, without due precaution for the protection of the forest margins, has also contributed to forest destruction. Today it is generally accepted that the major factor causing the reduction in size of our Mistbelt Mixed Podocarpus Forests is man. Taylor (1961) pleaded for the protection of the Karkloof Forest, which covered an estimated 80 000 acres in 1880 (Fourcade, 1889) and had greatly diminished to an estimated 15 000 to 20 000 acres by the early 1940's (Rycroft, 1944). Rycroft suggests that even if the 1880 estimate was high, it does mean that there might have been as much as an 80% reduction of forest area in only 60 years, the chief factors causing this reduction being fire and over-exploitation. Taylor (1963), in a report on the Nxamalala Forest, states that this forest, which was about 8 000 acres in area in 1880, had been reduced to a mere 1 500 acres 70 years later. Observations on Canopy Tree Growth and Regeneration In 1929 the resident forester at Xumeni Forest, near Donnybrook, laid out a line through the forest and on it recorded the circumference at breast height of all trees in 40 one-chain-square, systematically placed plots. In 1966, 37 years later, the same trees were re-measured (Moll & Woods, 1971). The results showed that the mean increment rate was very slow, 0,201 ± 0,005 (n = 160) inches (5,10 ± 0,127 mm) in circumference per year. Common trees at Xumeni are Podocarpus henkelii, Kiggelaria africana, Xymalos monospora, Podocarpus falcatus and Fagara davyi. Of these species, F. davyi, P. falcatus and K. africana grew the fastest. Moll & Haigh (1966) wrote of Xumeni that "regeneration was poor, and it would appear that, under natural conditions, regeneration is not sufficient to maintain the forest". Xumeni Forest has been protected by the Department of Forestry since 1910. In 1967/1968, density data of all woody plants were collected from twelve 40 x 40 m stands in the Karkloof Forest.
The data indicated that the species regenerating were not those species that are presently important in the canopy. Taylor (1961) observed that on Miss Morton's farm in the Karkloof there were many seedlings of Celtis africana, Cussonia chartacea and Ptaeroxylon obliquum. He also noted that where cattle grazed the forest the tree seedlings were unable to advance, and that the two species common at Morton's, Podocarpus henkelii and Ocotea bullata, were not regenerating. The general conclusion which can be drawn from these observations is that the species regenerating are those capable of tolerating a drier climate. Added to this, in areas of forest which have been protected, such as Xumeni which has been protected for the last 60 years, regeneration is poor. Seedling density in the Karkloof of 48 Vepris undulata, 32 Ptaeroxylon obliquum and 16 Podocarpus latifolius per hectare is not indicative of active regeneration, not when one compares this to seedling densities in actively regenerating forests on the coast, such as at Hlogwene (Moll, in preparation) where, for example, there are 131 Olea woodiana and 94 Strychnos decussata seedlings per hectare. In addition, seedlings of trees which prefer a cool moist environment, such as Ocotea bullata and Podocarpus henkelii, are extremely rare. Factors Contributing to a Drier Forest Climate Moreau (1966) states that in the last 18 000 years the temperatures in Africa have risen by 5°C. Stuckenberg (1969) quotes Van Zinderen Bakker (1963), who states, "It has often been said that changes in temperature of the magnitude of only 5°C are of minor importance in a tropical continent such as Africa. These changes have, however, been of very great significance... Little but consistent changes of this nature can have an enormous influence on the distribution of plants and animals." Stuckenberg also quotes Bailey (1960), who states that these temperature changes affect maritime climates most. The Mistbelt Forests in Natal are influenced to a considerable degree by weather from the Indian Ocean. Accepting a rise in temperature of 5°C during the last 18 000 years means that evaporation alone would be greatly increased. The mountain biomes which were more extensive are now much reduced; the montane limit, according to Moreau (1966), was about 2 300 ft (700 m), and is now 5 000 ft (1 500 m). Acocks (1953) also suggests that forest and scrub forest has largely disappeared in Natal, and that the drier vegetation types of bushveld and grassveld have greatly increased (see Acocks's Maps 1 & 2). In addition to climatic changes, natural fires, and more especially man-made fires, have become more numerous, and these too have contributed to forest destruction, both directly and also indirectly by increasing runoff. Furthermore, cattle grazing in the forests not only eat and trample the vegetation, but also open up the margins, allowing wind to penetrate beneath the canopy and fires to enter the protective marginal vegetation. Preservation Requirements Referring again to Acocks (1953), we are warned that unless our vegetation is scientifically managed the drier vegetation types will expand further.
If we are to preserve an example of Mistbelt Mixed Podocarpus Forest we will, therefore, have to manage it. It has been shown by a few conservation-minded farmers who live in the Karkloof and Dargle areas that indigenous trees, such as Ocotea bullata and Podocarpus henkelii, if planted and cared for, grow relatively rapidly. However, the first priority is to have a sufficiently large area of forest proclaimed as a Nature Reserve. Once this has been achieved, active management must include tree planting, run-off retention and protection of the margin from fire. Large grazing and browsing animals must also be excluded from the forest. Such management would have to be linked to a scientific monitoring programme, designed to measure which management practices are most beneficial in ensuring maximum forest development.
Development of E-Commerce for Selling Honey Bees in the COVID-19 Era During the Covid-19 pandemic, which has not yet ended, the business of selling honey is a type of business that is able to survive and even increase its market. The trend of selling honey in the era of the Covid-19 pandemic is quite stable and has the potential to be a good source of income because honey can be used as medicine and can increase the body's immunity. Fitorajo Bee Farm is a UMKM engaged in bee cultivation which is located in Kota Pinang, Labuhanbatu Selatan Regency, North Sumatra Province with original honey bee products packaged in plastic bottle containers. The marketing system carried out by Fitorajo Bee Farm is carried out through word of mouth, Facebook social media, or through the WhatsApp application. Seeing the promising potential of the honey business in the Covid-19 pandemic era, and in order to reach a wider market, Fitorajo Bee Farm should improve and innovate by adopting e-commerce technology. Conventional marketing, which so far only reaches consumers on a limited scale, must be changed with a marketing system that can reach consumers from any corner. The purpose of this research is to build a marketing media for Fitorajo Bee Farm honey bees by implementing web-based e-commerce. The system development method used is the waterfall model, while the programming language used is PHP framework codeigniter with MySQL DBMS. The results showed that the codeigniter framework and MySQL DBMS can be applied to build e-commerce web-based honey bee marketing media. INTRODUCTION Covid-19 is a virus that was first discovered in an animal and food market in Wuhan City, Hubei Province, China (Hidayat, Aini, Ilmi, Azzahroh, & Giantini, 2020). On March 12, 2020, the World Health Organization (WHO) declared Covid-19 a global pandemic (Ophinni et al., 2020). Covid-19 first appeared in Indonesia on March 2, 2020 with the findings of two cases of infection (R. N. Putri, 2020). As of June 19, 2021, the number of people who have been confirmed positive for Covid in Indonesia is more than 1.9 million, and the death toll is more than 54 thousand people (Kementerian Kesehatan Republik Indonesia, 2021). The Covid-19 pandemic not only resulted in a health crisis, but this outbreak also had a negative impact on the Indonesian economy (Yenti Sumarni, 2020) (Nasution, Erlina, & Muda, 2020) (Wahidah, Septiadi, Rafqie, Hartono, & Athallah, 2020), especially in the tourism, transportation, health, and trade sectors (Susilawati, Falefi, & Purwoko, 2020). The social distancing policy during the pandemic has resulted in significant national economic losses, the real impact of losses for business entities is the loss of income due to the absence of sales, while the expense burden remains (Hadiwardoyo, 2020). One of the business sectors that is very vulnerable to bankruptcy due to the Covid-19 pandemic is the Usaha Mikro Kecil dan Menengah (UMKM) sector (Iskandar, Possumah, & Aqbar, 2020). Due to Covid-19, sales of the UMKM sector in Indonesia have decreased, it is difficult to receive capital assistance, product distribution is hampered, and raw materials are scarce (Sugiri, 2020). The strategy that can be used in an effort to save UMKM is to build business processes based on digital technology platforms (Sugiri, 2020). This is proven by the increase in sales volume through electronic transactions (e-commerce) during the pandemic because consumers feel safer, more effective and efficient by shopping online (Ayu & Lahmi, 2020). 
With the implementation of the health protocol, UMKM actors seem to be forced to adopt e-commerce because, inevitably, business actors have to change the way they transact from offline to online (Kala'lembang, 2020) (Sumarni & Melinda, 2020). E-commerce is recommended as a strategy for maintaining UMKM during the Covid-19 pandemic (Hardilawati, 2020). During the Covid-19 pandemic, which has not yet ended, the business of selling honey is a type of business that is able to survive and even increase its market. The trend of selling honey in the Covid-19 pandemic era is quite stable and has the potential to be a good source of income because honey can be used as medicine and can increase body immunity (Okezone, 2020) (Rakyat, 2021). The beekeeping business is a potential business sector in the Covid-19 pandemic (UNPAD, 2020). Even UMKM with honey businesses have been able to expand their market online to overseas customers in the midst of the Covid-19 pandemic (Maskur, 2021). The trend of selling honey through the Tokopedia e-commerce application has also increased two to three times compared to before the Covid-19 pandemic (TEMPO, 2020) (6, 2020) (I. Putri, 2021). Fitorajo Bee Farm is an UMKM engaged in bee cultivation located in Kota Pinang, Labuhanbatu Selatan Regency, North Sumatra Province. The product from Fitorajo Bee Farm is real honey packaged in plastic bottles. Marketing at Fitorajo Bee Farm is carried out through word of mouth, Facebook social media, or the WhatsApp application. Meanwhile, business transactions are usually carried out by making payments directly or through ATMs. Direct payments are made when consumers come directly to the location to buy honey, while payments through ATMs are made if consumers order via Facebook or WhatsApp. Seeing the promising potential of the honey business in the Covid-19 pandemic era, and in order to reach a wider market, Fitorajo Bee Farm should improve and innovate by adopting e-commerce technology for marketing honey during a pandemic whose end cannot yet be foreseen. Conventional marketing, which has only reached consumers on a limited scale, must be replaced with a marketing system that can reach consumers anywhere. The purpose of this research is to develop marketing media for Fitorajo Bee Farm honey products by implementing web-based e-commerce. LITERATURE REVIEW E-commerce is a business transaction process that is run electronically through the use of computers and internet networks (Wibowo & Haryokusumo, 2020) (Andriyanto, 2018). The advantages of e-commerce compared to conventional marketing systems lie in its wide marketing reach, minimal product promotion costs, and the ease of conducting transactions and producing sales and purchase reports (S & Sari, 2020). CodeIgniter is a PHP framework whose development is oriented to the Model View Controller (MVC) concept. CodeIgniter has a complete library for performing operations commonly required by web-based applications, such as accessing databases and validating forms, which makes system development easier. CodeIgniter is also a framework with complete and clear documentation. The program code in the CodeIgniter framework is equipped with comments that further clarify the function of each piece of code, and the resulting code is very clean and Search Engine Friendly (SEF) (Destiningrum & Adrian, 2017). The Unified Modeling Language (UML) is a visual modeling method used in the design and construction of object-oriented software.
UML is a writing standard or a kind of blueprint which includes a business process, writing classes in a specific language (Prihandoyo, 2018). UML was created to provide the tools needed by software developers in analyzing, designing and implementing software-based systems (Kurniawan, 2018). There are several UML diagrams that are often used in the development of a system, namely: use case diagrams, activity diagrams, sequence diagrams, and class diagrams (Primadasa & Juliansa, 2020). METHOD This study uses the System Development Life Cycle (SDLC) software development method by choosing the waterfall model. The waterfall model applies a sequential approach and is easy to implement in making the system (Nurjannah, Dar, & Bangun, 2021). The process in the waterfall model starts from requirements analysis, system design, implementation, testing, and maintenance (Aldi, 2022). Web-based E-Commerce application development is currently using the PHP framework. There are several reasons why the Codeigniter framework is used in building e-commerce applications in this research. First of all, the framework can increase productivity in programming because the framework provides a basic framework (Model, View and Controller), a complete Application Programming Interface and Library so that web developers don't need a long time to code (Laaziri, Benmoussa, Khoulji, & Kerkeb, 2019). Second, the built-in documentation support available from many frameworks and forums on the internet makes web developers faster in finding answers when facing error programs (Benmoussa, Laaziri, Khoulji, Larbi, & Yamami, 2019). Third, the framework has advantages in terms of web security (Lakshmi & Mallika, 2017). In testing the performance of some of the most frequently used PHP frameworks, it was found that Co-deigniter is superior to the Symfony framework in several criteria required for a Model View Controller (MVC) based framework . Viewed from the security aspect, the use of codeigniter is safer than PHP Native (Yaqin & Al Anis, 2018) because the user is not directly related to the database (Somya, 2018). The Codeigniter framework is open-source, so it does not require a fee in terms of license usage (Vidal-Silva, Jiménez, Madariaga, & Urzúa, 2020). The most popular thing about the Codeigniter framework is its very fast execution time compared to other frameworks (Saputra et al., 2020). Requirement Analysis This stage is the initial activity in creating an e-commerce web. At this stage, the needs that will be provided by the e-commerce web are determined, the information needs available on the website, the type of user and access rights, software and hardware requirements. The data collection method used in this research is literature study and observation. Literature studies are obtained from studying books or written materials such as journals that have to do with the research being carried out. Observation, namely data collection techniques carried out through field observations of the research object. In addition, data about the products to be sold is obtained from direct interviews with sources. The things that must be analyzed against the needs of this system are: First, how to manage category data and item data. Second, how to verify orders made by users. Third, how to manage order data and delivery data. Fourth, the process of making reports System Design At this stage, software and website design are carried out according to the needs obtained from the needs analysis. 
The system design uses the Unified Modeling Language in the form of use case diagrams and activity diagrams. Fig. 2. Use Case Diagram. Based on the use case diagram in Figure 2, there are two actors involved in the system with different access rights, namely Admin and User. The Admin has full access rights to the system for processing user data and processing report data. Meanwhile, the User can only register and order goods. Fig. 3. Activity Diagram. The activity diagram in Figure 3 shows how the process of ordering honey bee products is carried out. After the user opens the application, the user can select the desired product, then register and place an order for the goods. The system will change the order status after the user confirms the payment. Implementation At this stage, the design that has been made is implemented. The database is implemented using the MySQL DBMS, and the program is coded using the PHP CodeIgniter framework. Testing Testing is done by testing the functionality of the software that has been implemented, using the blackbox testing method. Maintenance After the software has been tested, routine maintenance is carried out on it. Routine maintenance is carried out to prevent errors that would cause system disturbances. RESULT After the design and system testing stages were carried out, it was found that what was designed was in accordance with what was intended, and in the testing process the tested components functioned properly. The system process runs as follows: the user opens the application and selects an item; the system accepts the user's choice and automatically adds it to the user's shopping cart; the system then calculates the total purchase automatically. The user receives the transaction total, completes the transaction data, and pays the calculated amount; after the user makes the payment, the system automatically confirms it. The store admin then processes the delivery of the goods. In the implementation of e-commerce to market honey, there is a main menu where users can see the benefits of honey and browse the various honey products. After the user sees the various honey products on the website and wants to buy one, the user can immediately choose the honey as desired. After the honey is selected, the website displays the details of the selected product; the buyer can then process the purchase by viewing the basket and clicking checkout. The buyer is then asked to fill in personal data and choose a payment method, after which the buyer can make a payment according to the previously selected method. After making the payment, the buyer can confirm the payment. The admin activities are inputting category and item data, viewing customer data, viewing orders and changing the order status once the customer has made a payment; the admin can also view reports, namely stock of goods, orders and deliveries. The class diagram in Figure 4 shows the schema of the relationships between tables in the e-commerce system that was built; several tables have relationships with other tables. After ordering the product, the buyer must register by entering personal data, so that the system can confirm the order made by the user. In Figure 7, the user must review in detail the personal identity and the products ordered before making a payment. This page also specifies the desired shipping company and the chosen package type.
In Figure 8, the system has confirmed the payment made by the user so that the goods can be delivered through the previously chosen delivery company. Testing with the blackbox testing method is carried out by running several test scenarios against the menu and button functionality. The expected results are then compared with the actual results; if the expected results match the actual results, the test is categorized as successful. DISCUSSIONS Based on the results of the testing and implementation described above, blackbox testing of the system's functionality on the Fitorajo Bee Farm e-commerce web went well and no errors were found. This produced an e-commerce application with the interface described above. All of the interfaces described above are original images of the website created by the author. All interfaces are made as simple as possible to make it easier for potential buyers to use the e-commerce website. Stock management is also convenient for the admin, because the e-commerce website uses the CodeIgniter framework. The admin can see all order reports after a buyer makes a transaction. The admin can also update the buyer's order status so that buyers can see the status of their orders in real time. The CodeIgniter framework can be applied to the web-based Fitorajo Bee Farm e-commerce system. The use of CodeIgniter is very helpful in writing program code because the MVC concept in the framework makes the code more structured and shortens the time of the code-writing process. CONCLUSION This research has succeeded in building marketing media for Fitorajo Bee Farm honey bee products by implementing web-based e-commerce. The results showed that the CodeIgniter framework and the MySQL DBMS can be applied to build web-based e-commerce marketing media for honey bee products.
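As an illustration of the ordering flow described in the results above, here is a minimal sketch of the order life cycle (cart total, payment confirmation, shipping). It is written in Python purely for illustration; the system described in the paper is implemented in PHP with the CodeIgniter framework and MySQL, and all names, states and prices below are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical order states mirroring the flow in the paper: the buyer checks
# out, confirms payment, and the admin then ships the goods.
PENDING, PAID, SHIPPED = "pending", "paid", "shipped"

@dataclass
class OrderItem:
    name: str
    unit_price: int   # price in rupiah (illustrative)
    quantity: int

@dataclass
class Order:
    items: list = field(default_factory=list)
    status: str = PENDING

    def total(self) -> int:
        # The system computes the purchase total automatically from the cart.
        return sum(i.unit_price * i.quantity for i in self.items)

    def confirm_payment(self) -> None:
        # After the buyer confirms payment, the order status changes and the
        # buyer can track it in real time.
        if self.status == PENDING:
            self.status = PAID

    def ship(self) -> None:
        # The admin updates the status once the goods are handed to the courier.
        if self.status == PAID:
            self.status = SHIPPED

# Example usage
order = Order(items=[OrderItem("Raw forest honey 250 ml", 85_000, 2)])
print(order.total())        # 170000
order.confirm_payment()
order.ship()
print(order.status)         # shipped
```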
Disentangling by Factorising We define and address the problem of unsupervised learning of disentangled representations on data generated from independent factors of variation. We propose FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions. We show that it improves upon β-VAE by providing a better trade-off between disentanglement and reconstruction quality. Moreover, we highlight the problems of a commonly used disentanglement metric and introduce a new metric that does not suffer from them. Introduction Learning interpretable representations of data that expose semantic meaning has important consequences for artificial intelligence. Such representations are useful not only for standard downstream tasks such as supervised learning and reinforcement learning, but also for tasks such as transfer learning and zero-shot learning where humans excel but machines struggle (Lake et al., 2016). There have been multiple efforts in the deep learning community towards learning factors of variation in the data, commonly referred to as learning a disentangled representation. While there is no canonical definition for this term, we adopt the one due to Bengio et al. (2013): a representation where a change in one dimension corresponds to a change in one factor of variation, while being relatively invariant to changes in other factors. In particular, we assume that the data has been generated from a fixed number of independent factors of variation (we discuss the limitations of this assumption in Section 4). We focus on image data, where the effect of factors of variation is easy to visualise. Using generative models has shown great promise in learning disentangled representations in images. Figure 1. Architecture of FactorVAE, a Variational Autoencoder (VAE) that encourages the code distribution to be factorial. The top row is a VAE with convolutional encoder and decoder, and the bottom row is an MLP classifier, the discriminator, that distinguishes whether the input was drawn from the marginal code distribution or the product of its marginals. Notably, semi-supervised approaches that require implicit or explicit knowledge about the true underlying factors of the data have excelled at disentangling (Kulkarni et al., 2015; Kingma et al., 2014; Reed et al., 2014; Siddharth et al., 2017; Hinton et al., 2011; Mathieu et al., 2016; Goroshin et al., 2015; Hsu et al., 2017; Denton & Birodkar, 2017). However, ideally we would like to learn these in an unsupervised manner, for the following reasons: 1. Humans are able to learn factors of variation unsupervised (Perry et al., 2010). 2. Labels are costly, as obtaining them requires a human in the loop. 3. Labels assigned by humans might be inconsistent or leave out the factors that are difficult for humans to identify. β-VAE (Higgins et al., 2016) is a popular method for unsupervised disentangling based on the Variational Autoencoder (VAE) framework (Kingma & Welling, 2014; Rezende et al., 2014) for generative modelling. It uses a modified version of the VAE objective with a larger weight (β > 1) on the KL divergence between the variational posterior and the prior, and has proven to be an effective and stable method for disentangling.
One drawback of β-VAE is that reconstruction quality (compared to a VAE) must be sacrificed in order to obtain better disentangling. The goal of our work is to obtain a better trade-off between disentanglement and reconstruction, allowing us to achieve better disentanglement without degrading reconstruction quality. In this work, we analyse the source of this trade-off and propose FactorVAE, which augments the VAE objective with a penalty that encourages the marginal distribution of representations to be factorial without substantially affecting the quality of reconstructions. This penalty is expressed as a KL divergence between this marginal distribution and the product of its marginals, and is optimised using a discriminator network following the divergence minimisation view of GANs (Nowozin et al., 2016; Mohamed & Lakshminarayanan, 2016). Our experimental results show that this approach achieves better disentanglement than β-VAE for the same reconstruction quality. We also point out the weaknesses in the disentangling metric of Higgins et al. (2016), and propose a new metric that addresses these shortcomings. A popular alternative to β-VAE is InfoGAN (Chen et al., 2016), which is based on the Generative Adversarial Net (GAN) framework (Goodfellow et al., 2014) for generative modelling. InfoGAN learns disentangled representations by rewarding the mutual information between the observations and a subset of latents. However, at least in part due to its training stability issues (Higgins et al., 2016), there has been little empirical comparison between VAE-based methods and InfoGAN. Taking advantage of recent developments in the GAN literature that help stabilise training, we include InfoWGAN-GP, a version of InfoGAN that uses the Wasserstein distance (Arjovsky et al., 2017) and gradient penalty (Gulrajani et al., 2017), in our experimental evaluation. In summary, we make the following contributions: 1) We introduce FactorVAE, a method for disentangling that gives higher disentanglement scores than β-VAE for the same reconstruction quality. 2) We identify the weaknesses of the disentanglement metric of Higgins et al. (2016) and propose a more robust alternative. 3) We give quantitative comparisons of FactorVAE and β-VAE against InfoGAN's WGAN-GP counterpart for disentanglement. Trade-off between Disentanglement and Reconstruction in β-VAE We motivate our approach by analysing where the disentanglement and reconstruction trade-off arises in the β-VAE objective. First, we introduce the notation and architecture of our VAE framework. We assume that observations x^(i) ∈ X, i = 1, ..., N, are generated by combining K underlying factors f = (f_1, ..., f_K). These observations are modelled using a real-valued latent/code vector z ∈ R^d, interpreted as the representation of the data. The generative model is defined by the standard Gaussian prior p(z) = N(0, I), intentionally chosen to be a factorised distribution, and the decoder p_θ(x|z) parameterised by a neural net. The variational posterior for an observation is q_θ(z|x) = ∏_{j=1}^d N(z_j | µ_j(x), σ²_j(x)), with the means and variances produced by the encoder, also parameterised by a neural net. (In the rest of the paper we omit the dependence of p and q on their parameters θ for notational convenience.) The variational posterior can be seen as the distribution of the representation corresponding to the data point x. The distribution of representations for the entire data set is then given by q(z) = E_{p_data(x)}[q(z|x)] = (1/N) Σ_{i=1}^N q(z|x^(i)), which is known as the marginal posterior or aggregate posterior, where p_data is the empirical data distribution.
A disentangled representation would have each z_j correspond to precisely one underlying factor f_k. Since we assume that these factors vary independently, we wish for a factorial distribution q(z) = ∏_{j=1}^d q(z_j). The β-VAE objective, (1/N) Σ_{i=1}^N [ E_{q(z|x^(i))}[log p(x^(i)|z)] − β KL(q(z|x^(i)) || p(z)) ], is a variational lower bound on E_{p_data(x)}[log p(x)] for β ≥ 1, reducing to the VAE objective for β = 1. Its first term can be interpreted as the negative reconstruction error, and the second term as the complexity penalty that acts as a regulariser. We may further break down this KL term as E_{p_data(x)}[KL(q(z|x) || p(z))] = I(x; z) + KL(q(z) || p(z)) (Hoffman & Johnson, 2016; Makhzani & Frey, 2017), where I(x; z) is the mutual information between x and z under the joint distribution p_data(x)q(z|x). See Appendix C for the derivation. Penalising the KL(q(z) || p(z)) term pushes q(z) towards the factorial prior p(z), encouraging independence in the dimensions of z and thus disentangling. Penalising I(x; z), on the other hand, reduces the amount of information about x stored in z, which can lead to poor reconstructions for high values of β (Makhzani & Frey, 2017). Thus making β larger than 1, penalising both terms more, leads to better disentanglement but reduces reconstruction quality. When this reduction is severe, there is insufficient information about the observation in the latents, making it impossible to recover the true factors. Therefore there exists a value of β > 1 that gives the highest disentanglement, but results in a higher reconstruction error than a VAE. Total Correlation Penalty and FactorVAE Penalising I(x; z) more than a VAE does might be neither necessary nor desirable for disentangling. For example, InfoGAN disentangles by encouraging I(x; c) to be high, where c is a subset of the latent variables z. (Note, however, that I(x; z) in β-VAE is defined under the joint distribution of the data and its encoding distribution p_data(x)q(z|x), whereas I(x; c) in InfoGAN is defined on the joint distribution of the prior on c and the decoding distribution p(c)p(x|c).) Hence we motivate FactorVAE by augmenting the VAE objective with a term that directly encourages independence in the code distribution, arriving at the following objective: (1/N) Σ_{i=1}^N [ E_{q(z|x^(i))}[log p(x^(i)|z)] − KL(q(z|x^(i)) || p(z)) ] − γ KL(q(z) || q̄(z)), where q̄(z) := ∏_{j=1}^d q(z_j). Note that this is also a lower bound on the marginal log likelihood E_{p_data(x)}[log p(x)]. KL(q(z) || q̄(z)) is known as the Total Correlation (TC, Watanabe, 1960), a popular measure of dependence for multiple random variables. In our case this term is intractable, since both q(z) and q̄(z) involve mixtures with a large number of components, and the direct Monte Carlo estimate requires a pass through the entire data set for each q(z) evaluation. Hence we take an alternative approach for optimising this term. We start by observing that we can sample from q(z) efficiently by first choosing a datapoint x^(i) uniformly at random and then sampling from q(z|x^(i)). We can also sample from q̄(z) by generating d samples from q(z) and then ignoring all but one dimension for each sample. A more efficient alternative involves sampling a batch from q(z) and then randomly permuting across the batch for each latent dimension (see Alg. 1 and the sketch below). This is a standard trick used in the independence testing literature (Arcones & Gine, 1992), and as long as the batch is large enough, the distribution of these samples will closely approximate q̄(z).
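The following PyTorch-style sketch illustrates this permutation trick, together with the discriminator-based Total Correlation estimate introduced in the next paragraph. The two-logit convention (first logit for the "sampled from q(z)" class), the helper names and the training details are our own illustrative choices, not the authors' released implementation.

```python
import torch

def permute_dims(z: torch.Tensor) -> torch.Tensor:
    """Approximate samples from the product of marginals q_bar(z).

    For each latent dimension, independently permute the batch indices,
    which breaks dependencies across dimensions (Alg. 1 in the paper).
    """
    assert z.dim() == 2                       # (batch, latent_dim)
    batch, dim = z.size()
    permuted = torch.zeros_like(z)
    for j in range(dim):
        idx = torch.randperm(batch, device=z.device)
        permuted[:, j] = z[idx, j]
    return permuted

def tc_penalty(discriminator: torch.nn.Module, z: torch.Tensor) -> torch.Tensor:
    """Density-ratio estimate of the Total Correlation.

    With a 2-class discriminator whose softmax probability of class 0 is
    D(z) = "z came from q(z)", log D(z)/(1 - D(z)) is simply the difference
    of the two logits.  This is the term multiplied by gamma in the objective.
    """
    logits = discriminator(z)                 # shape (batch, 2)
    return (logits[:, 0] - logits[:, 1]).mean()

def discriminator_loss(discriminator, z, z_other_batch):
    """Cross-entropy loss: samples from q(z) (class 0) vs permuted samples (class 1)."""
    z_perm = permute_dims(z_other_batch).detach()
    logits_true = discriminator(z.detach())   # detach: only the discriminator is updated here
    logits_perm = discriminator(z_perm)
    zeros = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    ones = torch.ones(z_perm.size(0), dtype=torch.long, device=z.device)
    ce = torch.nn.functional.cross_entropy
    return 0.5 * (ce(logits_true, zeros) + ce(logits_perm, ones))
```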
Having access to samples from both distributions allows us to minimise their KL divergence using the density-ratio trick (Nguyen et al., 2010; Sugiyama et al., 2012), which involves training a classifier/discriminator to approximate the density ratio that arises in the KL term. Suppose we have a discriminator D (in our case an MLP) that outputs an estimate of the probability D(z) that its input is a sample from q(z) rather than from q̄(z). Then we have TC = KL(q(z) || q̄(z)) = E_{q(z)}[log(q(z)/q̄(z))] ≈ E_{q(z)}[log(D(z)/(1 − D(z)))]. We train the discriminator and the VAE jointly. In particular, the VAE parameters are updated using the objective in Eqn. (2), with the TC term replaced by the discriminator-based approximation from Eqn. (3). The discriminator is trained to classify between samples from q(z) and q̄(z), thus learning to approximate the density ratio needed for estimating the TC. See Alg. 2 for pseudocode of FactorVAE. [Algorithm 1: permute_dims. Algorithm 2: FactorVAE training, with inputs batch size m, latent dimension d, weight γ, and VAE/discriminator optimisers g, g_D; initialise the VAE and discriminator parameters θ, ψ and repeat the joint updates until convergence of the objective.] It is important to note that a low TC is necessary but not sufficient for meaningful disentangling. For example, when q(z|x) = p(z), TC = 0 but z carries no information about the data. Thus having a low TC is only meaningful when we can preserve information in the latents, which is why controlling for the reconstruction error is important. In the GAN literature, divergence minimisation is usually done between two distributions over the data space, which is often very high dimensional (e.g. images). As a result, the two distributions often have disjoint support, making training unstable, especially when the discriminator is strong. Hence it is necessary to use tricks to weaken the discriminator, such as instance noise (Sønderby et al., 2016), or to replace the discriminator with a critic, as in Wasserstein GANs (Arjovsky et al., 2017). In this work, we minimise a divergence between two distributions over the latent space (as in e.g. Mescheder et al., 2017), which is typically much lower dimensional, and the two distributions have overlapping support. We observe that training is stable for sufficiently large batch sizes (e.g. 64 worked well for d = 10), allowing us to use a strong discriminator. A New Metric for Disentanglement The definition of disentanglement we use in this paper, where a change in one dimension of the representation corresponds to a change in exactly one factor of variation, is clearly a simplistic one. It does not allow correlations among the factors or hierarchies over them. Thus this definition seems more suited to synthetic data with independent factors of variation than to most realistic data sets. However, as we will show below, robust disentanglement is not a fully solved problem even in this simple setting. One obstacle on the way to this first milestone is the absence of a sound quantitative metric for measuring disentanglement. A popular method of measuring disentanglement is to inspect latent traversals: visualising the change in reconstructions while traversing one dimension of the latent space at a time. Although latent traversals can be a useful indicator of when a model has failed to disentangle, the qualitative nature of this approach makes it unsuitable for comparing algorithms reliably. Doing this would require inspecting a multitude of latent traversals over multiple reference images, random seeds, and points during training. Having a human in the loop to assess the traversals is also too time-consuming and subjective.
Unfortunately, for data sets that do not have the ground truth factors of variation available, currently this is the only viable option for assessing disentanglement. Higgins et al. (2016) proposed a supervised metric that attempts to quantify disentanglement when the ground truth factors of a data set are given. The metric is the error rate of a linear classifier that is trained as follows. Choose a factor k; generate data with this factor fixed but all other factors varying randomly; obtain their representations (defined to be the mean of q(z|x)); take the absolute value of the pairwise differences of these representations. Then the mean of these statistics across the pairs gives one training input for the classifier, and the fixed factor index k is the corresponding training output (see top of Figure 2). So if the representations were perfectly disentangled, we would see zeros in the dimension of the training input that corresponds to the fixed factor of variation, and the classifier would learn to map the index of the zero value to the index of the factor. However this metric has several weaknesses. Firstly, it could be sensitive to hyperparameters of the linear classifier optimisation, such as the choice of the optimiser and its hyperparameters, weight initialisation, and the number of training iterations. Secondly, having a linear classifier is not so intuitive -we could get representations where each factor corresponds to a linear combination of dimensions instead of a single dimension. Finally and most importantly, the metric has a failure mode: it gives 100% accuracy even when only K − 1 factors out of K have been disentangled; to predict the remaining factor, the classifier simply learns to detect when all the values corresponding to the K − 1 factors are non-zero. An example of such a case is shown in Figure 3. To address these weaknesses, we propose a new disentanglement metric as follows. Choose a factor k; generate data with this factor fixed but all other factors varying randomly; obtain their representations; normalise each dimension by its empirical standard deviation over the full data (or a large enough random subset); take the empirical variance in each dimension 4 of these normalised representations. Then the index of the dimension with the lowest variance and the target index k provide one training input/output example for the classifier (see bottom of Figure 2). Thus if the representation is perfectly disentangled, the empirical variance in the dimension corresponding to the fixed factor will be 0. We normalise the representations so that the arg min is invariant to rescaling of the representations in each dimension. Since both inputs and outputs lie in a discrete space, the optimal classifier is the majority-vote classifier (see Appendix B for details), and the metric is the error rate of the classifier. The resulting classifier is a deterministic function of the training data, hence there are no optimisation hyperparameters to tune. We also believe that this metric is conceptually simpler and more natural than the previous one. Most importantly, it circumvents the failure mode of the earlier metric, since the classifier needs to see the lowest variance in a latent dimension for a given factor to classify it correctly. 
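A rough NumPy sketch of the metric just described is given below. Here `encode` (returning the mean representations of a batch) and `sample_fixed_factor` (returning a batch of observations with factor k fixed and all other factors random) are placeholders for a real encoder and ground-truth simulator, and the size of the normalisation subset is an arbitrary choice.

```python
import numpy as np

def disentanglement_metric(encode, sample_fixed_factor, num_factors, num_votes=800):
    """Error rate of the majority-vote classifier described above.

    encode(x)              -> (batch, d) array of mean representations (placeholder)
    sample_fixed_factor(k) -> batch of images with factor k fixed, others random
    """
    # Normalise each latent dimension by its empirical standard deviation,
    # estimated here on a large random subset of the data.
    subset = np.concatenate(
        [encode(sample_fixed_factor(np.random.randint(num_factors)))
         for _ in range(50)], axis=0)
    scale = subset.std(axis=0) + 1e-8

    # One training point per vote: (arg-min-variance dimension j, fixed factor k).
    d = subset.shape[1]
    counts = np.zeros((d, num_factors), dtype=int)
    for _ in range(num_votes):
        k = np.random.randint(num_factors)
        reps = encode(sample_fixed_factor(k)) / scale
        j = int(np.argmin(reps.var(axis=0)))
        counts[j, k] += 1

    # Majority-vote classifier: each dimension j predicts its most frequent factor.
    predicted = counts.argmax(axis=1)
    correct = sum(counts[j, predicted[j]] for j in range(d))
    return 1.0 - correct / num_votes        # error rate (lower is better)
```

Because the classifier is just a vote count, there is nothing to optimise and no hyperparameters beyond the number of evaluation points, which is the robustness property argued for above.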
We think developing a reliable unsupervised disentangling metric that does not use the ground truth factors is an important direction for future research, since unsupervised disentangling is precisely useful for the scenario where we do not have access to the ground truth factors. With this in mind, we believe that having a reliable supervised metric is still valuable as it can serve as a gold standard for evaluating unsupervised metrics. Related Work There are several recent works that use a discriminator to optimise a divergence to encourage independence in the latent codes. Adversarial Autoencoder (AAE, Makhzani et al., 2015) removes the I(x; z) term in the VAE objective and maximizes the negative reconstruction error minus KL(q(z)||p(z)) via the density-ratio trick, showing applications in semi-supervised classification and unsupervised clustering. This means that the AAE objective is not a lower bound on the log marginal likelihood. Although optimising a lower bound is not strictly necessary for disentangling, it does ensure that we have a valid generative model; having a generative model with disentangled latents has the benefit of being a single model that can be useful for various tasks e.g. planning for model-based RL, visual concept learning and semi-supervised learning, to name a few. In PixelGAN Autoencoders (Makhzani & Frey, 2017), the same objective is used to study the decomposition of information between the latent code and the decoder. The authors state that adding noise to the inputs of the encoder is crucial, which suggests that limiting the information that the code contains about the input is essential and that the I(x; z) term should not be dropped from the VAE objective. Brakel & Bengio (2017) also use a discriminator to penalise the Jensen-Shannon Divergence between the distribution of codes and the product of its marginals. However, they use the GAN loss with deterministic encoders and decoders and only explore their technique in the context of Independent Component Analysis source separation. Early works on unsupervised disentangling include (Schmidhuber, 1992) which attempts to disentangle codes in an autoencoder by penalising predictability of one latent dimension given the others and (Desjardins et al., 2012) where a variant of a Boltzmann Machine is used to disentangle two factors of variation in the data. More recently, Achille & Soatto (2018) have used a loss function that penalises TC in the context of supervised learning. They show that their approach can be extended to the VAE setting, but do not perform any experiments on disentangling to support the theory. In a concurrent work, Kumar et al. (2018) used moment matching in VAEs to penalise the covariance between the latent dimensions, but did not constrain the mean or higher moments. We provide the objectives used in these related methods and show experimental results on disentangling performance, including AAE, in Appendix F. There have been various works that use the notion of pre-dictability to quantify disentanglement, mostly predicting the value of ground truth factors f = (f 1 , . . . , f K ) from the latent code z. This dates back to Yang & Amari (1997) who learn a linear map from representations to factors in the context of linear ICA, and quantify how close this map is to a permutation matrix. More recently Eastwood & Williams (2018) have extended this idea to disentanglement by training a Lasso regressor to map z to f and using its trained weights to quantify disentanglement. 
Like other regressionbased approaches, this one introduces hyperparameters such as the optimiser and the Lasso penalty coefficient. The metric of Higgins et al. (2016) as well as the one we proposed, predict the factor k from the z of images with a fixed f k but f −k varying randomly. Schmidhuber (1992) quantifies predictability between the different dimensions of z, using a predictor that is trained to predict z j from z −j . Invariance and equivariance are frequently considered to be desirable properties of representations in the literature (Goodfellow et al., 2009;Kivinen & Williams, 2011;Lenc & Vedaldi, 2015). A representation is said to be invariant for a particular task if it does not change when nuisance factors of the data, that are irrelevant to the task, are changed. An equivariant representation changes in a stable and predictable manner when altering a factor of variation. A disentangled representation, in the sense used in the paper, is equivariant, since changing one factor of variation will change one dimension of a disentangled representation in a predictable manner. Given a task, it will be easy to obtain an invariant representation from the disentangled representation by ignoring the dimensions encoding the nuisance factors for the task (Cohen & Welling, 2014). Building on a preliminary version of this paper, (Chen et al., 2018) recently proposed a minibatch-based alternative to our density-ratio-trick-based method for estimating the Total Correlation and introduced an information-theoretic disentangling metric. Experiments We compare FactorVAE to β-VAE on the following data sets with i) known generative factors: 1) 2D Shapes ( From Figure 4, we see that FactorVAE gives much better disentanglement scores than VAEs (β = 1), while barely sacrificing reconstruction error, highlighting the disentangling effect of adding the Total Correlation penalty to the VAE objective. The best disentanglement scores for Factor-VAE are noticeably better than those for β-VAE given the same reconstruction error. This can be seen more clearly in Figure 5 where the best mean disentanglement of Fac-torVAE (γ = 40) is around 0.82, significantly higher than the one for β-VAE (β = 4), which is around 0.73, both with reconstruction error around 45. From Figure 6, we can see that both models are capable of finding x-position, y-position, and scale, but struggle to disentangle orientation and shape, β-VAE especially. For this data set, neither method can robustly capture shape, the discrete factor of variation 5 . As a sanity check, we also evaluated the correlation between our metric and the metric in Higgins et al. (2016) We have also examined how the discriminator's estimate of the Total Correlation (TC) behaves and the effect of γ on the true TC. From Figure 7, observe that the discriminator is consistently underestimating the true TC, also confirmed in (Rosca et al., 2018). However the true TC decreases throughout training, and a higher γ leads to lower TC, so the gradients obtained using the discriminator are sufficient for encouraging independence in the code distribution. We then evaluated InfoWGAN-GP, the counterpart of Info-GAN that uses Wasserstein distance and gradient penalty. See Appendix G for an overview. One advantage of Info-GAN is that the Monte Carlo estimate of its objective is differentiable with respect to its parameters even for discrete codes c, which makes gradient-based optimisation straightforward. 
In contrast, VAE-based methods that rely on the reparameterisation trick for gradient-based optimisation require z to be a reparameterisable continuous random variable, and alternative approaches require various variance reduction techniques for gradient estimation (Mnih & Rezende, 2016; Maddison et al., 2017). Thus we might expect Info(W)GAN(-GP) to show better disentangling in cases where some factors are discrete. Hence we use 4 continuous latents (one for each continuous factor) and one categorical latent of 3 categories (one for each shape). We tuned λ, the weight of the mutual information term in Info(W)GAN(-GP). However, from Figure 8 we can see that the disentanglement scores are disappointingly low. From the latent traversals in Figure 9, we can see that the model learns only the scale factor, and tries to put positional information in the discrete latent code, which is one reason for the low disentanglement score. Using 5 continuous codes and no categorical codes did not improve the disentanglement scores, however. InfoGAN with early stopping (before training instability occurs; see Appendix H) also gave similar results. The fact that some latent traversals give blank reconstructions indicates that the model does not generalise well to all parts of the domain of p(z). One reason for InfoWGAN-GP's poor performance on this data set could be that InfoGAN is sensitive to the generator and discriminator architecture, which is one thing we did not tune extensively. We use an architecture similar to the VAE-based approaches for 2D Shapes for a fair comparison, but have also tried a bigger architecture, which gave similar results (see Appendix H). If architecture search is indeed important, this would be a weakness of InfoGAN relative to the VAE-based methods. Figure 10. Same as Figure 5 for 3D Shapes data. We now show results on the 3D Shapes data, which is a more complex data set of 3D scenes with additional features such as shadows and background (sky). We train both β-VAE and FactorVAE for 1M iterations. Figure 10 again shows that FactorVAE achieves much better disentanglement with barely any increase in reconstruction error compared to the VAE. Moreover, while the top mean disentanglement scores for FactorVAE and β-VAE are similar, the reconstruction error is lower for FactorVAE: 3515 (γ = 36) as compared to 3570 (β = 24). The latent traversals in Figure 11 show that both models are able to capture the factors of variation in the best-case scenario. Looking at latent traversals across many random seeds, however, makes it evident that both models struggled to disentangle the factors for shape and scale. To show that FactorVAE also gives a valid generative model for both 2D Shapes and 3D Shapes, we present the log marginal likelihood evaluated on the entire data set, together with samples from the generative model, in Appendix E. We also show results for β-VAE and FactorVAE experiments on the data sets with unknown generative factors, namely 3D Chairs, 3D Faces, and CelebA. Note that inspecting latent traversals is the only evaluation method possible here; the corresponding traversals are shown in Figure 12. Conclusion and Discussion We have introduced FactorVAE, a novel method for disentangling that achieves better disentanglement scores than β-VAE on the 2D Shapes and 3D Shapes data sets for the same reconstruction quality. Moreover, we have identified weaknesses of the commonly used disentanglement metric of Higgins et al.
(2016), and proposed an alternative metric that is conceptually simpler, is free of hyperparameters, and avoids the failure mode of the former. Finally, we have performed an experimental evaluation of disentangling for the VAE-based methods and InfoWGAN-GP, a more stable variant of InfoGAN, and identified its weaknesses relative to the VAE-based methods. One of the limitations of our approach is that low Total Correlation is necessary but not sufficient for disentangling of independent factors of variation. For example, if all but one of the latent dimensions were to collapse to the prior, the TC would be 0 but the representation would not be disentangled. Our disentanglement metric also requires us to be able to generate samples holding one factor fixed, which may not always be possible, for example when our training set does not cover all possible combinations of factors. The metric is also unsuitable for data with nonindependent factors of variation. For future work, we would like to use discrete latent variables to model discrete factors of variation and investigate how to reliably capture combinations of discrete and continuous factors using discrete and continuous latents. Kumar, A., Sattigeri, P., and Balakrishnan, A. Variational inference of disentangled latent concepts from unlabeled observations. In ICLR, 2018. (2016) across 10 random seeds for varying L and number of Adagrad optimiser iterations (batch size 10). The number of points used for evaluation after optimisation is fixed to 800. These were all evaluated on a fixed, randomly chosen β-VAE model that was trained to convergence on the 2D Shapes data. Figure 17. Mean and standard deviation of our metric across 10 random seeds for varying L and number of points used for evaluation. These were all evaluated on a fixed, randomly chosen β-VAE model that was trained to convergence on the 2D Shapes data. empirical variance: Note that this is equal to empirical variance for continuous variables when d( Remark. Note that this decomposition is equivalent to that in Hoffman & Johnson (2016), written as follows: Proof. D. Using a Batch Estimate of q(z) for Estimating TC We have also tried using a batch estimate for the density q(z), thus optimising this estimate of the TC directly instead of having a discriminator and using the density ratio trick. In other words, we tried q(z) ≈q(z) = 1 |B| i∈B q(z|x (i) ), and using the estimate: Disentangling by Factorising Note that: for z (h) iid ∼ q(z). However while experimenting on 2D Shapes, we observed that the value of log q(z (h) ) becomes very small (negative with high absolute value) for latent dimension d ≥ 2 during training, becauseq(z) is not a good enough approximation to q(z) unless B is very big. As training progresses for the VAE, the variance of Gaussians q(z|x (i) ) becomes smaller and smaller, so they do not overlap too much in higher dimensions. Hence we get z (h) ∼ q(z) that land on the tails ofq(z) = 1 |B| i∈B q(z|x (i) ), giving worryingly small values of logq(z (h) ). On the other hand jq (z (h) j ), a mixture of |B| d Gaussians hence of much higher entropy, gives much more stable values of log jq (z (h) j ). From Figure 18, we can see that even with B as big as 10,000, we get negative values for the estimate of TC, which is a KL divergence and hence should be non-negative, hence this method of using a batch estimate for q(z) does not work. A fix is to use samples fromq(z) instead of q(z), but this seemed to give a similar reconstruction-disentanglement trade-off to β-VAE. 
Very recently, work from (Chen et al., 2018) has shown that disentangling can be improved by using samples fromq(z). E. Log Marginal Likelihood and Samples We give the log marginal likelihood of each of the best performing β-VAE and FactorVAE models (in terms of disentanglement) for both the 2D Shapes and 3D Shapes data sets along with samples from the generative model. Since the log marginal likelihood is intractable, we report the Importance-Weighted Autoencoder (IWAE) bound with 5000 particles, in line with standard practice in the generative modelling literature. In Figures 19 and 20, the samples for FactorVAE are arguably more representative of the data set than those of β-VAE. For example β-VAE has occasional samples with two separate shapes in the same image ( Figure 19). The log marginal likelihood for the best performing β-VAE (β = 4) is -46.1, whereas for FactorVAE it is -51.9 (γ = 35) (a randomly chosen VAE run gives -43.3). So on 2D Shapes, FactorVAE gives better samples but worse log marginal likelihood. In general, if one seeks to learn a generative model with a disentangled latent space, it would make sense to choose the model with the lowest value of β or γ among those with similarly high disentanglement performance. F. Losses and Experiments for other related Methods The Adversarial Autoencoder (AAE) (Makhzani et al., 2015) uses the following objective utilising the density ratio trick to estimate the KL term. Information Dropout (Achille & Soatto, 2018) uses the objective (8) The following objective is also considered in the paper but is dismissed as intractable: Note that it is similar to the FactorVAE objective (which has β = 1), but with p(z) in the first KL term replaced with q(z). (Kumar et al., 2018) uses the VAE objective with an additional penalty on how much the covariance of q(z) deviates from the identity matrix, either using the law of total covariance Cov q(z) DIP-VAE where µ(x) = mean(q(z|x)), or directly (DIP-VAE II): One could argue that during training of FactorVAE, j q(z j ) will be similar to p(z), assuming the prior is factorial, due to the KL(q(z|x)||p(z)) term in the objective. Hence we also investigate a modified FactorVAE objective that replaces j q(z j ) with p(z): However as shown in Figure 40 of Appendix I, the histograms of samples from the marginals are clearly quite different from the the prior for FactorVAE. Moreover we show experimental results for AAE (adding a γ coefficient in front of the KL(q(z)||p(z)) term of the objective and tuning it) and the variant of FactorVAE (Eqn. (12)) on the 2D Shapes data. From Figure 23, we see that the disentanglement performance for both are somewhat lower than that for FactorVAE. This difference could be explained as a benefit of directly encouraging q(z) to be factorised (FactorVAE) instead of encouraging it to approach an arbitrarily chosen factorised prior p(z) = N (0, I) (AAE, Eqn. (12)). Information Dropout and DIP-VAE did not have enough experimental details in the paper nor publicly available code to have their results reproduced and compared against. G. InfoGAN and InfoWGAN-GP We give an overview of InfoGAN (Chen et al., 2016) and InfoWGAN-GP, its counterpart using Wasserstein distance and gradient penalty. InfoGAN uses latents z = (c, ) where c models semantically meaningful codes and models incompressible noise. The generative model is defined by a generator G with the process: c ∼ p(c), ∼ p( ), z = (c, ), x = G(z). i.e. p(z) = p(c)p( ). 
GANs are defined as a minimax game on some objective V(D, G), where D is either a discriminator (e.g. for the original GAN (Goodfellow et al., 2014)) that outputs log probabilities for binary classification, or a critic (e.g. for Wasserstein-GAN (Arjovsky et al., 2017)) that outputs a real-valued scalar. InfoGAN defines an extra encoding distribution Q(c|x) that is used to define an extra penalty, L(G, Q), that is added to the GAN objective. Hence InfoGAN is the minimax game on the parameters of the neural nets D, G, Q given in Eq. (14). L can be interpreted as a variational lower bound to I(c; G(c, ε)), with equality at Q = arg min_Q V_I(D, G, Q); i.e. L encourages the codes to be more informative about the image. From the definition of L, it can also be seen as the reconstruction error of the codes in the latent space. The original InfoGAN defines V(D, G) as in Eq. (15), the same as the original GAN objective, where D outputs log probabilities. However, as we will show in Appendix H, this has known instability issues in training. So it is natural to try replacing it with the more stable WGAN-GP (Gulrajani et al., 2017) objective, with a new x̂ sampled for each iteration of optimisation. Thus we obtain InfoWGAN-GP.

H. Empirical Study of InfoGAN and InfoWGAN-GP

To begin with, we implemented InfoGAN and InfoWGAN-GP on MNIST using the hyperparameters given in Chen et al. (2016) to better understand their behaviour, using 1 categorical code with 10 categories, 2 continuous codes, and 62 noise variables. We use priors p(c_j) = U[−1, 1] for the continuous codes, p(c_j) = 1/J for categorical codes with J categories, and p(ε_j) = N(0, 1) for the noise variables. For the 2D Shapes data we use 1 categorical code with 3 categories (J = 3), 4 continuous codes, and 5 noise variables. The number of noise variables did not seem to have a noticeable effect on the experimental results. We use the Adam optimiser (Kingma & Ba, 2015) with β_1 = 0.5, β_2 = 0.999, and learning rate 10^-3 for the generator updates and 10^-4 for the discriminator updates. The detailed Discriminator/Encoder/Generator architectures are given in Tables 4 and 5. The architecture for InfoWGAN-GP is the same as for InfoGAN, except that we use no Batch Normalisation (batchnorm) (Ioffe & Szegedy, 2015) for the convolutions in the discriminator, and replace batchnorm with Layer Normalisation (Ba et al., 2016) in the fully connected layer that follows the convolutions, as recommended in Gulrajani et al. (2017). We use gradient penalty coefficient η = 10, again as recommended. We firstly observe that for all runs we eventually get a degenerate discriminator that predicts all inputs to be real, as in Figure 24. This is the well-known instability issue of the original GAN. We tried using a smaller learning rate for the discriminator, and although this delays the degenerate behaviour it does not prevent it. Hence early stopping seems crucial, and all results shown below are from well before the degenerate behaviour occurs. Chen et al. (2016) claim that the categorical code learns digit class (a discrete factor of variation) and that the continuous codes learn azimuth and width, but when plotting latent traversals for each run we observed that this is inconsistent. We show five randomly chosen runs in Figure 25. The digit class changes in the continuous code traversals and there are overlapping digits in the categorical code traversal. Similar results hold for InfoWGAN-GP in Figure 36.
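For reference, the standard published forms of the objectives discussed above are reproduced below (following Goodfellow et al. (2014), Chen et al. (2016), and Gulrajani et al. (2017); this is a sketch in the usual notation and may differ in minor details from the paper's Eqs. (14)-(16)). Here λ is the mutual-information weight, η the gradient-penalty coefficient, and x̂ = u x + (1 − u) G(z) with u ∼ U[0, 1] the random interpolate used in the gradient penalty; the critic maximises and the generator minimises the WGAN-GP value.

\min_{G,Q}\max_{D}\; V(D,G) - \lambda\, L(G,Q)

V_{\mathrm{GAN}}(D,G) = \mathbb{E}_{x\sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z\sim p(z)}[\log(1 - D(G(z)))]

V_{\mathrm{WGAN\text{-}GP}}(D,G) = \mathbb{E}_{x\sim p_{\mathrm{data}}}[D(x)] - \mathbb{E}_{z\sim p(z)}[D(G(z))] - \eta\,\mathbb{E}_{\hat{x}}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big]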
We also tried visualising the reconstructions: given an image, we push the image through the encoder to obtain latent codes c, fix this c and vary the noise to generate multiple reconstructions for the same image. This is to check the extent to which the noise can affect the generation. We can see in Figure 26 that digit class often changes when varying the noise. Furthermore, we investigated the sensitivity of the model to the number of latent codes. We show latent traversals using three continuous codes instead of two in Figure 27. It is evident that the model tries to put more digit class information into the continuous traversals. So the number of codes is an important hyperparameter to tune, whereas VAE methods are less sensitive to the choice of the number of codes since they can prune out unnecessary latents by collapsing q(z_j|x) to the prior p(z_j). We also tried varying the number of categories for the categorical code. Using 2 categories, we see from Figure 28 that the model tries to put much more information about digit class into the continuous latents, as expected. Moreover, from Figure 30, we can see that the noise variables also have more information about the digit class. However, when we use 20 categories, we see that the model still puts information about the digit class in the continuous latents, although from Figure 31 we see that the noise variables contain less semantically meaningful information. Using InfoWGAN-GP solves the degeneracy issue and makes training more stable (see Figure 33), but we observed that the other problems persisted (see e.g. Figure 36). For 2D Shapes, we also tried using a bigger architecture for InfoWGAN-GP that is used for a data set of similar dimensions (Chairs data set) in Chen et al. (2016); see Table 6 for the run with the best disentanglement score (λ = 0.6). However, as can be seen in Figure 34, this did not improve disentanglement scores, yet the latent traversals look slightly more realistic (Figure 35). In summary, InfoWGAN-GP can help prevent the instabilities in training faced by InfoGAN, but it does not help overcome the following weaknesses compared to VAE-based methods: 1) Disentangling performance is sensitive to the number of code latents. 2) More often than not, the noise variables contain semantically meaningful information. 3) The model does not always generalise well across the domain of p(z).

I. Further Experimental Results

From Figure 37, we see that higher values of γ in FactorVAE lead to a lower discriminator accuracy. This is as expected, since a higher γ encourages q(z) and Π_j q(z_j) to be closer together, hence a lower accuracy for the discriminator when classifying samples from the two distributions. We also show histograms of q(z_j) for each j in β-VAE and FactorVAE for different values of β and γ at the end of training on 2D Shapes in Figure 40. We can see that the marginals of FactorVAE are quite different from the prior, which could be a reason that the variant of FactorVAE using the objective given by Eqn. (12) leads to different results to FactorVAE. For FactorVAE, the model is able to focus on factorising q(z) instead of pushing it towards some arbitrarily specified prior p(z).

Figure 39. Same as Figure 12 but for CelebA.
Implementing measurement error models in a likelihood-based framework for estimation, identifiability analysis, and prediction in the life sciences

Throughout the life sciences we routinely seek to interpret measurements and observations using parameterised mechanistic mathematical models. A fundamental and often overlooked choice in this approach involves relating the solution of a mathematical model with noisy and incomplete measurement data. This is often achieved by assuming that the data are noisy measurements of the solution of a deterministic mathematical model, and that measurement errors are additive and normally distributed. While this assumption of additive Gaussian noise is extremely common and simple to implement and interpret, it is often unjustified and can lead to poor parameter estimates and non-physical predictions. One way to overcome this challenge is to implement a different measurement error model. In this review, we demonstrate how to implement a range of measurement error models in a likelihood-based framework for estimation, identifiability analysis, and prediction. We focus our implementation within a frequentist profile likelihood-based framework, but our approach is directly relevant to other approaches including sampling-based Bayesian methods. Case studies, motivated by simple caricature models routinely used in the systems biology and mathematical biology literature, illustrate how the same ideas apply to different types of mathematical models. Open-source Julia code to reproduce results is available on GitHub.

Introduction

Mechanistic mathematical modelling and statistical uncertainty quantification are powerful tools for interpreting noisy incomplete data and facilitate decision making across a wide range of applications in the life sciences. Interpreting such data using mathematical models involves many different types of modelling choices, each of which can impact results and their interpretation. One of the simplest examples of connecting a mathematical model to data involves estimating a best-fit straight line using linear regression and the method of ordinary least squares [1][2][3][4]. In this example, the mathematical model is chosen to be a straight line, y = mx + c, and the noisy data are assumed to be normally distributed with zero mean and constant positive variance about the true straight line. This assumption of additive Gaussian noise is a modelling choice that we refer to as an additive Gaussian measurement error model. Measurement error models can be used to describe uncertainties in the measurement process and random intrinsic variation. Other similar terminologies include noise model, error model, and observation error model, but here we will refer to this as a measurement error model. Here and throughout, we assume that measurement errors are uncorrelated, independent and identically distributed. Best-fit model parameters, m̂ and ĉ, are estimated by minimising the sum of the squared residuals, E(m, c) = Σ_{i=1}^{I} (y^o_i − y_i)^2, where the i-th residual, for i = 1, 2, . . . , I, is the distance in the y-direction between the i-th data point, y^o_i, and the corresponding point on the best-fit straight line, y_i. Hence the name method of least squares. The best-fit straight line is then the mathematical model evaluated at the best-fit model parameters, i.e. y = m̂x + ĉ, where m̂ and ĉ are the values of the slope and intercept that minimise E(m, c).
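As a minimal sketch of this introductory example (illustrative only; the function names and the use of the Optim package here are assumptions and are not taken from the paper's GitHub repository), the straight line and the noise parameter can be estimated jointly by maximising the likelihood under the additive Gaussian measurement error model:

using Distributions, Optim

# log-likelihood of (m, c, σ) for data (x, yobs) under y^o_i ~ Normal(m*x_i + c, σ^2)
function loglik(θ, x, yobs)
    m, c, σ = θ
    σ > 0 || return -Inf                      # keep the noise standard deviation positive
    return sum(logpdf.(Normal.(m .* x .+ c, σ), yobs))
end

x = collect(0.0:0.5:5.0)
yobs = 2.0 .* x .+ 1.0 .+ 0.3 .* randn(length(x))    # synthetic data, true (m, c, σ) = (2.0, 1.0, 0.3)
res = optimize(θ -> -loglik(θ, x, yobs), [1.0, 0.0, 1.0], NelderMead())
m̂, ĉ, σ̂ = Optim.minimizer(res)                       # maximum likelihood estimates

For fixed σ, maximising this likelihood is equivalent to the least-squares problem described above, and the same pattern of numerically maximising a log-likelihood with Nelder-Mead is what the profile likelihood computations later in the review rely on.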
Uncertainty in this example can be captured through the use of confidence intervals for model parameters, a confidence interval for the straight line based on the uncertainty in the model parameters, and a prediction interval for future observations [1][2][3][4]. In this review we present a general framework extending these concepts to mechanistic mathematical models, in the form of systems of ordinary differential equations (ODEs) and systems of partial differential equations (PDEs), that are often considered in the systems biology literature and the mathematical biology literature, respectively. In particular, our primary focus is on the fundamental question of how to connect the output of a mathematical model to data using a variety of measurement error models. The additive Gaussian measurement error model is ubiquitous and simple to interpret for mechanistic mathematical models, and often relates to estimating a best-fit model solution using nonlinear regression and a least-squares estimation problem [5,6]. However, in practice the assumption of additive Gaussian noise is often unjustified and, as we demonstrate, this can have important consequences because this assumption can lead to poor parameter estimates and non-physical predictions. Furthermore, even when the additive Gaussian error model is a reasonable choice it may not always be the most appropriate; for example, multiplicative noise models are often thought to be more relevant to problems in some parts of the systems biology literature [7][8][9][10][11][12][13]. One approach to tackle this challenge is to implement a different measurement error model, and throughout we work within a likelihood-based framework. The likelihood function, L(θ | D), is related to the probability of observing data D as a function of the parameters θ [39]. In this setting the best-fit model solution corresponds to the output of the mathematical model simulated at the model parameters which are found to be 'best' in the sense of those parameters that maximise L(θ | D).

Figure 1: Implementing a variety of measurement error models in a profile likelihood-based framework for parameter estimation, identifiability analysis, and prediction. Results analyse Eq (17), the additive Gaussian measurement error model, known model parameters θ = (r_1, r_2, σ_N) = (1.0, 0.5, 4.0), fixed initial conditions (c_1(0), c_2(0)) = (100.0, 25.0), and observed data at sixteen equally-spaced time points from t = 0.0 to t = 2.0. (a) Synthetic data (circles). Throughout, c_1(t) (green) and c_2(t) (magenta). (b) The framework is applicable to a range of mathematical models and measurement error models. (c) Mathematical model simulated with the MLE θ̂ = (r_1, r_2, σ_N) = (1.03, 0.51, 4.18) (solid line). Inset of (c): residuals ê_i = y^o_i − y_i(θ̂) with time, t. (d) Residual analysis can take many forms; results show a normal quantile-quantile plot of residuals. (e-g) Profile likelihoods (blue) shown for (e) r_1, (f) r_2, and (g) σ_N with MLE (vertical red) and an approximate 95% confidence interval threshold (horizontal black-dashed). Predictions in the form of (h-o) confidence sets for the model solution and (p-w) Bonferroni correction-based confidence sets for data realisations. (h-k, p-s) show the mathematical model simulated with the MLE (solid), synthetic data (circles), and confidence sets (shaded regions). (l-o, t-w) To examine the confidence sets in detail we show the difference between the respective confidence sets and the mathematical model simulated with the MLE. Results correspond to parameters in the order: r_1, r_2, σ_N, and θ.
Parameters can be used to describe the mathematical model, such as m and c in the straight line example, and as well as describing the noise, such as the variance σ 2 N in the additive Gaussian measurement error model. In this work we estimate both mathematical model parameters and statistical noise parameters simultaneously. Comparing the best-fit model solution with the data, and analysing residuals helps us to understand whether modelling choices are appropriate (Fig 1d). Techniques to analyse standard additive residuals are reviewed in [5,6]. Practical parameter identifiability. While point estimates of best-fit model parameters are insightful, we often seek to understand how well parameters can be identified given a finite set of noisy incomplete data [15,29,38,40]. This question of practical parameter identifiability, and the subsequent components of the framework, can be explored using frequentist [15,29,39,40] or Bayesian methods [41][42][43][44][45][46]. While both approaches are generally interested in uncertainty quantification, we choose to work with a frequentist profile likelihood-based method that employs numerical optimisation procedures [15,29,39,40,[47][48][49][50]. The optimisation procedures tend to be more computationally efficient than sampling-based Bayesian methods for problems considered in this study [40,51,52]. While working with a full likelihood-based approach is relatively straightforward for models with a small number of parameters, this approach becomes computationally challenging for more complicated models with many parameters. By using a profile likelihood-based method we can target individual parameters of interest, explore their practical identifiability, and form approximate confidence intervals (Fig 1e-g) [39]. Prediction. Given a set of estimated model parameters, together with an estimate of the uncertainty in our estimates, it is natural to seek to understand how uncertainty in model parameters impacts predictions of model solutions (mathematical model trajectories) and data realisations (unobserved measurements). This is important because practitioners are most likely to be interested in understanding the variability in predictions rather than variability in parameter estimates. In this framework we show that using parameter estimates to generate predictions is a powerful tool to assess the appropriateness of modelling choices and to interpret results. Predictions in the form of profile-wise confidence sets for model solutions are introduced in [40,52,53] (Fig 1h-o). These methods are simpler to implement and interpret in comparison to previous prediction methods that can involve additional constrained optimisation problems or integration based techniques [15,[54][55][56][57][58]]. An approach to form likelihood-based confidence sets for model realisations, where the model is composed of a mechanistic mathematical model and a measurement error model, was introduced in [40] and here we present concrete examples (Fig 1p-w). We also demonstrate how to assess statistical coverage properties that are often of interest, including curvewise and pointwise coverage properties for predictions, and make comparisons to a gold-standard full likelihood-based approach [40]. This review is structured as follows. In §2, we detail how to implement different measurement error models for parameter estimation, identifiability analysis, and prediction using profile likelihoodbased techniques. 
In §3, we demonstrate the generality of the framework by exploring a variety of measurement error models using illustrative case studies motivated by systems biology-type models and mathematical biology-type models. In §4 we present an explicit example of how to evaluate statistical coverage properties. Supplementary material presents additional results including a comparison to a full likelihood-based approach [40]. To aid with understanding and reproducibility, all open source Julia code used to generate results is freely available on GitHub. 2 Parameter estimation, identifiability analysis, and prediction Here we detail a profile likelihood-based framework for parameter estimation, identifiability analysis, and prediction. Throughout, we assume that experimental measurements are noisy observations of a deterministic mechanistic mathematical model. This framework is very general as it applies to cases where measurement error models may be additive, multiplicative, discrete, or continuous. As illustrative examples, we explicitly discuss and implement additive Gaussian noise, multiplicative log-normal and Poisson noise models. Mechanistic mathematical models may take many forms, for example systems of ODEs, systems of PDEs, and systems of difference equations. We choose to work with simple models to focus on the implementation of the framework and to make this work of interest to the broadest possible audience, as opposed to focusing on the details of specific mathematical models that are likely to be of interest to a smaller community. Our hope is that by focusing on fundamental mathematical models and providing open source code that readers can adapt these ideas to suit specific models for their particular area of interest. Data We consider temporal data that are often reported in the systems biology literature and are often interpreted in terms of models of chemical reaction networks and gene regulatory networks, and spatio-temporal data that are often reported in mathematical biology literature and interpreted using reaction-diffusion models. Temporal data are recorded at specified times. Spatio-temporal data are recorded at specified times and spatial positions. We let y o i denote the i th experimental measurement at time t i and spatial position x i . The superscript 'o' is used to distinguish the observed data from mechanistic mathematical model predictions. The spatial position, x i , may be a scalar or vector, and is omitted for temporal data. We represent multiple measurements at the same time and spatial position using distinct subscript indices. Assuming I experimental measurements, we collect the individual noisy measurements into a vector y o 1:I , collect the observation times into a vector t 1:I , and, for spatio-temporal data, collect the spatial positions into a vector x 1:I . Mechanistic mathematical model We consider a variety of temporal and spatio-temporal mechanistic mathematical models. Temporal models in systems biology often take the form of systems of ODEs [14][15][16], where y(t) = y (1) (t), y (2) (t), . . . , y (n) (t) represents an n-dimensional vector of model solutions at time t, and θ M represents a vector of mathematical model parameters. Noise free mathematical model solutions are evaluated at each t i , denoted y i (θ M ) = y(t i ; θ M ), and collected into a vector Spatio-temporal models often take the form of systems of PDEs. 
In mathematical biology we often consider systems of advection-diffusion-reaction equations [17][18][19][20][21], where y(t, x) = y (1) (t, x), y (2) (t, x), . . . , y (n) (t, x) represents an n-dimensional vector of model solutions at time t and position x, and θ M represents a vector of mathematical model parameters. Noise free mathematical model solutions, evaluated at t i and x i are denoted y i (θ M ) = y(t i , x i ; θ M ), and collected into a vector y 1:I (θ M ). The framework is well-suited to consider natural extensions of Eq (2), for example additional mechanisms such as nonlinear diffusion or non-local diffusion or PDE models in higher dimensions or in different coordinate systems [19,20]. The framework is also wellsuited to consider many more mechanistic mathematical models, for example difference equations Measurement error models Measurement error models are a powerful tool to describe and interpret the relationship between experimental measurements, y o i , and noise free mathematical model solutions, y i (θ M ). We take the common approach and assume that experimental measurements are noisy observations of a deterministic mechanistic mathematical model. This often corresponds to uncorrelated, independent, and identically distributed additive errors or multiplicative errors, in which case measurement errors are of the form e i = y o i − y i (θ M ) or e i = y o i /y i (θ M ), respectively. Good agreement between the data and the solution of a mathematical model corresponds to e i = 0 for additive errors and e i = 1 for multiplicative noise. In practice, the true model solution y(θ M ) is unknown and we use a prediction of the best-fit model solution y(θ). Therefore, for additive errors we analyse standard additive residuals taking the formê i = y o i − y i (θ). While it is common to analyse multiplicative noise via additive residuals in log-transformed variables, i.e. log(y o i ) − log(y i (θ)) =ê i [10], here we take a more direct approach and analyse the ratioê i = y o i /y i (θ). Error models can take many forms, including discrete or continuous models, and are typically characterised by a vector of parameters θ E . The full model, comprising the mathematical model and measurement error model, is then characterised by θ = (θ M , θ E ). We will demonstrate that it is straightforward to implement a range of measurement error models using three illustrative examples. Additive Gaussian model The additive Gaussian model is ubiquitous, simple to interpret, and captures random errors and measurement uncertainties in a wide range of applications. Measurement errors are assumed to be additive, independent, and normally distributed with zero mean and constant variance, σ 2 N > 0. Therefore, experimental measurements, y o i , are assumed to be independent and normally distributed about the noise free model solution, y i (θ M ), Under this noise model the mean, median, and mode of the distribution of possible values of y o i | θ are identical and equal to y i (θ). The variance is σ 2 N and θ E = σ N . Using this error model to obtain a best-fit solution of the mathematical model to the data, in the form of a maximum likelihood estimate, reduces to a nonlinear least squares problem. However, this error model is not always appropriate. Data in systems and mathematical biology are often non-negative, for example chemical concentrations or population densities. 
Implementing the additive Gaussian error model for data close to zero can be problematic and lead to negative, physically unrealistic predictions, as we will explore later in several case studies.

Log-normal model

The log-normal model is employed to ensure non-negative and right-skewed errors in a range of biological applications [9][10][11][12][13]. This error model is multiplicative and we write y^o_i | θ = y_i(θ) η_i, where η_i ∼ LogNormal(0, σ_L^2) (Eq (4)). Here, θ_E = σ_L and the η_i are assumed to be independent. Eq (4) can also be written as y^o_i | θ ∼ LogNormal(log(y_i(θ)), σ_L^2). Key statistics for the distribution of possible values of y^o_i | θ include the mean y_i(θ) exp(σ_L^2/2), median y_i(θ), mode y_i(θ) exp(−σ_L^2), and variance (y_i(θ))^2 exp(σ_L^2)(exp(σ_L^2) − 1). In contrast to the additive Gaussian model, which has constant variability over time, with the log-normal model variability increases as y_i(θ) increases and variability vanishes as y_i(θ) → 0^+. The log-normal error model can also be written as y^o_i | θ = y_i(θ) exp(ε_i), where ε_i ∼ N(0, σ_L^2), and is equivalent to implementing an additive Gaussian error model for log-transformed experimental measurements and log-transformed noise free model solutions, i.e. log(y^o_i) | θ ∼ N(log(y_i(θ_M)), σ_L^2).

Poisson model

The Poisson model is commonly employed to analyse non-negative count data [40,59,60]. Unlike the previous two measurement error models, we do not introduce additional parameters to describe this error model, so θ = θ_M, and we write y^o_i | θ ∼ Poisson(y_i(θ)) (Eq (5)). The Poisson distribution in Eq (5) is a discrete distribution that is neither additive nor multiplicative. The model is only appropriate when observed data, y^o_i, are non-negative integers. However, there are no such technical restrictions on the output of the mathematical model, and y_i(θ) may take any non-negative value. When y_i(θ) = 0 we consider the limit of the Poisson distribution such that the only possible outcome is y^o_i = 0 [61]. Under the Poisson model, key statistics for the distribution of possible values of y^o_i | θ include the mean y_i(θ); the median, which lies between y_i(θ) − 1 and y_i(θ) + 1; the modes, which are y_i(θ) and y_i(θ) − 1 when y_i(θ) is a positive integer and ⌊y_i(θ)⌋ when y_i(θ) is a positive non-integer; and the variance y_i(θ) [62]. In contrast to the additive Gaussian model, which has approximately constant variability over time, with the Poisson model variability increases as y_i(θ) increases and variability vanishes as y_i(θ) → 0^+.

Parameter estimation

We perform parameter estimation for the full model that comprises two components: (i) a mechanistic mathematical model; and (ii) a measurement error model. We take a general approach and simultaneously estimate the full model parameters θ. This means that we estimate the mathematical model parameters, θ_M, and measurement error model parameters, θ_E, simultaneously. It is straightforward to consider special cases of this approach where a subset of the full model parameters θ may be pre-specified or assumed known, for example in cases where the measurement error model parameters θ_E can be pre-specified [42,51]. Taking a likelihood-based approach to parameter estimation, we use the log-likelihood, ℓ(θ | y^o_{1:I}) = Σ_{i=1}^{I} log φ(y^o_i; y_i(θ), θ), where φ(y^o_i; y_i(θ), θ) represents the probability density function related to the measurement error model. For the additive Gaussian error model φ(y^o_i; y_i(θ), θ) = φ̄(y^o_i; y_i(θ), σ_N^2(θ)), where φ̄(x; µ, σ^2) represents the Gaussian probability density function with mean µ and variance σ^2.
For the log-normal error model φ (y o i ; y i (θ), θ) =φ y o i ; log (y i (θ)), σ 2 L (θ) , whereφ (x; µ, σ) represents the probability density function of the Lognormal(µ, σ 2 ) distribution. For the Poisson error model, φ (y o i ; y i (θ), θ) = φ (y o i ; y i (θ)), whereφ(x; λ) represents the probability density function for the Poisson distribution with rate parameter λ. It is straightforward to compute the log-likelihood for a range of distributions using the loglikelihood function in Julia (Distributions package [63]). As illustrative examples we compute the log-likelihood for additive Gaussian, log-normal, and Poisson measurement error models using loglikelihood(Normal(y i (θ), σ 2 ), y o i ), loglikelihood(LogNormal(log(y i (θ)), σ 2 L ), y o i ), and loglikelihood(Poisson(y i (θ)), y o i ), respectively. To obtain a point-estimate of θ that gives the best match to the data, in the sense of the highest likelihood, we seek the maximum likelihood estimate (MLE), We estimateθ, subject to bound constraints, using numerical optimisation. There are many algorithms to perform numerical maximisation. We find that the Nelder-Mead local optimisation routine, with default stopping criteria, within the NLopt optimisation package in Julia performs well for the problems in this study [64]. Identifiability analysis using the profile likelihood We are often interested in the range of parameters that give a similar match to the data as the MLE. This is analogous to asking whether parameters can be uniquely identified given the data. There are two approaches to address this question of parameter identifiability: structural identifiability and practical identifiability. Structural identifiability explores whether parameters are uniquely identifiable given continuous noise free observations of model solutions. Many software tools, utilising symbolic calculations, have been developed to analyse structural identifiability for systems of ODEs as reviewed in [34]. Throughout this study, we assess structural identifiability in Julia (StructuralIdentifiability package [37]). Tools to assess structural identifiability of systems of PDEs have not been widely developed [65], and structural identifiability analysis of PDE models is an active area of research Practical identifiability assesses how well model parameters can be identified given a finite set of noisy incomplete data. To explore practical identifiability we use a profile likelihood-based approach and work with the normalised log-likelihood, Normalising the log-likelihood means thatl(θ | y o 1:I ) ≤ 0 andl(θ | y o 1:I ) = 0. To assess whether each parameter within the full parameter vector, θ, is practically identifiable, we partition θ as θ = (ψ, λ) where ψ is a scalar target parameter of interest and λ is a vector representing the remaining nuisance parameters [39,66,67] We then work with the profile log-likelihood for the interest parameter ψ [39,68],ˆ p (ψ | y o 1:I ) = sup λ|ψˆ (ψ, λ | y o 1:I ). Therefore, the profile log-likelihood maximises the normalised log-likelihood for each value of ψ. This process implicitly defines a function λ * (ψ) of optimal values of λ for each ψ, and defines a curve with points (ψ, λ * (ψ)) in parameter space that includes the MLE,θ = (ψ,λ). To estimatê p (ψ | y o 1:I ) we define a mesh of 2N points for ψ comprising N equally-spaced points from a prespecified lower bound, ψ L , toψ and N equally-spaced points fromψ to a pre-specified upper bound, ψ U . 
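Collecting the three cases above into a single function gives a compact illustration of how the measurement error model enters the log-likelihood. This is a sketch with illustrative names (yobs holds the observed data, ymodel the noise-free model solutions y_i(θ) at the observation times, and σ stands for σ_N or σ_L as appropriate); the code released with the paper may be organised differently.

using Distributions

# Log-likelihood ℓ(θ | y^o_1:I) under a chosen measurement error model
function loglik(yobs, ymodel, errormodel::Symbol; σ = 1.0)
    if errormodel == :gaussian          # additive Gaussian, θ_E = σ_N
        return sum(logpdf.(Normal.(ymodel, σ), yobs))
    elseif errormodel == :lognormal     # multiplicative log-normal, θ_E = σ_L
        return sum(logpdf.(LogNormal.(log.(ymodel), σ), yobs))
    elseif errormodel == :poisson       # Poisson, no additional noise parameter
        return sum(logpdf.(Poisson.(ymodel), yobs))
    else
        error("unknown measurement error model")
    end
end

This is the function that is maximised over the nuisance parameters at each point of the profile mesh described above, and over all parameters when computing the MLE.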
Choices of lower and upper bounds and the number of mesh points can be chosen on a case-bycase basis, depending on the results. We choose the lower and upper bounds to capture approximate confidence intervals. We choose the number of mesh points so that there are many points within the approximate confidence interval, typically we choose N = 20. For each value of ψ in the mesh we estimateˆ p (ψ | y o 1:I ), subject to the bound constraints for λ, using numerical maximisation. To perform the numerical maximisation we again implement the Nelder-Mead local optimisation routine, with default stopping criteria, within the NLopt optimisation package in Julia [64]. Univariate profile likelihoods for scalar interest parameters, referred to as profiles for brevity, provide a visual and quantitative tool to assess practical identifiability. A narrow univariate profile that is well-formed about a single peak corresponds to a parameter of interest that is practically identifiable, while a wide flat profile indicates that the parameter of interest is not practically identifiable. We assess narrow and wide relative to log-likelihood-based approximate confidence intervals. We define the log-likelihood-based approximate confidence interval for ψ from the profile log-likelihood, where the threshold parameter c is chosen such that the confidence interval has an approximate asymptotic coverage probability of 1−α. Many studies report 95% confidence intervals corresponding respectively [39,69]. These thresholds are calibrated using the χ 2 distribution, which is reasonable for sufficiently regular problems [39,69]. In particular, c = −∆ ν,1−α /2, where ∆ ν,1−α refers to the (1 − α) quantile of a χ 2 distribution with ν degrees of freedom set equal to the dimension of the interest parameter, e.g ν = 1 for univariate profiles. Computationally, the threshold is given by c=quantile(Chisq(ν), 1 − α)/2, using Julia (Distributions package [63]). It is straightforward to extend this approach to consider a vector valued interest parameters, for example to generate bivariate profiles [40]. Predictions We generate predictions for model solutions, y = y(t; θ), and data realisations, z i , using a profile likelihood-based approach. These predictions propagate forward uncertainties in interest parameters and allows us to understand and interpret the contribution of each model parameter, or combinations of parameters, to uncertainties in predictions. This step is very important when using mathematical models to interpret data and to communicate with collaborators from other disciplines simply because predictions and variability in predictions are likely to be of greater interest than estimates of parameter values in a mathematical model. Confidence sets for deterministic model solutions We now propagate forward uncertainty in an interest parameter, ψ, to understand and interpret the uncertainty in predictions of the model solution, y = y(t; θ). The approximate profile-wise likelihood for the model solution, y, is obtained by taking the maximum profile likelihood value over all values of ψ consistent with y(t; (ψ, λ * (ψ))) = y, i.e., p (y (t; (ψ, λ * (ψ))) = y | y o 1:I ) = sup ψ|y(t;(ψ,λ * (ψ)))=yˆ Here, y(t; (ψ, λ * (ψ))) corresponds to the output of the mechanistic mathematical model solved with parameter values θ = (ψ, λ * (ψ)). 
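A univariate profile, together with the χ²-calibrated threshold just described, can be computed along the following lines. This is a sketch under the assumption that nll(ψ, λ) returns the negative log-likelihood for a scalar interest parameter ψ and nuisance parameters λ, that nll_mle is its value at the MLE, and that λ0 is a starting value for the nuisance parameters; the bounds, grid size, and optimiser settings are illustrative rather than those used in the paper.

using Distributions, Optim

ψgrid = range(0.8, 1.3; length = 40)           # mesh for the interest parameter, e.g. r_1
profile = map(ψgrid) do ψ
    res = optimize(λ -> nll(ψ, λ), λ0, NelderMead())   # optimise out the nuisance parameters
    -Optim.minimum(res) + nll_mle                       # normalised profile log-likelihood
end
threshold = -quantile(Chisq(1), 0.95) / 2       # ≈ -1.92 for an approximate 95% interval
ci = extrema(ψgrid[profile .>= threshold])      # approximate 95% confidence interval for ψ

The parameter values retained in this interval are exactly those propagated forward to the prediction confidence sets discussed next.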
The confidence set for the model solution, y, propagated from the interest parameter ψ is In practice, we form an approximate (1 − α)% confidence interval, C ψ y,1−α (y o 1:I ), by simulating y (t; (ψ, λ * (ψ))) for each ψ ∈ C ψ,1−α (y o 1:I ). This confidence set can be used to reveal the influence of uncertainty in ψ on predictions of the model solution, for example whether the parameter of interest contributes to greater uncertainty at early, intermediate, or late times in the solution of the mathematical model. If the mapping from ψ to y(t; (ψ, λ * (ψ))) is 1-1 thenˆ p (y (t; (ψ, λ * (ψ))) = y | Each parameter in θ can be treated in turn as an interest parameter. Therefore, for each parameter in θ we can construct an approximate confidence interval C ψ y,1−α (y o 1:I ). Comparing approximate confidence intervals constructed for different parameters in θ illustrates which parameters contribute to greater uncertainty in model solutions [53]. This can be important for understanding how to improve predictions and for experimental design. However, optimising out nuisance parameters in this profile likelihood-based approach typically leads to lower coverage than other methods that consider all uncertainties simultaneously, especially when the model solution has weak dependence on the interest parameter and non-trivial dependence on the nuisance parameters [52]. More conservative approximate confidence sets, relative to the individual profile-wise confidence sets, can be constructed by taking the union of individual profile-wise confidence sets for the model solution, Equation (13) provides insight into the uncertainty due to all model parameters across the solution of the mathematical model. As we will demonstrate, this approach is a simple, computationally efficient, and an intuitive model diagnostic tool. Furthermore, the method can be repeated with vector-valued interest parameters and increasing the dimension results in closer agreement to full likelihood-based methods [40]. This approach can also be generalised beyond that of predictions of the model solution to predictions of data distribution parameters [40]. Note that for the additive Gaussian and Poisson measurement error models the model solution is the mean of the data distribution and for the lognormal measurement error model the model solution is the median of data distribution. These methods are simpler to implement and interpret in comparison to previous methods, such as those that involve additional constrained optimisation problems [54][55][56][57]. Confidence sets for noisy data realisations In practice we are often interested in using mathematical models to generate predictions of noisy data realisations, since an individual experiment measurement can be thought of as a noisy data realisation. These predictions allow us to explore what we would expect to observe if we were to repeat the experiment or if we were to measure at different times and/or spatial positions. By building our framework on parameterised mechanistic mathematical models we can also predict beyond the data based on a mechanistic understanding. In contrast to confidence sets for the model solution (trajectory), here we consider confidence sets for noisy single time observations. To form approximate ( quantiles of the normal distribution with mean y(t i ) and standard deviation σ N . This computational approach naturally extends to other measurement error models, including the Poisson and log-normal models. 
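For example, when the model solution and noise parameters are treated as known, pointwise prediction intervals for data realisations follow directly from quantiles of the measurement error distribution. A small sketch (illustrative names; ymodel is assumed to hold the model solution y(t_i; θ) at the prediction times and σN the known noise standard deviation), showing the additive Gaussian case alongside the Poisson case:

using Distributions

α = 0.05
lower_gauss = quantile.(Normal.(ymodel, σN), α / 2)
upper_gauss = quantile.(Normal.(ymodel, σN), 1 - α / 2)
# Poisson measurement error model: intervals on the count scale
lower_pois = quantile.(Poisson.(ymodel), α / 2)
upper_pois = quantile.(Poisson.(ymodel), 1 - α / 2)

These pointwise intervals are what the MLE-based and Bonferroni correction-based constructions below extend to the realistic setting in which θ is estimated rather than known.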
In practice however, we typically face a more challenging scenario where the true model parameters and true mathematical model solution, y = y(t; θ), are all unknown, and we now outline two approaches for dealing with this situation. MLE-based approach. When the true model parameters and true mathematical model solution are unknown a simple approach is to assume that the model parameters are given by the MLE,θ, and the true solution of the mathematical model is given by evaluating the solution of the model at the MLE, y(t;θ). With this assumption, it is then straightforward to generate a (1 − α)% confidence set as previously described. In practice, it is unlikely that the MLE,θ, will be identical to the true model parameters, θ, so this approach may not reach the desired coverage level. However, when uncertainty due to statistical noise is large relative to the difference between y(t; θ) and y(t;θ) this simple MLE-based approach can work well. Bonferroni correction-based approaches. A more conservative approach for forming confidence sets for model realisations involves propagating forward uncertainty in model parameters. The following approach was introduced in [40], and here we present concrete examples. Consider an interest parameter ψ and a corresponding confidence set for the model solution, C ψ y,1−α/2 (y o 1:I ). For each y ∈ C ψ y,1−α/2 (y o 1:I ) we construct a prediction set A ψ y,1−α/2 (y o 1:I ) such that the probability of observing a measurement z i ∈ A ψ y,1−α/2 (y o 1:I ) is 1 − α/2. Computationally, A ψ y,1−α/2 (y o 1:I ) can be constructed in a pointwise manner by estimating the α/4 and 1 − α/4 quantiles of the probability distribution associated with the measurement error model. Taking the union for each y ∈ C ψ y,1−α/2 (y o i:I ) we obtain a conservative (1 − α)% confidence set for model realisations from the interest parameter ψ, This approach employs a Bonferroni correction method [40,70]. Equation (14) represents a conservative confidence set for the data realisations z i at the level of the individual interest parameter ψ. Treating each parameter in θ in turn as an interest parameter and taking the union results in a confidence set for the overall uncertainty in data realisations, Coverage properties Coverage properties of confidence intervals and confidence sets are defined formally, but for likelihoodbased confidence sets coverage properties are expected to only hold asymptotically. In practice, we can evaluate approximate statistical coverage properties numerically by repeated sampling. In particular, we can generate, and then analyse, many data sets using the same mathematical model, measurement error model, and true model parameters, θ. A detailed illustrative example for temporal data is discussed in §4. The procedure is applicable to a range of models and data. Case studies We will now implement the general framework using simple caricature mathematical models routinely Example measurement error models that we consider include additive Gaussian, log-normal, and Poisson. Systems biology-style linear models Consider a chemical reaction network with two chemical species C 1 and C 2 . We assume that C 1 decays to form C 2 at a rate r 1 , and that C 2 decays at a rate r 2 . Within this modelling framework we do not explicitly model the decay products from the second reaction. 
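The reaction scheme just described leads to the linear system referred to below as Eq (16); the following minimal Julia sketch (illustrative names, not the paper's released code) encodes it and solves it numerically in the way described later in this section, using the initial conditions and true parameter values reported in Figure 1.

using DifferentialEquations

# dc1/dt = -r1*c1,  dc2/dt = r1*c1 - r2*c2  (mass action for C1 -> C2 at rate r1, C2 decay at rate r2)
function reaction!(dc, c, p, t)
    r1, r2 = p
    dc[1] = -r1 * c[1]
    dc[2] = r1 * c[1] - r2 * c[2]
end

c0 = [100.0, 25.0]                          # fixed initial conditions (c1(0), c2(0))
tobs = range(0.0, 2.0; length = 16)         # sixteen equally-spaced observation times
prob = ODEProblem(reaction!, c0, (0.0, 2.0), (1.0, 0.5))    # true parameters (r1, r2)
sol = solve(prob; saveat = tobs)            # model solutions y_i(θ_M) at the observation times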
Applying the law of mass action, the concentrations of C 1 and C 2 at time t, denoted c 1 (t) and c 2 (t), respectively, are governed by the following system of ODEs, We refer to the terms on the right-hand side of Eq (16) as the reaction terms, which are linear in this simple case. Equation (16) has an analytical solution, which for r 1 = r 2 can be written as, In the special case r 1 = r 2 we can write the exact solution in a different format where c 2 (t) is proportional to c 1 (t).. We treat the initial conditions c 1 (0) and c 2 (0) as known so that Eqs (16)- (17) are characterised by two parameters r 1 and r 2 that we will estimate. Here, r 1 and r 2 are structurally identifiable. Initial conditions can also easily be treated as unknowns within this framework [53,71]. For parameter estimation we solve Eq (16) numerically using the default ODEproblem solver in Julia (DifferentialEquations package [72]). Solving Eq (16) numerically is convenient because we do not have to consider the cases r 1 = r 2 and r 1 = r 2 separately in our numerical implementation. There are many techniques to analyse standard additive residuals in greater detail should a simple visual interpretation lead us to conclude that the residuals are not independent [5,6,50]. We take a simple and common graphical approach. We plot the residuals on a normal quantile-quantile plot (Fig 1d). As the residuals appear close to the reference line on the normal quantile-quantile plot, the assumption of normally distributed residuals appears reasonable. In practice it is often crucial to understand whether model parameters can be approximately identified or whether many combinations of parameter values result in a similar fit to the data. To address this question of practical identifiability we compute univariate profile likelihoods for r 1 , r 2 , and σ N . Each profile is well-formed around a single central peak (Fig 1e). This suggests that each model parameter is well identified by the data. Using the profile likelihoods we compute approximate 95% confidence intervals, r 1 ∈ (0.97, 1.10), r 2 ∈ (0.45, 0.56) and σ N ∈ (3.33, 5.46). These confidence intervals indicate the range of values for which we are 95% confident that the true values lie within. On this occasion each component of the known parameter θ is contained within the respective confidence interval. Thus far we have obtained estimates of best-fit parameters and associated uncertainties. To connect estimates of best-fit parameters and associated uncertainties to data we need to understand how uncertainty in θ propagates forward to uncertainties in the dependent variables, here c 1 (t) and c 2 (t), as this is what is measured in reality. There are many predictions of c 1 (t) and c 2 (t) that one could make. We consider two key forms of predictions: confidence sets for deterministic model solutions and Bonferroni correction-based confidence sets for noisy data realisations. For each parameter we generate confidence sets for the model solution and explore the difference between the confidence sets and the mathematical model simulated with the MLE (Fig 1h-w). Results in For example, uncertainty in the parameter r 2 corresponds to increasing uncertainty in the model solution for c 2 (t) as time increases, i.e. C r2 y,0.95 −y(θ) increases with time for c 2 (t) (Fig 1i,m). However, uncertainty in the measurement error model parameter, σ N , does not contribute to uncertainty in predictions of the model solution (Fig 1j,n), since the noise is additive. 
Furthermore, we can observe that for t ≥ 1 uncertainty in r_2 contributes to greater uncertainty in c_2(t) than uncertainty in r_1 (Fig 1h,i,l,m). Predictions in the form of Bonferroni correction-based confidence sets for data realisations take into account the measurement error model (Fig 1p-w). These can be generated for each interest parameter in turn, and their union taken, just as for the confidence sets for the model solution. In practice, faced with experimental data, we do not know which measurement error model is appropriate. An extremely common approach in this situation is to assume an additive Gaussian measurement error model, as we do in Figure 1. This choice is simple to implement and interpret, but the suitability of this choice is often unjustified. We now explore an example where assuming additive Gaussian errors is inappropriate and leads to physically-unrealistic predictions. Evaluating Eq (16) with the MLE, we observe good agreement with the data (Fig 2a). However, plotting the residuals on a normal quantile-quantile plot shows a visually distinct deviation from the reference line (Fig 2c). This suggests that the additive Gaussian measurement error model may be inappropriate. Nevertheless, we proceed with the additive Gaussian error model to demonstrate further issues that can arise and subsequent opportunities to detect the misspecified measurement error model. Profile likelihoods for r_1, r_2, and σ_N suggest that these parameters are practically identifiable, and approximate 95% confidence intervals, r_1 ∈ (0.77, 1.22) and r_2 ∈ (0.35, 0.54), capture known parameter values. Due to the error model misspecification, we are unable to compare the approximate confidence interval for σ_N to a known value. We now generate a range of predictions. Profile-wise confidence sets for the mean reveal how uncertainty in estimates of the mathematical model parameters, r_1 and r_2, results in uncertainty in predictions (Fig 2g,h,j,k). For example, Figs 2g,j show that uncertainty in r_1 results in greater uncertainty in c_2(t) close to t = 1 as opposed to close to t = 0 and t = 5. In contrast, Figs 2h,k show that uncertainty in r_2 results in greater uncertainty in c_2(t) for t ≥ 1 than for 0 < t < 1. In addition, we observe that uncertainty in r_1 contributes to greater uncertainty in predictions for c_1(t) than uncertainty in r_2 (Fig 2g,h). Taking the union of the profile-wise confidence sets for the model solution, we observe the overall uncertainty due to the mathematical model parameters (Fig 2i). Thus far these results appear to be physically realistic. However, we now consider Bonferroni correction-based profile-wise confidence sets for data realisations, and their union, which incorporate uncertainty in both the mathematical model parameters and the measurement error model parameters (Fig 3). These predictions of data realisations generate results with negative concentrations (Fig 3). Such non-physical predictions are a direct consequence of using the additive Gaussian error model. Re-analysing the data using the log-normal measurement error model, the multiplicative residuals, ê_i = y^o_i/y_i(θ̂), are reasonably described by the log-normal distribution (Fig 4c). Profile likelihoods suggest model parameters are practically identifiable (Fig 4d-f). Approximate 95% confidence intervals, r_1 ∈ (0.90, 1.02), r_2 ∈ (0.38, 0.56) and σ_L ∈ (0.37, 0.56), capture known parameters and show that using the additive Gaussian error model overestimated uncertainty in r_1. Profile-wise confidence sets for data realisations and their union are non-negative and so physically realistic (Fig 4k-n).

Systems biology-style nonlinear models

It is straightforward to explore mathematical models of increasing complexity within the framework.
A natural extension of Eq (16) assumes that chemical reactions are rate-limited and nonlinear, Here V i and K i represent maximum reaction rates and Michaelis-Menten constants for chemical species C i , with concentrations c i (t), for i = 1, 2. We solve Eq (18) numerically using the default ODEproblem solver in Julia (DifferentialEquations package [72]). We treat the initial conditions c 1 (0) and c 2 (0) as known. Then Eq (18) is characterised by four parameters θ = (V 1 , K 1 , V 2 , K 2 ) that we will estimate. These four parameters are structurally identifiable. Note that the previous example, Eq 16, only involved two parameters and so our use of the profile likelihood in that case (Fig 5a). Profile likelihoods for V 1 , K 1 , V 2 and K 2 capture known parameter values and show that these parameters are practically identifiable. Predictions, in the form of the union of profile-wise confidence sets for the means (Fig 5(g)) and the union of profilewise confidence sets for realisations (Fig 5(h)), show greater uncertainty at higher concentrations. Re-analysing this data using the additive Gaussian measurement error model results in non-physical predictions at later times where c 1 (t) and c 2 (t) are close to zero. The framework is straightforward to apply to other ODEs with nonlinear reaction terms, for example the Lotka-Volterra predator-prey Mathematical biology-style models Throughout mathematical biology and ecology we are often interested in dynamics that occur in space and time [17][18][19][20][21]. This gives rise to spatio-temporal data that we analyse with spatio-temporal models such as reaction-diffusion models.. Reaction-diffusion models have been used to interpret a range of applications including chemical and biological pattern formation, spread of epidemics, and animal dispersion, invasion, and interactions [17][18][19][20][21][73][74][75]. As a caricature example, consider a system of two diffusing chemical species in a spatial domain −∞ < x < ∞ subject to the reactions in Eq (16). The governing system of PDEs is, Here, D represents a constant diffusivity. We choose initial conditions to represent the release of chemical C 1 from a confined region, Solving Eqs (19)- (20) analytically, for r 1 = r 2 , gives (Supplementary S1) [76,77], where erf(z) = 2/ √ π z 0 exp(η 2 ) dη is the error function [77]. An analytical solution for the special case r 1 = r 2 can also be obtained and has a different format where again c 2 (t, x) is proportional to c 1 (t, x). Assuming that C 0 and h are known, Eq (21) is characterised by three unknown parameters (D, r 1 , r 2 ). We generate synthetic spatio-temporal data at eleven spatial points and five different times (Fig 6a-e). To generate the synthetic data we use Eq (21), the Poisson measurement error model, and set θ = (D, r 1 , r 2 ) = (0.5, 1.2, 0.8) and fix (C 0 (0), h) = (100, 1). To obtain estimates of D, r 1 , r 2 and generate predictions, we use Eq (21) and the Poisson measurement error model. Simulating the mathematical model with the MLE, we observe excellent agreement with the data (Fig 6a-f). Univariate profile likelihoods for D, r 1 , and r 2 are well-formed, capture the known parameter values, and suggest that these parameters are practically identifiable. Predictions, in the form of the union of profile-wise confidence sets for realisations (Fig 5h), show that there is greater uncertainty at higher chemical concentrations. This framework also applies to systems of PDEs that are solved numerically (Supplementary S2). 
Coverage Frequentist methods for estimation, identifiability, and prediction are generally concerned with constructing estimation procedures with reliability guarantees, such as coverage of confidence intervals and sets, and so in for completeness we explore coverage properties numerically. We present an illustrative example revisiting Eq (16) For each data set, we propagate forward variability in r 1 to generate an approximate 95% confidence set for the model solution, C r1 y,0.95 . We consider coverage of this confidence set from two perspectives. First, we explore coverage from the perspective of testing whether or not the true For the problems we consider the variation in the confidence set at each time point is narrow relative to the overall variation in c 1 (t) and c 2 (t) over time (Fig 7a). Therefore, we plot and examine the difference between the confidence set and the model solution at the MLE, C r1 y,0.95 − y(θ), and the difference between the true model solution and the model solution at the MLE, y(θ) − y(θ) (Fig 7b,c). The c 1 (t) component of the true model solution, y(t; θ), is contained within the confidence set (Fig 7b). However, the true model solution is only contained within the c 2 (t) component of the confidence set for t ≤ 1.056 (Fig 7c). Hence, the true model solution is not contained within the confidence set C r1 y,0.95 . We repeat this analysis for the confidence set C r2 y,0.95 (Figure 7d-f) and the union of the confidence sets C y,0.95 = C r1 y,0.95 ∪ C r2 y,0.95 (Figure 7g-i). By construction, the confidence set C y,0.95 has coverage properties that are at least as good as C r1 y,0.95 and C r2 y,0.95 . For example, in Fig 7h,i the true model solution is contained within C y,0.95 whereas it is not contained within C r1 y,0.95 . Assessing whether the model solution is or is not entirely contained within the confidence sets C r1 y,0.95 , C r2 y,0.95 , and C y,0.95 for each of the 5000 data sets, we obtain observed curvewise coverage probabilities of 0.007, 0.018, and 0.609, respectively. These observed coverage probabilities are much lower than results for confidence intervals of model parameters. However, in contrast to our profile-wise coverage results, a full likelihood-based approach recovers an observed curvewise coverage probability of 0.956 for the confidence set for model solutions (Supplementary S3). Given the drastic differences in observed curvewise coverage probabilities between the profile likelihood-based method and full likelihood-based method one may expect that the confidence sets from the two methods are qualitatively very different. However, comparing the two confidence sets they appear to qualitatively very similar (Supplementary S3). This suggests that subtle differences in confidence sets may play an important role in observed curvewise coverage probabilities. Full likelihood-based approaches are computationally expensive relative to profile likelihood-based methods, especially for models with many parameters. Here we have only considered univariate profiles. However, an interesting approach is to use profile likelihood-based methods with higher-dimensional interest parameters. These have been shown to improve coverage properties relative to scalar valued interest parameters at a reduced computational expense relative to full likelihood-based methods [40]. Assessing pointwise coverage can help diagnose why we do not reach target curvewise coverage properties when propagating univariate profiles. 
This kind of diagnostic tool can be used to inform experimental design questions regarding when, and/or where, to collect additional data. In this context, the confidence sets can be interpreted as tools for sensitivity analysis. We discretise the temporal domain into 100 equally-spaced points (0.022 ≤ t ≤ 2.200), and exclude t = 0 because initial conditions are treated as fixed quantities in this instance. For each data set, time point, chemical concentration, and confidence set, we test whether the true model solution is contained within the confidence set. For the component of the confidence set C^{r1}_{y,0.95} − y(θ̂) concerning c_1(t), the observed pointwise coverage is constant throughout time and equal to 0.932, which is relatively close to the desired value (Fig 8a). In contrast, for the component of the confidence set C^{r1}_{y,0.95} − y(θ̂) concerning c_2(t), the observed pointwise coverage is initially equal to 0.920 at t = 0.022, then decreases over time, reaching a minimal value of 0.012 at t = 1.408, before increasing to 0.497 at t = 2.200 (Fig 8e). Similar behaviour is observed for the confidence set C^{r2}_{y,0.95} − y(θ̂) (Fig 8b,f). Taking the union of the confidence sets, we obtain more conservative confidence sets, with an observed pointwise coverage for c_1(t) of 0.932 throughout (Fig 8c) and an observed pointwise coverage for c_2(t) of at least 0.681 (Fig 8g). Note that the solution of the mathematical model evaluated at the MLE, y(t; θ̂), is not identical to the true model solution so, as expected, the observed pointwise coverage probability of this single trajectory is zero at all time points (Fig 8d,h).

We now explore MLE-based and Bonferroni correction-based confidence sets for model realisations in a pointwise manner. For both methods we apply the same evaluation procedure (Fig 9). For each of the 5000 synthetic data sets we generate the confidence set for the data realisations and then generate a new synthetic data set under the same conditions as the original synthetic data set. In particular, the new data set is generated at the same time points using the same mathematical model, measurement error model, and parameter values. This approach can be thought of as a test of the predictions under replication of the experiment. For each point in the new data set, which comprises fifteen equally-spaced data points from t = 0.13 to t = 2.00, we test whether or not it is contained within the confidence set for the model realisation. Results for a single synthetic data set show that Bonferroni correction-based confidence sets for model realisations based on r_1 (Fig 10a-c), r_2 (Fig 10d-f), and their union (Fig 10g-i) can overcover relative to the MLE-based approach (Figure 10j-l).

Figure 9: Schematic for the evaluation procedure used to test coverage properties of confidence sets for model realisations. Step 1. Choose model parameters. Step 2. Generate synthetic data. Step 3. Estimate the MLE. Step 4. Generate the confidence set for data realisations based on the MLE. Step 5. Generate new test data under the same conditions. Step 6. Pointwise check whether the test data are contained in the confidence set for data realisations. In this work we repeat these steps 5000 times. The example is presented using the MLE-based approach and is readily adapted for Bonferroni correction-based confidence sets for model realisations by modifying step four.
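The six-step procedure in Figure 9 can be sketched as a simple loop. The helpers `generate_data`, `estimate_mle`, and `realisation_interval` are hypothetical stand-ins for the steps described above, not functions from the released code.

```julia
# Steps 1-6 of Figure 9: MLE-based confidence sets for data realisations,
# evaluated pointwise against newly replicated data.
function pointwise_realisation_coverage(θ_true, tobs; nsets = 5000)
    inside = zeros(length(tobs))
    for _ in 1:nsets
        data   = generate_data(θ_true, tobs)           # Step 2: synthetic data
        θ_mle  = estimate_mle(data, tobs)              # Step 3: maximum likelihood estimate
        lo, hi = realisation_interval(θ_mle, tobs)     # Step 4: confidence set for realisations
        test   = generate_data(θ_true, tobs)           # Step 5: new data, same conditions
        inside .+= (lo .<= test .<= hi)                # Step 6: pointwise containment check
    end
    return inside ./ nsets                             # observed pointwise coverage
end
```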
While the framework presented in this section is straightforward to apply to other mathematical models and measurement error models, coverage properties should be interpreted and assessed on a case-by-case basis. In Supplementary S5 we present such an example using the log-normal measurement error model and find similar results to those discussed here. Other frequentist evaluation procedures can also be used to explore coverage properties of confidence sets for model realisations. For example, for a data set with I elements we could generate a confidence set for model realisations based on the first k < I time points of data and then test if one, or more, of the remaining I − k elements of the data set are contained in the confidence set.

Conclusion

In this review we demonstrate how to practically implement a variety of measurement error models in a general profile likelihood-based framework for parameter estimation, identifiability analysis, and prediction. Illustrative case studies explore additive, multiplicative, discrete, and continuous measurement error models and deal with the commonly encountered situation of noisy and incomplete data. Mathematical models in the case studies are motivated by the types of models commonly found in the systems biology literature and the mathematical biology literature. Within the framework, assessing uncertainties in parameter estimates and propagating forward these uncertainties to form predictions allows us to assess the appropriateness of measurement error models and make direct comparisons to data. Furthermore, techniques to assess pointwise and curvewise coverage properties provide useful tools for experimental design and sensitivity analysis. The profile likelihood-based methods, based on numerical optimisation procedures, are computationally efficient and a useful approximation to full likelihood-based methods (Supplementary S3) [40]. Open source Julia code to reproduce results is freely available on GitHub. These implementations can be adapted to deal with other forms of mathematical models, or they could be adapted for implementation within other software frameworks; however, we prefer Julia because it is freely available and computationally efficient. This likelihood-based framework could also be implemented using other methods, for example Bayesian sampling-based methods. We illustrate the framework using simple caricature models to emphasise the practical implementation of the methods and how to interpret results, rather than the details of each mathematical model. This includes systems of ODEs that are often used in the systems biology literature ( §3.1, §3.2, Supplementary S4) and systems of PDEs routinely used in the mathematical biology literature ( §3.3). ODE-based models are also routinely used to describe biological population dynamics [78] and disease transmission [79]. As parameter estimation, identifiability analysis, and prediction within the profile likelihood-based framework depend only on the solution of the mathematical model, the solution can be obtained analytically or numerically. Analytical solutions are preferred over numerical solutions for computational efficiency; however, closed-form exact solutions cannot always be found.
For this reason we implement a number of case studies that involve working with simple exact solutions, as well as working with numerical solutions obtained using standard discretisations of the governing differential equations. One can also consider other mathematical models with the framework, such as difference equations are often used in applications about ecology (Supplementary S4) [19,[22][23][24][25]. More broadly the framework can apply to stochastic differential equation-based models [80] and stochastic simulation-based models [81][82][83][84][85]. Extensions to models that incorporate process noise are of interest [25,[86][87][88][89][90]. The framework is well-suited to consider a variety of measurement error models. Illustrative case studies explore the additive Gaussian error model, the multiplicative log-normal model, and the discrete Poisson model. All example calculations presented in this review take an approach where synthetic data are generated using a mathematical model rather than working with experimental measurements. This is a deliberate choice that allows us to explicitly explore questions of model misspecification and model choice unambiguously since we have complete control of the underlying data generating process. By definition, samples from the log-normal distribution are positive so we deliberately avoid situations where the observed data is zero when using the log-normal measurement error model. A different error model should be considered in such a case, for example, based on the zero-modified log-normal distribution [62,91]. For both the log-normal and Poisson error models we also avoid situations where the observed data is positive and the model solution is identically zero. For example, our solutions of ODE-based models approach zero at late time but remain positive for all time considered in this work. Exploring error models for reaction-diffusion PDEs with nonlinear diffusion is of interest, for example those that give rise to travelling wave solutions describing biological invasion with sharp boundaries [92][93][94]. In such an example we may expect to evaluate the error model, and so the likelihood function, at points in space where the data is positive but the model solution is zero. How to handle such a situation and which measurement error model to incorporate is an interesting question that could be explored by extending the tools developed in this review. Within the framework one could also consider other forms of multiplicative error models, for example based on the gamma distribution [7,8], of which the exponential and Erlang distributions are special cases, or based on the beta distribution [25]. A different form of the log-normal distribution with mean equal to y i (θ) could also be considered within the framework and is given by Multiplicative noise can be also be implemented in other forms. We have considered multiplicative noise of the form y o i = y i (θ)η i with η i ∼ LogNormal(0, σ 2 L ) (Eq 4), which for a straight line model, y(θ) = c + mx, would be y o i = (c + mx i )η i . However, multiplicative noise could also be associated with a component of the model solution. As a specific example from a protein quantification study [10] consider the straight line model where multiplicative noise is incorporated into the slope of the equation but not the y-intercept, i.e y o i = c + mx i η i with η i ∼ LogNormal(0, σ 2 L ). 
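To make the distinction between the two multiplicative noise forms concrete, the following short Julia sketch generates both for a straight-line model; the parameter values are illustrative assumptions.

```julia
# Two ways of attaching multiplicative log-normal noise to a straight-line model
# y(θ) = c + m*x; parameter values are illustrative only.
using Distributions, Random

Random.seed!(1)
c, m, σL = 2.0, 0.5, 0.3
x = collect(0.0:0.5:5.0)
η = rand(LogNormal(0.0, σL), length(x))   # Distributions.jl takes the log-scale standard deviation

y_whole = (c .+ m .* x) .* η    # noise on the whole solution: y_o = (c + m x) η
y_slope = c .+ m .* x .* η      # noise on the slope term only: y_o = c + m x η
```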
One could also relax assumptions in the Poisson distribution that the variance is equal to the mean, in which case the negative binomial distribution may be useful [88]. The framework also applies to other discrete distributions such as the binomial model [95]. Different measurement error models could also be studied for example the proportional, exponential, and combined additive and proportional error models that are used in pharmacokinetic modelling [96]. Throughout we assume that errors are independent and identically distributed. Extending the noise model to consider correlated errors is also of interest [97,98]. Assessing coverage properties using different evaluation procedures and assessing predictive capability through the lens of tolerance intervals is also of interest [70,99]. Overall, the choice of which mathematical model and measurement error model to use should be considered on a case-by-case basis and can be explored within this framework. S1 Analytical solutions Here we present a transformation method to obtain analytical solutions to systems of linear ordinary differential equations and systems of linear partial differential equations with coupling in the source terms of the partial differential equation models. These methods, based on diagonalisation [1], can be applied to systems with n chemical species; to systems of partial differential equations in higher spatial dimensions, and to systems of partial differential equations describing additional mechanisms such as advection [1]; and to a range of initial conditions for which there are exact solutions for the analogous uncoupled problems [2]. S1.1 Systems of ordinary differential equations with linear reaction terms The general method follows [1]. Consider a system of ordinary differential equations dc(t)/dt = Kc(t), where c(t) is an n-dimensional vector valued function and K is an n×n constant diagonalisable matrix. We first determine an n × n constant matrix S whose columns are the eigenvectors of K. Next, we define a new n-dimensional vector valued function, b(t), via the relationship c(t) = Sb(t). Assuming S is invertible, b(t) = S −1 c(t) and the system of equations can be written as an uncoupled system of equations db(t)/dt =Kb(t) that we solve for b(t). HereK = S −1 KS is a n × n constant diagonal matrix. We obtain the solution c(t) using c(t) = Sb(t). (S.4) Using c(t) = Sb(t), we transform b(t) to obtain, The unknowns b 1 (0) and b 2 (0) are determined using the initial conditions for c 1 (0) and c 2 (0), giving the solution for c(t), (S.6) Solutions in Eq (S.6) are restricted to r 1 = r 2 . When r 1 = r 2 the solutions can be evaluated and in this case c 2 (t) is a multiple of c 1 (t) [3].] S1.2 Systems of partial differential equations with linear reaction terms The general method follows [1] and extends Supplementary §S1.1.Consider a system of partial differential equations ∂c(t, x)/∂t − D∂ 2 c(t, x)/∂x 2 = Kc(t, x), where c(t, x) is an n-dimensional vector valued function, K is an n × n constant diagonalisable matrix, and D is a constant parameter. We first determine an n × n constant matrix S whose columns are the eigenvectors of K. Next, we define a new n-dimensional vector valued function, b(t, x), via the relationship c(t, x) = Sb(t, x). Assuming S is invertible, b(t, x) = S −1 c(t, x) and the system of equations can be written as as an uncoupled system of equations ∂b(t, x)/∂t − D∂ 2 b(t, x)/∂x 2 =Kb(t, x) that we solve for b(t, x) using standard methods (e.g. 
similarity solutions, integral transforms). HereK = S −1 KS is a n × n constant diagonal matrix. We obtain the solution c(t, x) using c(t, x) = Sb(t, x). As an explicit example, consider Eq (19) with c(t, x) = (c 1 (t, x), c 2 (t, x)) on the spatial domain Here, for r 1 = r 2 , We also transform the initial conditions (Eq 20), using b(0, x) = S −1 c(0, x), to obtain, Equation (S.9) represents two uncoupled equations that we can solve analytically [2], to obtain, Using c(t, x) = Sb(t, x), we transform b(t, x) to obtain the solution for c(t, x), Solutions in Eq (S.12) are restricted to r 1 = r 2 . An analytical solution can also be found for r 1 = r 2 . S2 Numerical solution of system of partial differential equations The framework for parameter estimation, identifiability analysis, and prediction in the main manuscript can be applied to mathematical models that are solved analytically and/or numerically. Here we present an explicit example using the system of partial differential equations in Eq (19) that form the mathematical biology case study. In the main manuscript we solve Eq (19) analytically. We obtain the same profile likelihoods, confidence sets for model solutions and confidence sets for model realisations when solving Eq (19) numerically. This is because the output of the mathematical model is independent of the solution method. In particular, comparing the analytical and numerical solutions of Eq (19) we observe excellent agreement (Fig S1). This is a useful result. We often require numerical methods to solve systems of partial differential equations. S3 Comparison to a full likelihood-based approach Here we present an example comparing results from the profile likelihood-based method described in Section 2 of the main manuscript and a gold-standard full likelihood-based method that we describe here [16]. Consider the two-parameter two-species chemical reaction model from Section 4 of the main manuscript. This is given by Eq (16) with the additive Gaussian error model. We fix the measurement error model parameter (σ N = 5) and initial conditions (c 1 (0), c 2 (0)) = (100, 25). This results in a model with two unknown parameters, θ = (r 1 , r 2 ), that we will estimate. We choose to fix σ N and the initial conditions in this illustrative example as it is simpler to interpret and visualise results in two dimensions. Each threshold is calibrated using the chi-squared distribution with one degree of freedom. The MLE,θ = (r 1 , r 2 ) = (1.03, 0.51) (black circle) and true parameter value, θ = (r 1 , r 2 ) = (1.00, 0.50) (green circle), are both contained within the region defined by the approximate 95% approximate confidence interval threshold for a univariate parameter. Furthermore, due to the fine mesh, profile likelihoods obtained by maximising over the normalised log-likelihood values in the grid and by optimisation procedures (Section 2 in the main manuscript) show excellent agreement (Fig S3a). Proceeding with the full likelihood-based approach, we simulate the model solution at all points in (r 1 , r 2 ) parameter space whereˆ (θ | y o 1:I ) ≥ −3.00. In Fig S3b, we present the minimum and maximum of these projections using black-dashed lines and observe qualitatively excellent agreement with the union of the profile likelihood-based confidence sets for the model solutions (shaded). 
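A minimal sketch of the grid-based full likelihood evaluation described above is shown below; `loglik` and `data` are placeholders for the log-likelihood of a data set under Eq (16) with the additive Gaussian error model, and the grid ranges are illustrative assumptions. Note that a normalised log-likelihood cut-off of approximately −3.00 is what one obtains from the 95% quantile of the chi-squared distribution with two degrees of freedom divided by two, whereas univariate profile thresholds use one degree of freedom (approximately −1.92).

```julia
# Full likelihood on a fine (r1, r2) mesh, thresholded with a chi-squared quantile.
using Distributions

r1_grid = range(0.8, 1.3; length = 201)
r2_grid = range(0.3, 0.8; length = 201)

ll      = [loglik([r1, r2], data) for r1 in r1_grid, r2 in r2_grid]
ll_norm = ll .- maximum(ll)                          # normalise so the grid MLE sits at zero

threshold_region  = -quantile(Chisq(2), 0.95) / 2    # ≈ -3.00 (two interest parameters)
threshold_profile = -quantile(Chisq(1), 0.95) / 2    # ≈ -1.92 (univariate profiles)

in_region = ll_norm .>= threshold_region             # grid points inside the 95% confidence region
```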
Incorporating measurement noise, we observe qualitatively excellent agreement between Bonferroni correction-based confidence sets for data realisations obtained from the full likelihood-based approach and the union of Bonferroni correction-based profile-wise confidence sets for data realisations from the profile likelihood-based method (Fig S3c). (c) Profile likelihood-based union of Bonferroni correction-based profile-wise confidence sets for data realisations (shaded) and full likelihood-based Bonferroni correction-based confidence set for data realisations (black-dashed). Throughout results from the profile-likelihood approach agree closely with the results from the full likelihood approach. Statistical coverage properties can be evaluated for the confidence sets numerically. We generate 5000 synthetic data sets. For each data set we compute the 95% confidence region for r 1 and r 2 and test whether the true model parameter is contained within the region. This gives an observed coverage probability of 0.950 which is very close to the target coverage probability of 0.950. For each data set we also construct a 95% confidence set for the model solution and test whether the true model solution is entirely contained within the confidence set. This gives an observed curvewise coverage probability of 0.954, which is much greater than results obtained using the profile-likelihood based methods (Section 4 in the main manuscript). To compare with results from the profile-likelihood approach we also compute the pointwise coverage of the model solutions for c 1 (t) and c 2 (t) ( Fig S4). Note that confidence sets for the model solutions from the profile likelihood-based method and the full likelihood-based method appear to agree very well qualitatively (Fig S3b) but observed differences in observed curvewise and pointwise coverage do not agree (Fig S4a,b, 7c,f). This suggests that subtle differences in confidence sets can result in drastic changes to coverage properties. Using the full likelihood-based method, the average observed pointwise coverage of the 95% Bonferroni correction-based confidence set for model realisations is found to be conservative and equal to 99.4% ( Fig S4c,d). S4 Additional results In the main manuscript we demonstrate the framework for systems of ordinary differential equations and partial differential equations. Here we demonstrate the framework with different models including the Lotka-Volterra model (Supplementary §S4.1) and a discrete-time population growth model S4.1 System of ordinary differential equations: Predator-prey Here, we demonstrate that the framework works well for systems of ordinary differential equations that give rise to oscillatory solutions. We consider the Lotka-Volterra predator-prey model for two chemical species C 1 (the 'prey') and C 2 (the 'predator') [10,11], where c 1 (t) and c 2 (t) represent the concentrations of C 1 and C 2 ; and V 1 , K 1 , V 2 , K 2 are positive constants. We treat initial conditions c 1 (0) and c 2 (0) as known. Then Eq (S.15) is characterised by four parameters θ = (V 1 , K 1 , V 2 , K 2 ). For parameter estimation we solve Eq (S.15) numerically using the default ODEproblem solver in Julia (DifferentialEquations package) [4]. We generate synthetic data using Eq (S.15), the Poisson measurement error model, model parameters, θ = (V 1 , K 1 , V 2 , K 2 ) = (0.1000, 0.0025, 0.0025, 0.3000), and initial conditions (c 1 (0), c 2 (0)) = (30.0, 10.0) (Fig S5a). 
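A minimal Julia sketch of this setup is below. Because the displayed form of Eq (S.15) is not reproduced in the extracted text, the assignment of V_1, K_1, V_2, K_2 to the growth, predation, conversion, and death terms is an assumption made for illustration; only the parameter values, initial conditions, and use of the default DifferentialEquations.jl solver are taken from the text.

```julia
# Lotka-Volterra predator-prey system solved with the default DifferentialEquations.jl
# solver; the pairing of (V1, K1, V2, K2) with individual terms is an assumption.
using DifferentialEquations

function predator_prey!(dc, c, θ, t)
    V1, K1, V2, K2 = θ
    dc[1] = V1 * c[1] - K1 * c[1] * c[2]   # prey: growth minus predation (assumed form)
    dc[2] = V2 * c[1] * c[2] - K2 * c[2]   # predator: conversion minus death (assumed form)
end

θ  = [0.1000, 0.0025, 0.0025, 0.3000]      # (V1, K1, V2, K2) from the text
c0 = [30.0, 10.0]                          # (c1(0), c2(0)) from the text
prob = ODEProblem(predator_prey!, c0, (0.0, 75.0), θ)
sol  = solve(prob, saveat = [25.0, 40.0, 50.0, 60.0, 75.0])   # observation times
```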
We consider three measurements of c 1 (t) and three measurements of c 2 (t) at t = 25, t = 40, t = 50, t = 60, and t = 75. These parameters and time points are chosen deliberately so that residuals are not normally distributed with zero mean and constant variance (Fig S5b). Using Eq (S.15) and the Poisson measurement error model, we seek estimates of θ = (V 1 , K 1 , V 2 , K 2 ) and generate predictions. Simulating the mathematical model with the MLE, we observe excellent agreement with the data (Fig S5a). Profile likelihoods for V 1 , K 1 , V 2 , and K 2 capture known parameter values and show that these parameters are practically identifiable (Fig S5c-f). Predictions, in the form of the confidence sets for model solutions (Fig S5k-p) and confidence sets for data realisations ( Fig S5q) show greater uncertainty at the peaks of the oscillations. We now repeat this analysis and deliberately misspecify the measurement error model. We use the additive Gaussian measurement error model and find that this leads to non-physical predictions. Simulating the mathematical model with the MLE, we observe good agreement with the data ( Fig S6a). Profile likelihoods for V 1 , K 1 , V 2 , K 2 , σ N capture known parameter values and show that these parameters are practically identifiable (Fig S6c,d). Furthermore, profile likelihoods using the additive Gaussian error model are qualitatively similarly to profile likelihoods obtained using the true Poisson error model. Predictions, in the form of the confidence sets for model solutions (Fig S6i-m,s) and realisations (Fig S6u), show greater uncertainty than results obtained using the Poisson error model. Confidence sets for data realisations give non-physical results with negative concentrations. Figure S6: Lotka-Volterra predator-prey case study with deliberate measurement error model misspecification. S4.2 Difference equations In the main manuscript we present case studies using ordinary differential equations and partial differential equations. Here, we present an example demonstrating that the framework also naturally handles difference equations that frequently appear in ecological applications [11][12][13][14][15] As a simple caricature example consider the discrete-time Ricker logistic model for population growth [12]. In this model N t represents the population at time t and the population at the next time point, t + 1, is where r is the maximum intrinsic growth rate and K is the carrying capacity. We treat the initial condition N 0 as known. Then Eq (S.16) is characterised by two parameters θ = (r, K) that we will estimate. Simulating the mathematical model with the MLE, we observe excellent agreement with the data (Fig S7a). Profile likelihoods for r and K capture known parameter values and show that these parameters are practically identifiable (Fig S7c,d). Predictions, in the form of the confidence sets for model solutions (Fig S7e-j) and confidence sets for realisations (Fig S7k-m), demonstrate how uncertainty in parameter estimates results propagates forward into uncertainty in N t . The framework can also be applied to systems of difference equations. S5 Coverage: Log-normal measurement error model In Section 4 of the main manuscript we explore coverage properties using an example that considers the additive Gaussian measurement error model. Here we show that the same evaluation procedure can be used to assess coverage properties for different measurement error models. As an illustrative example consider Eq (16) with the log-normal error model. 
After fixing σ L = 0.4, this results in a model with two parameters, θ = (r 1 , r 2 ) = (1.0, 0.5), that we estimate. Initial conditions (c 1 (0), c 2 (0)) = (100, 10) are fixed. Analysing the coverage properties of this model gives similar results to those described in Section 4 of the main manuscript. We generate 5000 synthetic data sets using the same mathematical model, measurement error model, and model parameters, θ. Each data set comprises measurements of c 1 (t) and c 2 (t) at thirtyone equally-spaced time points from t = 0.0 to t = 5.0. For each data set we compute a univariate profile likelihood for r 1 and use this to form an approximate 95.0% confidence interval for r 1 . We then test whether this approximate 95.0% confidence interval contains the true value of r 1 . This holds for 95.0% of the data sets, corresponding to an observed coverage probability of 0.950. Similarly, the observed coverage probability for r 2 is 0.948. Therefore, the observed coverage probabilities for both r 1 and r 2 are close to the target coverage probability of 0.950. In contrast to our profile-wise coverage approach, a full likelihood-based approach, also using 5000 synthetic data sets, recovers an observed coverage probability of 0.951 for the confidence region for r 1 and r 2 . (16)) and the log-normal measurement error model with known σ L . (a) Confidence sets for model solution generated from uncertainty in r 1 , C r1 y,0.95 (shaded), and the true model solution, y(θ) (black). (b)-(c) Difference between curvewise confidence set and solution of the mathematical model evaluated at the MLE, C r1 y,0.95 −y(θ) (c 1 (t) (shaded green) and c 2 (t) (shaded magenta) and the difference between the true model solution and the solution of the mathematical model evaluated at the MLE, y(θ) − y(θ) (black). (f)-(h) Results based on uncertainty in r 2 . (i)-(k) Results for the union of curvewise confidence sets. Throughout, to plot y(θ) the temporal domain is discretised into 101 equally-spaced points (0.00 ≤ t ≤ 5.00) connected using a solid line. Results are obtained by analysing a single data set generated by simulating Eq (16), the log-normal measurement error model with σ L = 0.4, known model parameters (r 1 , r 2 ) = (1.0, 0.5), and fixed initial conditions (c 1 (0), c 2 (0)) = (100.0, 10.0).
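As a final illustration, the data-generation step used in this coverage study can be sketched in a few lines of Julia; `model_solution` is a placeholder for the solution of Eq (16) evaluated at the known parameters, and is an assumption rather than a function from the released code.

```julia
# Sketch: generate one synthetic data set for the S5 coverage study.
# Measurements of c1(t) and c2(t) at thirty-one equally-spaced times on [0, 5],
# corrupted by multiplicative log-normal noise with sigma_L = 0.4.
using Distributions, Random

Random.seed!(2)
θ_true = (r1 = 1.0, r2 = 0.5)
σL     = 0.4
tobs   = range(0.0, 5.0; length = 31)

y     = model_solution(θ_true, tobs)       # placeholder: 31 × 2 matrix of c1, c2 values
η     = rand(LogNormal(0.0, σL), size(y))  # log-scale standard deviation σL
y_obs = y .* η                             # observed data, y_o = y(θ) η
```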
The Effect of 0.02% Mitomycin C Injection into the Hair Follicle with Radiofrequency Ablation in Trichiasis Patients Purpose To investigate the inhibitory effect of 0.02% mitomycin C on eyelash regrowth when injected to the eyelash hair follicle immediately after radiofrequency ablation. Methods We prospectively included 21 trichiasis patients from June 2011 to October 2012. Twenty eyes of 14 patients were treated with 0.02% mitomycin C to the hair follicle immediately after radiofrequency ablation in group 1, while radiofrequency ablation only was conducted in ten eyes of seven patients in group 2. Recurrences and complications were evaluated until six months after treatment. Results One hundred sixteen eyelashes of 20 eyes in group 1 underwent treatment, and 19 (16.4%) eyelashes recurred. Eighty-four eyelashes of ten eyes in group 2 underwent treatment, and 51 (60.7%) eyelashes recurred. No patients developed any complications related to mitomycin C. Conclusions Application of 0.02% mitomycin C in conjunction with radiofrequency ablation may help to improve the success rate of radiofrequency ablation treatment in trichiasis patients. Trichiasis is a condition of abnormally positioned eyelashes that grow toward the eyeball and contact the ocular surface [1]. This can cause corneal epithelial defects, corneal ulcer, and corneal scarring and may lead to severe vision loss [2]. Trichiasis can occur due to blepharoconjunctivitis, cicatrical pemphigoid, Stevens-Johnson syndrome, erythema multiforme, pseudomonas pemphigus, trauma, surgery, trachoma, among other reasons [2]. The global prevalence of trichiasis is high. Standard treatment involves removal or destruction of the affected eyelashes with epilation, radiofrequency ablation, cryotherapy, argon laser, surgical procedures, and other methods. However, widely acknowledged and effective treatment has not yet been established [3]. Mitomycin C is a metabolic antagonist extracted from Streptomyces caespitosus. This medication is a cytotoxic chemotherapy drug that shifts to an alkylating agent that is effective in all cell cycles of proliferative and non-proliferative cells. Mitomycin C was first introduced in the pterygium removal procedure in 1970 and has been widely applied in glaucoma surgery, treatment of conjunctiva and corneal epithelial tumors, refractive surgery, and other ophthalmological procedures in recent years [4]. In previous studies, the authors have reported the efficacy and safety of mitomycin C, which inhibits the regrowth of eyelashes, through epilation after treating mice with 0.04 % mitomycin C [5]. The study aimed to verify the effect of locally injected 0.02% mitomycin C in conjunction with radiofrequency ablation in trichiasis patients. Materials and Methods The present study is a prospective and comparative study performed after gaining the approval of the Research Ethics Committee of Gyeongsang National University and the Korea Food and Drug Administration. The procedures used conformed to the tenets of the Declaration of Helsinki. All examinations and surgeries were conduct-ed in the Department of Ophthalmology, Gyeongsang National University. The study involved 30 eyes of 21 patients diagnosed with trichiasis, and all surgical procedures were carried out by a single surgeon from June 2011 to October 2012. 
The study excluded patients younger than 13 years, and those who were pregnant or had cancer, severe cardiopulmonary disease, intellectual disability, or inflammatory ocular disease such as keratitis, scleritis, conjunctivitis, or blepharoconjunctivitis. An image of the anterior eye segment was collected in the trichiasis-affected region. Twenty-one volunteers were fully informed about the different treatment procedures and provided consent to be classified into group 1 (n = 14) with radiofrequency ablation and injection of 0.02% mitomycin C or group 2 (n = 7) with radiofrequency ablation. Anesthesia was carried out by injecting 20 mg/mL lidocaine HCl and 0.018 mg/mL epinephrine in the surgical site. After flipping the lid margin to the far side of the eyeball, a sterilized epilation needle connected to a 3.8-MHz Ellman Surgitron was inserted into the eyelash root to a depth of 2 to 3 mm in the cut and coagulation mode (power = 3) in order to easily remove eyelashes, and the follicles were cauterized. In the mitomycin C group, 0.1 mL 0.02% mitomycin C was injected using a 27-gauge cannula arranged with a 1-mL syringe into the eyelash-removed follicles and left for two minutes after radiofrequency ablation. Subsequently, the follicles were washed with 10 mL normal saline. The procedure was completed by applying antibiotic ointment to the surgical site (Fig. 1).

Fig. 1. (A) After flipping the lid margin to the far side of the eyeball, a sterilized epilation needle connected to a 3.8-MHz Ellman Surgitron was inserted into the root of the eyelashes to a depth of 2 to 3 mm in the cut and coagulation mode (power = 3) in order to easily remove eyelashes, and follicles were cauterized. (B) After that, 0.1 mL 0.02% mitomycin C was injected by attaching a 27-gauge cannula to a 1-mL syringe in eyelash-removed follicles and left for two minutes after radiofrequency ablation.

After six postoperative months, medical records, the findings of slit-lamp biomicroscopy, imaging of the anterior eye segment, and the relapse of trichiasis were evaluated by comparison with the preoperative conditions. In addition, we evaluated complications associated with radiofrequency ablation and mitomycin C (Fig. 2). From one group 1 patient with preoperative lower lid laxity, we were able to obtain tissues after radiofrequency ablation. The sample tissue was fixed with 2% glutaraldehyde, embedded with paraffin, and cut into sections. The tissue block was stained with hematoxylin-eosin and examined under an optical microscope. SPSS ver. 18.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis. Mann-Whitney U-test and Fisher's exact test were performed to verify statistical significance.

Results

A total of 200 eyelashes were investigated in 21 patients with trichiasis. The mean age of the subjects was 58.1 ± 18.3 years. Females and males accounted for 67% and 33% of patients, respectively. The mean age was 64.7 ± 10.1 in group 1 and 45.71 ± 24.3 in group 2. The ratio of females was 64.3% in group 1 and 71.4% in group 2. The number of eyelashes in the affected eye was 5.8 ± 3.2 in group 1 and 8.4 ± 6.6 in group 2. No significant differences were found in age, sex, and the average number of eyelashes between the two groups (Table 1). A total of 116 eyelashes underwent treatment in 20 eyes in group 1. Trichiasis recurred in 19 eyelashes (16.4%). A total of 84 eyelashes underwent treatment in ten eyes in group 2. Trichiasis recurred in 51 eyelashes (60.7%).
In the comparison of the two groups, more favorable results were shown in group 1, where 0.02% mitomycin C was applied (p = 0.005) (Table 2). In one case in group 1, histological examination was performed using follicle tissue obtained during eyelid surgery conducted six months after treatment. Optical microscopic findings revealed structural damage in all four examined follicles, and thickened follicle epithelium and sebaceous glands were observed after the regeneration process. Dermal papilla tissues were destroyed and not observed in three follicles. However, dermal papillae remained in the remaining follicle, and growth of the hair shaft was observed (Fig. 3). No patients showed complications related to mitomycin C such as keratoconjunctival erosion or scleromalacia in the cornea, sclera, or conjunctiva. No specific complication was observed in the eyelid area where mitomycin C was injected. Eyelid notch, which is known to be a complication of radiofrequency ablation, was not found in group 1 with the use of mitomycin C but was detected in two patients in group 2. Therefore, there is a possibility that the wound site in the mitomycin C-treated group might have healed more cleanly (Fig. 4).

Discussion

The standard treatments for trichiasis are epilation, radiofrequency ablation, cryotherapy, argon laser, and surgical procedures. However, a widely acknowledged and effective treatment has not yet been established [3]. Epilation is more commonly used because of its convenience and low risk. This method temporarily relieves the pain of corneal abrasion. Removed eyelashes usually regrow within 4 to 6 weeks after removal. In this process, shortened eyelashes may induce continuous corneal injury. West et al. [6] reported a high rate of corneal opacity in patients who underwent epilation as a long-term treatment. Radiofrequency ablation is another method that can be performed quickly as an outpatient procedure. The success rate was reported to be 60% to 67% in a single session of radiofrequency ablation [3,7]. According to a study by Han and Doh [8], the success rate of follicle trephination combined with electrocautery was 83%. Sakarya et al. [9] reported success rates of 66.6% in a single session of electrocautery using an ultrafine needle and 100% with two to three sessions. Potential complications were edema, erythema, hematoma, eyelid notch, and others [7]. Several studies have examined the treatment of trichiasis using argon or diode laser. Laser therapies have exhibited a success rate of 39% to 88% without relapse [1,[15][16][17]. Application of more than three laser treatments increased the success rate to 91.3% to 100% [17,18]. Laser therapy can be usefully implemented in ocular pemphigoid patients who have mild trichiasis or need to avoid inflammation [1,16]. However, the disadvantages of this treatment are the high costs of the medical instruments and the association of complications such as eyelid notch, skin depigmentation, and dimpling of the skin [1,15,17,19]. A large number of treatment methods have been proposed, such as anterior and posterior lamellar repositioning, tarsal fracture, tarsal marginal rotation-associated full-thickness eyelid resection, and others. The majority of those procedures primarily correct entropion, and the recurrence rate has been reported to range from 3% to 62% [20][21][22][23][24][25][26][27]. All surgical procedures are influenced by various factors, including patient choice and the abilities of surgeons.
The chief limitations are the time and cost burdens and the invasive nature of the procedure. Mitomycin C inhibits the proliferation of fibroblasts, and it has been commonly used in ophthalmology to prevent scarring of the surgical area by inducing apoptosis. However, inflammatory complications could occur, such as anterior and posterior scleritis, acute toxicity and local stimulation, delayed orbital fibrosis, and others. In a recent study, mitomycin C increased IL-8 (a chemokine of macrophages and chemical attractant) and MCP-1 (an activator of lymphocytes and neutrophils and chemical attractant) in the fibroblasts of the cornea and generated an inflammatory reaction [28,29]. Radiofrequency ablation incurs heat injuries, including bleeding, edema, and others, and a severe inflammatory reaction occurs due to lymphocytes, macrophages, and foreign body giant cells. Those inflammatory reactions ultimately destroy the matrix and hair papilla, and follicles are replaced with eosinophilic collagen, characteristic of scar tissue [30]. In this process, mitomycin C is anticipated to accelerate the destruction of follicles by intensifying inflammatory reactions. In addition, the injection of mitomycin C is thought to be effective in prohibiting follicle regrowth by inducing programmed cell death. Previously, our study with rats verified that application of 0.04% mitomycin C generates destruction and edema of wrinkles in the mitochondria of follicles following simple epilation [5]. Eyelid notch occurred in two patients in group 2 without the use of mitomycin C. In contrast, no injury in the eyelid structure was detected in group 1 with the use of mitomycin C. Histological findings revealed structural changes in follicles in group 1, where follicle tissues were obtainable. However, scarring and destruction in the surrounding tissue were not found to be severe (Fig. 3). Mitomycin C is thought to inhibit fibrosis and might be effective in preventing structural injuries, including postoperative eyelid notch, ectropion of the eyelid, and others. Mitomycin C drops were locally applied to the eyes as an adjuvant therapy after glaucoma filtering surgery or the pterygium removal procedure. Consequently, associated complications were reported to be dry eye syndrome, superficial punctate keratitis, allergic reaction, punctum obstruction, keratomalacia and scleromalacia, surgical wound infection and dehiscence, and others [31][32][33]. Those complications could be avoided by using a low concentration of mitomycin C with thorough monitoring and treatment [34]. In the present study, no complications were detected after injecting 0.02% mitomycin C once for two minutes. A nearly 100% success rate was acquired by repeatedly performing radiofrequency treatment in some studies [7,9]. However, repeated radiofrequency treatment is difficult to carry out due to high cost, pain, and other problems that influence the compliance of patients in actual clinical settings. In a recent study on eyelash trephination combined with electrocautery, the success rate was 83% [8], aligning with the success rate (83.6%) of the present study using the simpler additional injection of mitomycin C. In addition, this treatment is anticipated to be effective in reducing postoperative complications that occur due to eyelid injury after radiofrequency treatment. There were some limitations to the present study.
First, the authors were unable to verify if mitomycin C temporarily prohibited the division of follicular cells or completely destructed follicles. Although the outcome is an absence of misdirected eyelashes, the satisfaction of patients could be improved by extending the lack of symptoms for several months. Second, a small number of volunteers were included in the study. The six-month follow-up was also thought to be short. However, the average eyelash growth cycle is about 90 days [35]. Therefore, the period of postoperative six months was thought to be sufficient for evaluating the direct injury of follicles generated by mitomycin C in conjunction with radiofrequency ablation. Local injection of 0.02% mitomycin C could be applied as an effective adjuvant therapy, increasing the success rate of radiofrequency ablation in patients with trichiasis. Mitomycin C may help to prevent the deformity of eyelids by facilitating the recovery of wounds caused by radiofrequency ablation.
The SLC25A47 locus controls gluconeogenesis and energy expenditure Significance Given the impenetrable nature of the mitochondrial inner-membrane, most of the known metabolite carrier proteins, including SLC25A family members, are ubiquitously expressed in mammalian tissues. One exception is SLC25A47, which is selectively expressed in the liver. The present study showed that depletion of SLC25A47 reduced mitochondrial pyruvate flux and hepatic gluconeogenesis under a fasted state, while activating energy expenditure. The present work offers a liver-specific target through which we can restrict hepatic gluconeogenesis, which is often in excess under hyperglycemic and diabetic conditions. The role of mitochondria extends far beyond adenosine triphosphate (ATP) generation. Mitochondria serve as an essential organelle that supplies a variety of important metabolites to the cytosolic compartment and nucleus. An example is in the liver, wherein the mitochondria export phosphoenolpyruvate (PEP) and malate, which serve as gluconeogenic precursors in response to fasting. Under a fed condition, the mitochondria supply citrate that contributes to de novo lipogenesis (1,2). In addition, mitochondrion-derived alpha-ketoglutarate (α-KG) functions as a cofactor of Jumonji C domain demethylases and ten-eleven translocation enzymes in the nucleus, thereby controlling the transcriptional program, a.k.a., retrograde signaling (3). Mitochondrial flux in the liver is tightly regulated by hormonal cues, such as insulin and glucagon, and dysregulation of these processes profoundly impacts the maintenance of euglycemia, as often seen under the conditions of hyperglycemia and type 2 diabetes (4)(5)(6). For instance, elevated protein expression or activity of pyruvate carboxylase (PC), a mitochondrial matrix-localized enzyme that catalyzes the carboxylation of pyruvate to oxaloacetate (OAA), is associated with hyperglycemia (7,8). On the other hand, liverspecific deletion of PC potently prevented hyperglycemia in diet-induced obese mice (9). Another example is the mitochondrion-localized phosphoenolpyruvate carboxykinase (PCK2, also known as M-PEPCK) that is expressed highly in the liver, pancreas, and kidney, where it catalyzes the conversion of OAA to PEP (10,11). It has been demonstrated that activation of PCK2 in the liver enhanced the PEP cycle and potentiated gluconeogenesis (12,13). In turn, depletion of PCK2 in the liver impaired lactate-derived gluconeogenesis and lowered plasma glucose, insulin, and triglycerides in mice (14). Accordingly, a better understanding of mitochondrial metabolite flux in the liver may provide insights into therapeutic strategies for the management of hyperglycemia and type 2 diabetes. Of note, the mitochondrial inner-membrane is impermeable to metabolites relative to the outer membrane. As such, a variety of carrier proteins in the mitochondrial inner-membrane play essential roles in the regulation of metabolite transfer between the matrix and the cytosolic compartment (15,16). As an example, mitochondrial pyruvate carrier (MPC) mediates the import of pyruvate into the matrix (17,18). It has been demonstrated that liver-specific deletion of MPC1 or MPC2 reduced mitochondrial tricarboxylic acid (TCA) flux and impaired pyruvate-driven hepatic gluconeogenesis in diet-induced obese mice (19)(20)(21). 
Recent studies also reported the identification of SLC25A39, which is responsible for glutathione import (22), SLC25A44 for branched-chain amino acids import (23,24), and SLC25A51 for nicotinamide adenine dinucleotide (NAD + ) import (25)(26)(27). Because of their essential roles, nearly all mitochondrial metabolite carriers (e.g., SLC25A family members) are ubiquitously expressed in mammalian tissues. However, there are two exceptions: uncoupling protein 1 (UCP1, also known as SLC25A7) that is selectively expressed in brown/beige fat (28), and an orphan carrier, SLC25A47, which is expressed selectively in the liver (Fig. 1A). SLC25A47 was previously described as a mitochondrial protein of which expression was down-regulated in hepatocellular carcinoma and that could reduce mitochondrial membrane potential in cultured Hep3B cells, a liver-derived epithelial cell line (29). In yeast, SLC25A47 overexpression elevated mitochondrial electron transport chain uncoupling, implicating its protective role against hepatic steatosis (30). In contrast, a recent study showed that genetic loss of Slc25a47 led to mitochondrial dysfunction, mitochondrial stress, and liver fibrosis in mice (31). Given these apparently inconsistent reports, this study aims to determine the physiological role of SLC25A47 in systemic energy homeostasis. SLC25A47 Is a Liver-Specific Mitochondrial Carrier That Links to Human Metabolic Disease. The SLC25A solute carrier proteins comprise 53 members in mammals, constituting the largest family of mitochondrial inter-membrane metabolite carriers (32). Among these 53 members, SLC25A47 is unique because this is the sole SLC25A member that is expressed selectively in the liver of mice (29,31). We independently found that SLC25A47 is selectively expressed in the liver of humans (Fig. 1A) and in mice (SI Appendix, Fig. S1A). The publicly available single-cell RNA-seq dataset (33) shows that hepatocytes are the primary cell type that expresses SLC25A47, while Kupffer cells also express SLC25A47 that account for approximately 10% of total transcripts in the liver (SI Appendix, Fig. S1B). We next examined the genetic mechanism through which Slc25a47 is selectively expressed in the liver. The analysis of assay of transposase accessible chromatin sequencing (ATAC-seq) data (GSE111586) found an open chromatin architecture in the Slc25a47 gene locus (chromosome 12: 108,815,740 to 108,822,741) specific to the liver, whereas the same region appeared to form a heterochromatin structure in the heart and lung ( Fig. 1 B, Upper). Notably, the euchromatin region of the Slc25a47 gene contained binding sites of hepatocyte nuclear factor 4 alpha (HNF4α), to which HNF4α is recruited in the liver ( Fig. 1 B, Lower). This result caught our attention because mutations of HNF4α are known to cause maturity-onset diabetes of the young 1, and it plays a central role in the regulation of hepatic and pancreatic transcriptional networks (34,35). Importantly, HNF4α is required for the hepatic expression of Slc25a47, as the analysis of a previous microarray dataset (36) found that genetic loss of HNF4α significantly attenuated the expression of Slc25a47 in the mouse liver (Fig. 1C). Another important observation is in human genetic association studies from the Type 2 Diabetes Knowledge Portal (type2diabetesgenetics.org), wherein we found significant associations between SLC25A47 and glycemic and lipid homeostasis. 
The notable associations include fasting glucose levels adjusted for body mass index (BMI), random glucose levels, HbA1c levels adjusted for BMI, high-density lipoprotein (HDL) cholesterol levels, and aspartate aminotransferase (AST)-alanine aminotransferase (ALT) ratio (Fig. 1D). One of the strongest single nucleotide polymorphisms (SNIPs) was located in the intronic region of SLC25A47 (rs1535464), which showed significant associations with lower levels of fasting and random glucose, lower HbA1c levels adjusted for BMI, and higher HDL cholesterol levels (Fig. 1E). Similarly, another SNIP (rs35097172) in the regulatory region of SLC25A47 was associated with lower levels of fasting/random glucose, HbA1c levels adjusted for BMI, and higher HDL cholesterol levels. These data indicate that SLC25A47 is involved in the regulation of glucose and lipid homeostasis, although how these snips (SNPs) affect SLC25A47 expression remains unknown. Weight Gain and Lowers Plasma Cholesterol Levels. To determine the physiological role of SLC25A47 in energy homeostasis, we next developed mice that lacked SLC25A47 in a liver-specific manner by crossing Slc25a47 flox/flox mice with Albumin-Cre (Alb-Cre; Slc25a47 flox/flox , herein Slc25a47 Alb-Cre mice). We validated that the liver of Slc25a47 Alb-Cre mice expressed significantly lower levels of Slc25A47 messenger RNA (mRNA) than littermate control mice (Slc25a47 flox/flox ) by 80 % (Fig. 2A). The remaining mRNA in Slc25a47 Alb-Cre mice could be attributed to inefficient Cre expression or the transcripts in nonhepatocytes, such as Kupffer cells. The expression of the Slc25a47 neighboring genes, including Wdr25, Begain, Dlk1, Meg3, Slc25a29, Yy1, and Degs2, was not altered in the liver of Slc25a47 Alb-Cre mice relative to control mice (SI Appendix, Fig. S2A). At birth, there was no difference in the body weight and body size between Slc25a47 Alb-Cre mice and littermate control mice (SI Appendix, Fig. S2B). However, Slc25a47 Alb-Cre mice gained significantly less weight than controls at 3 wk of age and thereafter on a regular-chow diet (Fig. 2 B, Left). This phenotype was more profound when mice at 6 wk of age were fed on a high-fat diet (HFD, 60% fat) (Fig. 2 B, Right). The difference in body weight arose from reduced adipose tissue mass and lean mass both on a regular-chow diet and a high-fat diet (Fig. 2C). At tissue levels, adipose tissue and liver mass were lower in Slc25a47 Alb-Cre mice relative to control mice (Fig. 2D). Additionally, we found significantly lower serum levels of total cholesterol in Slc25a47 Alb-Cre mice than those in controls both on regular-chow and high-fat diets (Fig. 2E). On the other hand, we observed no difference in serum triglyceride (TG) levels between the two groups both on regular-chow and high-fat diets (Fig. 2F). We found no difference in serum ALT, AST, and albumin levels on a high-fat diet, although serum ALT and AST levels were higher in Slc25a47 Alb-Cre mice at 12 wk of age on a regular-chow diet (SI Appendix, Fig. S2 C-E). Depletion of SLC25A47 Led to Elevated Whole-Body Energy Expenditure. Given the difference in body weight between Slc25a47 Alb-Cre mice and control mice, we examined the wholebody energy expenditure using metabolic cages. 
Regression-based analysis of energy expenditure by CaIR-analysis of covariance (ANCOVA) (37) showed that Slc25a47 Alb-Cre mice exhibited significantly higher whole-body energy expenditure (kcal/day) Tissue weight Relative Slc25a47 expression Tissue mass (g) Tissue mass (g) independent of body mass at 23 °C. The difference remained significant when mice were kept at 30 °C (Fig. 3A). On the other hand, there was no difference in their food intake and locomotor activity between the genotypes ( Fig. 3 B and C). A possible explanation for the high energy expenditure might be the enhanced thermogenic capacity of brown adipose tissue (BAT) or its sensitivity to β3-adrenergic receptor (β3-AR) signaling. Accordingly, we tested the hypothesis by examining BAT thermogenesis in response to a β3-AR agonist (CL316,243) at 30 °C. This is a gold-standard method to determine BAT thermogenic responses to β3-AR stimuli, while excluding the contribution of shivering thermogenesis by skeletal muscle (38). We found that a single administration of β3-AR agonist (CL316,243) at 0.5 mg/kg (high dose) potently increased whole-body energy expenditure both in Slc25a47 Alb-Cre and littermate controls to a similar degree (SI Appendix, Fig. S3A). This result suggests that the cell-intrinsic thermogenic capacity of BAT, if maximumly activated by a β3-AR stimulus, appears comparable between the two groups. Accordingly, we asked if there was any change in circulating hormonal factors that influenced whole-body energy expenditure of Slc25a47 Alb-Cre mice. In this regard, FGF21 is a probable candidate because it is a well-established endocrine hormone that increases energy expenditure by activating the sympathetic nervous system (39). Consistent with the recent work (31), we found that serum levels of FGF21 in Slc25a47 Alb-Cre mice were significantly higher relative to littermate controls both on regular-chow and high-fat diets (Fig. 3D). The increase in circulating FGF21 levels was due to elevated Fgf21 transcription in the liver (Fig. 3E). This is in agreement with the previous work demonstrating that the liver is the primary source of circulating FGF21 (40). Of note, elevated Fgf21 gene expression in Slc25a47 Alb-Cre mice was already observed at 2 wk of age, a time point in which there was no difference in body weight, serum ATL/AST levels, and mitochondrial stress-related genes in the liver ( Fig. 3F and SI Appendix, Fig. S3 B-D). Importantly, there was no correlation between serum FGF21 levels and AST levels in control and Slc25a47 Alb-Cre mice at 2 and 4 wk of age (Fig. 3G). The results indicate that the stimulatory effect of SLC25A47 loss on FGF21 expression is not merely a consequence of liver damage. We addressed this point further in the following sections. SLC25A47 Is Required for Pyruvate-Derived Hepatic Gluconeogenesis In Vivo. We next examined the extent to which SLC25A47 regulates systemic glucose homeostasis. This is based on the observation that fasting glucose levels of Slc25a47 Alb-Cre mice were consistently lower than littermate controls both on regular-chow and high-fat diets (Fig. 4A). At 4 wk of high-fat diet, we found no major difference in glucose tolerance between the two groups, although fasting glucose levels were lower in Slc25a47 Alb-Cre mice than control mice (Fig. 4B). In contrast, Slc25a47 Alb-Cre mice exhibited significantly higher insulin tolerance than controls in response to insulin at a low dose (0.4 U/kg) (Fig. 4C). 
It is notable that Slc25a47 Alb-Cre mice remained hypoglycemic (<70 mg/dL) following insulin administration. Pyruvate tolerance tests found that Slc25a47 Alb-Cre mice at 3 wk of high-fat diet exhibited significantly lower hepatic gluconeogenesis than control mice (Fig. 4D). Of note, the difference in pyruvate tolerance was independent of diet and sex, as we observed consistent results both in male and female mice on a regular-chow diet (Fig. 4 E and F). On the other hand, there was no difference in glucose-stimulated serum insulin levels and hepatic glycogen contents between the two groups (SI Appendix, Fig. S4 A and B). These results led to the hypothesis that the lower fasting glucose levels seen in Slc25a47 Alb-Cre mice are attributed to reduced hepatic gluconeogenesis, rather than impaired glycogenolysis or elevated insulin sensitivity in the skeletal muscle. To test the hypothesis, we next examined the contribution of hepatic gluconeogenesis to circulating glucose by infusing fasted mice with U-13 C-labeled lactate or 13 C-labeled glucose. To examine the relative contribution of other gluconeogenic precursors to blood glucose, we also infused fasted mice with U-13 C-labeled glycerol and U-13 C-alanine (Fig. 4G). During the infusion, we collected and analyzed serum from fasted mice using liquid-chromatography-mass spectrometry (LC-MS), as described in recent studies (41,42). We used 13 13 C-labeled tracers to glucose, lactate, and glycerol in (G). n = 6 for Slc25a47 Alb-Cre , n = 6 for controls. *P < 0.05 by unpaired Student's t test. (I) The relative contribution of 13 C-labeled lactate to circulating levels of glucose, pyruvate, and lactate in (G). **P < 0.01. B-F, P-value determined by two-way ANOVA followed by Fisher's least significant difference (LSD) test. AUC: *P < 0.05, **P < 0.01, ****P < 0.0001 by unpaired Student's t test. precursor instead of pyruvate because circulating lactate is the primary contributor to gluconeogenesis and in rapid exchange with pyruvate (43). The analyses showed that glucose production from 13 C-lactate was significantly lower in Slc25a47 Alb-Cre mice than in control mice ( Fig. 4H orange bars). Notably, this impairment was selective to the lactate-to-glucose conversion, as we found no significant difference in glucose production from 13 C-glycerol between the two groups ( Fig. 4H blue bars and SI Appendix, Fig. S4C). The relative contribution of alanine to serum glucose was far less than lactate, with no statistical difference between the genotypes (Fig. 4H red bars). The lactate-to-pyruvate conversion was unaffected in Slc25a47 Alb-Cre mice, suggesting that impaired gluconeogenesis from lactate is attributed to reduced pyruvate utilization in the liver (Fig. 4I). We also found no difference in the conversion from 13 C-glucose to pyruvate and lactate (SI Appendix, Fig. S4D). These results indicate that SLC25A47 is required selectively for gluconeogenesis from lactate under a fasted condition, whereas it is dispensable for gluconeogenesis from other substrates. Acute Depletion of SLC25A47 Improved Glucose Homeostasis without Causing Liver Damage. A recent work suggested the possibility that the metabolic changes in Slc25a47 Alb-Cre mice, such as elevated FGF21 and impaired glucose production, were merely secondary to general hepatic dysfunction and fibrosis (31). To exclude metabolic complications caused by chronic deletion of SLC25A47, particularly during the prenatal and early postnatal periods, we aimed to acutely deplete SLC25A47 in adult mice. 
To this end, we acutely depleted SLC25A47 in adult mice by delivering adeno-associated virus (AAV)-thyroxine binding globulin (TBG)-Cre or AAV-TBGnull (control) into the liver of Slc25a47 flox/flox mice via tail-vein (Fig. 5A). AAV-Cre administration successfully reduced Slc25a47 mRNA expression by approximately 50% (Fig. 5B). Although the depletion efficacy of AAV-Cre was less than the genetic approach using Albumin-Cre, this model gave us an opportunity to determine the extent to which acute and partial depletion of SLC25A47 in adult mice sufficiently affect hepatic glucose production and energy expenditure, while avoiding metabolic complications associated with chronic SLC25A47 deletion. After 2 wk of AAV administration, we found that acute SLC25A47 depletion led to reduced body-weight gain ( Fig. 5C and SI Appendix, Fig. S5A) and increased serum FGF21 levels (Fig. 5D). The increase in serum FGF21 levels was associated with elevated hepatic FGF21 mRNA expression (Fig. 5E). Consistent with the observations in Slc25a47 Alb-Cre mice, acute SLC25A47 depletion resulted in reduced fasting serum glucose levels (Fig. 5F) and insulin levels (Fig. 5G). Importantly, acute SLC25A47 depletion improved systemic pyruvate tolerance (Fig. 5H) and insulin tolerance (Fig. 5I). In contrast, acute SLC25A47 did not alter systemic glycerol tolerance, although there was a modest change at later time points after glycerol administration (SI Appendix, Fig. S5B). The difference in glycerol tolerance at later time points is likely because glycerol-derived glucose is converted to lactate in peripheral tissues, which is eventually utilized as a gluconeogenic substrate (44). Next, we examined whether such metabolic changes were associated with liver injury in vivo. Histological analyses by Picro-Sirius Red staining did not find any noticeable sign of liver fibrosis (Fig. 5J). Similarly, histological analyses by hematoxylin and eosin (H&E) staining found no difference between control vs. AAV-Cre injected mice (SI Appendix, Fig. S5C). Furthermore, acute SLC25A47 depletion did not alter the expression of liver fibrosis marker genes (Fig. 5K). Also, we found no significant correlation between serum AST levels and hepatic SLC25A47 expression (Fig. 5L) and between serum AST levels and FGF21 levels (SI Appendix, Fig. S5D). Moreover, we observed no significant difference in the Complex I and II activities of isolated liver mitochondria between the two groups (SI Appendix, Fig. S5E). These data suggest that acute SLC25A47 depletion sufficiently enhanced hepatic FGF21 expression, pyruvate tolerance, and insulin tolerance independent of liver damage and hepatic mitochondrial dysfunction. SLC25A47 Is Required for Mitochondrial Pyruvate Flux and Malate Export. We next asked which steps of the lactate-derived hepatic gluconeogenesis were altered in Slc25a47 Alb-Cre mice. To this end, we took unbiased omics approaches-RNA-seq and mitochondrial metabolomics analyses-in the liver of Slc25a47 Alb-Cre mice and littermate controls under a fasted condition. The summary of the results is shown in Fig. 6A. The RNA-seq data analysis found that the liver of Slc25a47 Alb-Cre mice expressed significantly higher levels of Pkm, Eno3, Aldoa, Fbp1, Gpi1, and G6pc3 (Fig. 6B), suggesting a compensatory upregulation of gluconeogenic gene expression in Slc25a47 Alb-Cre mice. A notable finding is the distinct regulation of mitochondrial matrix-localized enzymes vs. 
cytosolic enzymes: We found that the expression of the mitochondrial TCA cycle enzymes, such as citrate synthase (Cs), the mitochondrial form of isocitrate dehydrogenase (Idh2), and Suclg2 (the subunits of succinate-CoA ligase) was significantly up-regulated in the liver of Slc25a47 Alb-Cre mice relative to controls. In addition, the expression of Pck2, the M-PEPCK that converts OAA to PEP within the mitochondria, was up-regulated in the liver of Slc25a47 Alb-Cre mice. In contrast, the expression of the cytosolic form of PEPCK (Pck1) was unchanged. Similarly, the expression of Mdh2, which catalyzes the conversion between OAA and malate in the mitochondria, was significantly elevated in the liver of Slc25a47 Alb-Cre mice, whereas the expression of Mdh1, the cytosolic form, showed a trend of down-regulation. These results suggest that SLC25A47 loss leads to a distinct gene expression pattern of mitochondrial vs. cytosolic enzymes that control hepatic gluconeogenesis. The mitochondrial metabolomics analysis revealed that the liver mitochondria of Slc25a47 Alb-Cre mice accumulated significantly higher levels of isocitrate, fumarate, and malate than those of control mice (Fig. 6C). In contrast, mitochondrial PEP contents were lower in Slc25a47 Alb-Cre livers relative to controls. We found no difference in the mitochondrial contents of pyruvate, citrate, α-KG, succinyl CoA, succinate, and OAA between the two groups. Additionally, there was no difference in the mitochondrial contents of cofactors required for the TCA cycle reactions, such as coenzyme A, reduced nicotinamide adenine dinucleotide (NADH), nicotinamide adenine dinucleotide phosphate (NADP + ), NADPH, and flavin adenine dinucleotide (FAD), although mitochondrial NAD + and guanosine triphosphate (GTP) levels were higher in Slc25a47 Alb-Cre mice than controls (SI Appendix, Fig. S6A). The above data led to the hypothesis that SLC25A47 controls either pyruvate import to the mitochondrial matrix or pyruvate flux within the mitochondria. To test this, we isolated mitochondria from the liver of Slc25a47 Alb-Cre mice and littermate controls under a fasted condition. The isolated mitochondria were incubated with [U-13 C] labeled pyruvate and subsequently analyzed by LC-MS/ MS (Fig. 6D). We found no difference in the mitochondrial contents of 13 C-pyruvate levels between the two groups, suggesting that mitochondrial pyruvate uptake per se was not altered in the liver of Slc25a47 Alb-Cre mice (Fig. 6E). This is in agreement with the data that the expression of MPC1 and MPC2 was not different between the genotypes (Fig. 6B). On the other hand, the enrichments of 13 C-labeled citrate, isocitrate, succinate, fumarate, and malate were significantly lower in the mitochondria of Slc25a47 Alb-Cre mice than those in controls (Fig. 6E). There was no difference in 13 C-labeled OAA and PEP between the groups. Together, these results suggest that genetic loss of SLC25A47 impaired mitochondrial pyruvate flux, leading to an accumulation of fumarate, malate, and isocitrate in the liver mitochondria. Impaired export of malate from the mitochondria into the cytosolic compartment leads to reduced lactate-derived hepatic gluconeogenesis under a fasted condition. Discussion Mitochondrial flux in the liver is highly nutrition-dependent. 
Under a fed condition, malate is imported into the mitochondrial matrix in exchange for α-KG via mitochondrial α-KG/malate carrier (SLC25A11) as a part of the malate-aspartate shuttle, a mechanism to transport reducing equivalents (NADH) into the mitochondrial matrix (45). In addition, mitochondrial dicarboxylate carrier SLC25A10 can mediate the import of malate into the mitochondrial matrix in addition to malonate, succinate, phosphate, sulfate, and thiosulfate (46). Under a fasted state, when liver glycogen is depleted, malate is exported from the mitochondrial matrix into the cytosolic compartment, where it is converted to OAA by MDH1 and utilized as a gluconeogenic substrate. However, what controls the nutrition-dependent mitochondrial malate flux remains elusive. The present work showed that SLC25A47 depletion led to an accumulation of mitochondrial malate and reduced hepatic gluconeogenesis, without affecting gluconeogenesis from glycerol. The results indicate that SLC25A47 mediates the export of mitochondrion-derived malate into the cytosol. However, the present study could not exclude the possibility that SLC25A47 mediates the transport of cofactors needed for mitochondrial pyruvate flux, although we found no difference in the mitochondrial contents of coenzyme A and NADH between the genotypes. Our future study aims to determine the specific substrate of SLC25A47 by biochemically reconstituting this protein in a cell-free system, such as liposomes. The present work showed that depletion of SLC25A47 reduced mitochondrial pyruvate flux, thereby restricting lactate-derived hepatic gluconeogenesis and preventing hyperglycemia. This is in alignment with several mouse models with impaired mitochondrial pyruvate flux in the liver. For instance, liver-specific depletion of pyruvate carboxylase (PC limits the supply of pyruvate-derived OAA in the mitochondria, leading to reduced TCA flux and hepatic gluconeogenesis (9). Similarly, liver-specific depletion of the MPC1 or MPC2 or the M-PEPCK reduces hepatic gluconeogenesis and protects mice against diet-induced hyperglycemia (14,(19)(20)(21). A recently developed noninvasive method, positional isotopomer NMR tracer analysis, would be instrumental to determine how SLC25A47 loss alters the rates of hepatic mitochondrial citrate synthase flux vs. PC flux (35). It is worth pointing out that elevated energy expenditure and reduced body weight are unique to Slc25a47 Alb-Cre mice. Indeed, no changes in energy expenditure and body weight were seen in mice that lacked MPC1/2 or M-PEPCK relative to the respective controls. Elevated energy expenditure of Slc25a47 Alb-Cre mice appears to be attributed to elevated FGF21 as recent work demonstrated that deletion of FGF21 abrogated the effects of SLC25A47 on energy expenditure and body weight (31). Importantly, our results suggest that partial SLC25A47 depletion was sufficient to stimulate FGF21 production independently from liver damage. It is conceivable that changes in mitochondrionderived metabolites, such as malate and others, control the transcription of FGF21 via retrograde signaling (3). Our future study will explore the mechanisms through which SLC25A47-mediated mitochondrial signals control the nuclear-coded transcriptional program in a nutrition-dependent manner. In addition, genetic rescue experiments, such as ectopically reintroducing SLC25A47 into the liver of Slc25a47 Alb-Cre mice will determine the direct vs. indirect actions of SLC25A47 on gluconeogenesis and energy expenditure. 
With these results in mind, we consider that SLC25A47 is a plausible target for hyperglycemia and type 2 diabetes for the following reasons. First, excess hepatic gluconeogenesis is commonly seen in human hyperglycemia and type 2 diabetes (4-6). Notably, genome-wide association studies (GWAS) data found significant associations between SLC25A47 and glycemic homeostasis in humans-particularly, several SNPs in the SLC25A47 were significantly associated with lower levels of glucose and HbA1c adjusted for BMI, although how these SNPs affect SLC25A47 expression awaits future studies. Second, SLC25A47 is exceptionally unique among 53 members of the mitochondrial SLC25A carriers, given its selective expression in the liver. This tissue specificity makes SLC25A47 an attractive therapeutic target, considering the recent successful examples in which liver-targeting mitochondrial uncouplers protected mice against type 1 and type 2 diabetes, hepatic steatosis, and cardiovascular complications (47)(48)(49). A potential caveat is the detrimental effect associated with chronic SLC25A47 deletion, such as mitochondrial stress, lipid accumulation, and fibrosis (31). However, our data showed that acute depletion of SLC25A47 by ~50% sufficiently restricted gluconeogenesis and enhanced insulin tolerance in adult mice without causing liver fibrosis and mitochondrial dysfunction. Thus, it is conceivable that temporal and partial inhibition of SLC25A47 using small-molecule inhibitors or antisense oligos would be effective in restricting excess hepatic gluconeogenesis while avoiding the detrimental side effects. Human SNP Analyses. Data were obtained from the Type 2 Diabetes Knowledge Portal (type2diabetesgenetics.org) and reconstructed. We used the SLC25A47 gene as the primary locus and expanded 5,000 bp proximal and distal to the total gene distance in order to identify regions of interest that may be outside of the coding sequence, i.e., promoters or enhancers. C-Glucose and 13 C-Lactate Infusion Study. Jugular vein catheters (Instech Labs) were implanted in the right jugular vein of 10-wk-old mice (n = 6 per group) under aseptic conditions. The catheter was connected to a vascular access button (Instech Labs) into which the tracer was infused. After 1 wk of the recovery period, mice were fasted for 6 h, and then infused for 2.5 h with U-13 C-glucose (0.2 M, CLM-1396), U-13 C-sodium lactate (0.49 M, CLM-1579), 0.2 M 13 C-alanine (0.2 M, CLM-2184-H), and U-13 C-glycerol (0.1M, CLM15101), respectively, at 2 to 3-d interval. The infusion rate was 0.1 µL g −1 min −1 , and mice moved freely in a cage during the intravenous infusions. Blood (~ 10 µL) was collected from the tail into microvettes with coagulating activator (Starstedt Inc, 16.440.100). Blood samples were kept on ice, and serum was separated by centrifugation at 3,000 g for 10 min at 4 °C. 4 µL serum was added to 60 µL ice-cold extraction solvent (methanol: acetonitrile: water at 40:40:20), vortexed vigorously and incubated on ice for at least 5 min. The samples were centrifuged at 16,000 g for 10 min at 4 °C, and the supernatant was transferred to LC-MS tubes for analyses. Calculation of Direct Contribution Fraction of Gluconeogenic Substrates to Glucose. The calculation follows the method as prior reported (50). Briefly, for a metabolite with carbon number C , the labeled isotopologue is noted as [M + i] , and its fraction is noted as L [M+i] , with i being the number of 13 C atoms in the isotopologue. 
The overall 13 C labeling $L_{\mathrm{metabolite}}$ of the metabolite is calculated as the weighted average of the atom-level labeling of all isotopologues, or mathematically,
$L_{\mathrm{metabolite}} = \sum_{i=0}^{C} \frac{i}{C}\, L_{[M+i]}$.
The normalized labeling $L_{\mathrm{metabolite}\leftarrow\mathrm{tracer}}$ is defined as the labeling of a metabolite normalized by the labeling of the infused tracer, as
$L_{\mathrm{metabolite}\leftarrow\mathrm{tracer}} = \dfrac{L_{\mathrm{metabolite}}}{L_{\mathrm{tracer}}}$.
As such, the direct contribution of gluconeogenic substrates to glucose production is calculated algebraically by solving the matrix equation $M \cdot f = L$. Specifically, let $M$ be the matrix and $f$ the vector on the left side, and $L$ the vector on the right side. The operation seeks to minimize $\lVert M \cdot f - L \rVert$, subject to $f \ge 0$. The equation is solved using the R package limSolve (51). The error was estimated using Monte Carlo simulation by running the matrix equation 100 times, each time using randomly sampled $L_{\mathrm{metabolite}\leftarrow\mathrm{tracer}}$ values drawn from a normal distribution based on the mean and SE of the entries in $M$ and $L$. The calculated $f$'s were pooled to calculate the error. This scheme was extended to calculate the mutual interconversions among the metabolites. The peak intensity of each measured isotope was corrected by natural abundance. To calculate the fraction of 13 C-labeled carbon atoms of glucose, pyruvate, lactate, glutamine, and alanine derived from 13 C-glucose and 13 C-lactate, percent 13 C enrichment (%) was first calculated from the data corrected by natural abundance and then normalized based on the serum tracer enrichment.
13 C-Tracers in the Liver Mitochondria.
Fifty microliters of isolated mitochondrial suspension was added to 450 µL modified KPBS (136 mM KCl, 10 mM KH2PO4, 10 mM HEPES, pH 7.25) containing 2 mM U-13 C pyruvate (Cambridge Isotopes, CLM-2440-0.1) and incubated on ice for 5 min. After incubation, samples were centrifuged at 10,000 g for 30 s and washed three times by adding 1 mL ice-cold KPBS. Subsequently, the supernatant was removed and 1 mL ice-cold 80% LC/MS-grade methanol was added. To completely extract metabolites from the mitochondria, the sample was homogenized using a TissueLyser II (Qiagen, 85300) for 5 min at 30 Hz, followed by centrifugation at 20,000 g for 15 min at 4 °C. The supernatant was kept on dry ice, and the pellet was resuspended in 500 µL 80% LC/MS-grade methanol, vortexed vigorously, and allowed to extract on ice. The samples were then centrifuged at 20,000 g for 10 min at 4 °C. The extract was vacuum-dried using a vacuum concentrator (Eppendorf, Concentrator Plus 5305). Dried samples were solubilized in 50 µL LC/MS-grade water. Metabolite analysis was conducted at the BIDMC metabolomics core. The data were normalized by protein concentration.
Data, Materials, and Software Availability.
Previously published data were used for this work (36). Human SNP data were obtained from the Type 2 Diabetes Knowledge Portal (type2diabetesgenetics.org). scATAC-seq and ChIP-seq data were obtained from GEO (GSE111586 and GSE90533, respectively). For the analysis of SLC25A47 gene expression in human tissues and single cells of the human liver, the data were obtained from the Human Protein Atlas (https://www.proteinatlas.org/ENSG00000140107-SLC25A47/tissue and https://www.proteinatlas.org/ENSG00000140107-SLC25A47/single+cell+type/liver, respectively). The data for mouse Slc25a47 expression in tissues was obtained from the GTEx portal (https://www.gtexportal.org/home/gene/SLC25A47).
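To make the direct-contribution calculation described above concrete, the following is a minimal numerical sketch. The study solves the constrained least-squares problem with the R package limSolve; this sketch uses SciPy's non-negative least squares instead, and every matrix entry, standard error, and tracer label below is an invented placeholder rather than data from the study.

```python
# Minimal sketch of the constrained least-squares step described above.
# min ||M . f - L|| subject to f >= 0, with Monte Carlo error propagation.
# All numerical values are hypothetical placeholders.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Entries are normalized labelings L_{metabolite <- tracer} from the four
# infusion experiments (glucose, lactate, glycerol, alanine tracers).
M_mean = np.array([
    [1.00, 0.40, 0.15, 0.05],   # glucose row (illustrative values only)
    [0.30, 1.00, 0.05, 0.10],   # lactate row
    [0.02, 0.03, 1.00, 0.01],   # glycerol row
    [0.05, 0.20, 0.01, 1.00],   # alanine row
])
M_se = 0.05 * np.ones_like(M_mean)           # standard errors (placeholder)
L_mean = np.array([1.00, 0.30, 0.02, 0.05])  # labeling of glucose from each tracer
L_se = 0.05 * np.ones_like(L_mean)

def solve_once(M, L):
    # Non-negative least squares: min ||M f - L||, f >= 0
    f, _ = nnls(M, L)
    return f

# Point estimate of the direct contribution fractions
f_hat = solve_once(M_mean, L_mean)

# Monte Carlo error: resample M and L from normal distributions built from
# their means and SEs, re-solve 100 times, and pool the solutions.
samples = np.array([
    solve_once(rng.normal(M_mean, M_se), rng.normal(L_mean, L_se))
    for _ in range(100)
])
f_err = samples.std(axis=0)

print("direct contribution fractions:", np.round(f_hat, 3))
print("Monte Carlo errors:", np.round(f_err, 3))
```

Swapping the placeholder matrix for measured normalized labelings reproduces the same constrained fit; the non-negativity constraint is what prevents unphysical negative contribution fractions.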
2023-02-24T06:18:26.827Z
2023-02-22T00:00:00.000
{ "year": 2023, "sha1": "689e3f8c080ad233da83423a16e723329a562801", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1073/pnas.2216810120", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d6fce4870976e76e9a09e73f60a45f2b77c86419", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235387593
pes2o/s2orc
v3-fos-license
Fault Diagnosis Method for Certain Equipment Based on Case-based Reasoning
Abstract: With the increasing complexity, informatization and intelligence of combat systems and weapons equipment, traditional fault diagnosis technology cannot meet the requirements for rapid and accurate fault diagnosis of equipment. In this paper, according to the fault characteristics and maintenance status of a certain type of equipment, and combined with case-based reasoning technology, an equipment maintenance system that can realize intelligent query, case accumulation and fault reasoning is proposed. Finally, the feasibility of the method is demonstrated with an example.
Introduction
The rapid upgrading of combat systems and weapons equipment makes the types of equipment increasingly rich and the system structure increasingly complex, which makes equipment support increasingly difficult. A certain type of equipment contains a large number of mechanical, hydraulic, electrical and other parts, with differing fault forms and mechanisms. In the face of various forms of equipment failure, support personnel in most cases still rely on traditional subjective diagnosis, instrument diagnosis and other methods. Subjective diagnosis varies greatly between individuals, rests on empirical judgment and has poor accuracy, while instrument diagnosis is costly, time-consuming and laborious. The existing methods therefore do not support efficient, fast troubleshooting and have difficulty keeping pace with the rhythm of equipment maintenance in a combat setting. In order to solve the problem of the low efficiency of traditional fault diagnosis, many scholars have combined intelligent diagnosis techniques to construct fault diagnosis models and systems and have put forward intelligent fault diagnosis methods. Liu Baojie and others [1] used an integrated fault diagnosis method based on evidence theory and neural networks to solve fault problems in a hydraulic rocket launcher servo system. Zhou Rusheng and others [2] designed an expert system based on the characteristics of the hydraulic system of a missile launcher. Based on the particularities of ship hydraulic devices and their faults, Yang Guang and others [3] established a fuzzy grey correlation diagnosis model. The above fault diagnosis methods realize intelligent diagnosis of equipment faults, but the problems of knowledge rule construction and the knowledge acquisition bottleneck exist to different degrees. Aiming at this practical problem for this type of equipment, this paper puts forward an equipment fault diagnosis method based on case-based reasoning, realized as an equipment maintenance system that supports intelligent query, case accumulation and fault reasoning, in order to improve the ability to quickly locate and diagnose equipment faults.
Equipment Fault Reasoning Architecture
The equipment fault reasoning method based on case-based reasoning is mainly composed of data acquisition, case base construction and fault diagnosis. The idea is to learn from historical cases and solutions that have already occurred, and to adjust and modify similar historical cases in light of the target case phenomenon. The fault diagnosis diagram is shown in figure 1.
Fault data acquisition
The fault cases used in this work mainly come from collecting and organizing the troubleshooting and maintenance records generated during the use of the equipment, including the fault maintenance manual and the maintenance service site work summaries.
The fault maintenance manual contains the descriptions and solutions of the common faults of the equipment, provided by the equipment manufacturer and the user, including the fault phenomenon, the faulty equipment, the fault cause, the fault location and so on. The collected fault data are sorted uniformly, and low-value and redundant data items are eliminated to improve the quality and integrity of the case data.
Fault data classification
The fault case data are divided into mechanical system, electronic system and hydraulic system according to the equipment system.
Fault case feature extraction
The characteristic attributes of each failure case are defined in a standard way and can be expressed as Pi = (P i1, P i2, P i3, P i4), (i = 1, 2, ⋯, n), where n is the number of fault cases. Specifically, P i1 indicates the operating environment of the equipment, including temperature, humidity, salinity and altitude; recording this information provides a basis for quickly matching similar failures triggered in the same operating environment. P i2 indicates the equipment to which the fault belongs, described at three levels, which establishes the correlation relationship for subsequent failures of the same class of equipment and the same class of parts. P i3 represents the fault phenomenon. By extracting feature words from the continuously input information data, the key words of the fault phenomenon are obtained, with no limit on their number. This attribute is the core attribute of the system and the basis of analysis and reasoning. P i4 represents the fault solution, which corresponds to the P i3 attribute.
Fault classification into the case base
After the fault case features are extracted, the fault features are saved to the case library according to the equipment system type.
Fault inference based on the RBF neural network
The RBF neural network has the advantages of fast convergence and good nonlinear mapping ability in fault diagnosis. During the whole fault reasoning process, the RBF neural network is equivalent to a similarity computing network: the similarity between the feature vector of the target case and the feature vectors of known cases is calculated by the activation function of the hidden layer [4][5]. The RBF network structure is shown in figure 2, with three layers: input layer, hidden layer and output layer. The hidden layer uses a Gaussian function, and the weighted network output is
$y_j(x) = \sum_{i=1}^{h} \omega_{ij}\, g_i(x), \qquad g_i(x) = \exp\!\left(-\frac{\lVert x - c_i \rVert^2}{2\sigma_i^2}\right),$
where y j (x) is the output of the j-th node of the output layer, g i (x) is the output of the i-th node of the hidden layer, ω ij is the weight from the hidden layer to the output layer, c i and σ i are the center and variance of the Gaussian function at the i-th node, ‖•‖ is the distance between the input x and c i, and n, h, m are the numbers of nodes in the input, hidden and output layers, respectively.
Fault diagnosis process
The steps of fault diagnosis are as follows: Step 1, obtain the historical fault case data of the equipment; preprocess, classify and extract features from the historical fault case data in turn, obtain the corresponding feature attributes of each kind, and establish the fault case database accordingly. Step 2, use the case data in the fault case database to train the RBF neural network and obtain the trained RBF neural network. Step 3, feature extraction for the fault to be diagnosed. After the feature attributes of the target case are identified, the TF-IDF keyword extraction algorithm is used to extract the key features of the case. Step 4, case retrieval reasoning.
Query past cases through the key feature attributes, that is, retrieve the case base. After quantifying the extracted key feature attributes, all the cases in the case library are clustered according to their word vectors to find the cases with the same feature attributes as the target case, that is, similar cases. The fault features of each similar case are transformed into word vectors and input into the trained RBF neural network. Step 5, output the diagnostic results. According to the output of the RBF neural network, the fault cases with high similarity are selected to locate the fault and find the fault cause and its solution. Step 6, case learning and adjustment. According to the output results, if the same source case as the target case is retrieved, a definite solution is obtained, the target problem can be solved, and the target case is discarded in order to avoid redundancy. If a source case similar to the target case is retrieved, a proposed solution to the target case is obtained from the solution of the similar case, the case is corrected according to the actual situation, and the solution is then confirmed and saved to the case base as a new case.
Examples of fault diagnosis
The verification system for this fault diagnosis method runs on the Windows operating system with a B/S architecture; it was developed and debugged in the PyCharm 2018.1.3 and Java 8.0 programming environments using the Python 3.8 programming language, with MySQL as the system database. The system consists of three main modules: the fault case input module, the case reasoning module and the system management module. The system framework structure is shown in figure 3.
Figure 3. System framework diagram
Target case: under conditions of temperature 5 °C, humidity 58%, air pressure 976 hPa, altitude 400 m, salinity 1 ppm, and a day-night temperature difference of 10 °C, during a loading-and-unloading vehicle hoisting operation, vehicle No. 3 reported that the lifting equipment was operating slowly. First, the fault description is input and the fault features are extracted; the intelligent identification results are shown in Table 1. The fault characteristics after vectorization are clustered, and the results are shown in Table 3.
Table 3. Fault characteristic clustering results
After clustering, the fault features of each case are transformed into word vectors and input into the radial basis function neural network. The similarity between each case and the input case is obtained and sorted. In order of similarity from high to low, the fault features and solution features of each case are displayed, as shown in Table 4.
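To make the retrieval-and-similarity step concrete, the following is a minimal sketch of how fault descriptions can be vectorized with TF-IDF and ranked with a Gaussian (RBF) similarity. The case texts, solutions and the width parameter are invented for illustration, and the trained output-layer weights of the full RBF network are omitted; this is a sketch of the idea, not the system described above.

```python
# Sketch: TF-IDF vectorization of fault descriptions plus Gaussian (RBF)
# similarity ranking. Case texts and parameters are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

historical_cases = [
    "lifting equipment slow operation hydraulic pressure low oil leak",
    "lifting equipment no response electrical relay fault",
    "hydraulic pump abnormal noise filter blocked",
]
solutions = [
    "check hydraulic oil level and replace leaking seal",
    "inspect relay and control circuit wiring",
    "clean or replace hydraulic filter",
]
target_case = "loading vehicle lifting equipment operation slow"

# Turn fault descriptions into word vectors (feature extraction step).
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(historical_cases + [target_case]).toarray()
case_vecs, target_vec = X[:-1], X[-1]

def rbf_similarity(x, centers, sigma=1.0):
    # Gaussian hidden-layer outputs g_i(x) = exp(-||x - c_i||^2 / (2 sigma^2));
    # here each stored case plays the role of one hidden-node center c_i.
    d2 = ((centers - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

scores = rbf_similarity(target_vec, case_vecs)
for idx in np.argsort(scores)[::-1]:
    print(f"similarity {scores[idx]:.3f}: {historical_cases[idx]} -> {solutions[idx]}")
```

In the full system the Gaussian outputs would be combined through trained output-layer weights before ranking; the sketch keeps only the similarity layer to show how the highest-scoring historical cases surface their recorded solutions.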
2021-06-10T20:03:23.113Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "f5905d8811556bc0e54b0e5e74e0ee53eabdcf67", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1939/1/012094", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "f5905d8811556bc0e54b0e5e74e0ee53eabdcf67", "s2fieldsofstudy": [ "Engineering", "Physics", "Computer Science" ], "extfieldsofstudy": [ "Physics" ] }
264477181
pes2o/s2orc
v3-fos-license
ANISOTROPIC COSMOLOGICAL MODEL IN f(R,T) THEORY OF GRAVITY WITH A QUADRATIC FUNCTION OF T
In this paper, we study spatially homogeneous and anisotropic Bianchi type-I space-time filled with perfect fluid within the framework of f(R, T) theory of gravity for the functional form f(R, T) = R + 2f(T) with f(T) = αT + βT², where α and β are constants. Exact solutions of the gravitational field equations are obtained by assuming the average scale factor to obey a hybrid expansion law, and some cosmological parameters of the model are derived. Two special cases, leading to the power-law expansion and the exponential expansion, are also considered. We investigate the physical and geometrical properties of the models by studying the evolution graphs of some relevant cosmological parameters such as the Hubble parameter (H), the deceleration parameter (q), etc.
INTRODUCTION
More than two decades have passed since the first observational results from Type Ia Supernovae [1][2][3], with strong support from a number of astrophysical and cosmological observations such as the Cosmic Microwave Background (CMB), the Wilkinson Microwave Anisotropy Probe (WMAP), Large Scale Structure (LSS), Baryon Acoustic Oscillations (BAO), galaxy redshift surveys [4][5][6][7][8][9][10], etc., showed that the universe at present is in a state of accelerated expansion. It is accepted as true that there was also a cosmic acceleration which occurred at the very early epoch of the universe. The early-time cosmic acceleration, called inflation, although there is no known direct detection of it, has theoretical explanations, but the root cause of the late-time cosmic acceleration, which does have direct detection, is yet to be ascertained. Since matter contributes attractive force and positive pressure that decelerate the rate of cosmic expansion, as a resolution of this puzzling issue a substantial energy component apart from the baryonic matter is hypothesized to be present in the universe to speed up the cosmic expansion. This is possible only when an unusual component with large negative pressure, dubbed dark energy, covering nearly 68.3% of the total energy content of the universe, is present to counteract the gravitational attraction of the baryonic matter. Within the framework of General Relativity, the most efficient candidate for dark energy is the cosmological constant Λ, as it works well with the observational data. But due to its problematic nature with the fine-tuning and cosmic coincidence problems, various other dark energy models such as quintessence, k-essence, tachyon, phantom, holographic dark energy, Chaplygin gas models, etc., have been proposed in the literature. The problem of late-time cosmic acceleration has also been approached with some alternative theories of gravity, popularly known as modified theories of gravity, which are developed by modifying the geometric part of the Einstein-Hilbert action. Among the various modified theories of gravity, the simplest and most studied one is the f(R) theory of gravity, the action of which is constructed from the standard Einstein-Hilbert action simply by taking an arbitrary function f(R) in place of R, where R is the Ricci scalar curvature. The other most interesting and viable alternative to General Relativity is the f(R, T) theory of gravity proposed by Harko et al.
[11] in which the gravitational Lagrangian in Einstein-Hilbert action is given by an arbitrary function , of the Ricci scalar and the trace of the stress-energy tensor .In their work, they have obtained the gravitational field equations in the , gravity in the metric formalism and presented the field equations for the three explicit forms of the functional , : (i) , 2 , (ii) , , iii , .Harko et al. also derived the equations of motion of test particles together with the Newtonian limits in , gravity models.Further, they have investigated the constraints on the magnitude of the extra-acceleration on the precession of the perihelion of the planet Mercury.Houndjo [12] discussed transaction of matter dominated phase to an accelerated expansion phase by developing the cosmological reconstruction of , theory of gravity.Since then, many researchers have studied cosmological dynamics in , theory of gravity as it takes care of the early time inflation as well as the late time cosmic acceleration.A number of authors have also investigated Bianchi cosmological models in , theory of gravity in different contexts as Wilkinson Microwave Anisotropy Probe (WMAP) and some other experimental tests support the existence of an anisotropic phase in the early era which might have been wiped out in the course of cosmic evolution resulting in the present isotropic phase.Adhav [13] investigated LRS Bianchi Type I cosmological model with perfect fluid, Reddy et al. [14] explored Bianchi Type III and Kaluza-Klein cosmological model, Chandel and Ram [15] generated a new class of solutions of field equations from a set of known solutions for a Bianchi Type III cosmological model with perfect fluid, Chaubey and Shukla [16] studied a new class of Bianchi Type III, V, VI models in presence of perfect fluid, Sahoo and Mishra [17] investigated Kaluza-Klein dark energy model in the presence of wet dark fluid, Ladke et al. [18] constructed higher dimensional Bianchi Type-I cosmological model, Sahoo et al. [19] investigated an axially symmetric space-time in presence of perfect fluid source, Agrawal and Pawar [20] investigated plane symmetric cosmological model in the presence of quark and strange quark matter, Bhoyar [21] talked about non-static plane symmetric cosmological model with magnetized anisotropic dark energy, Yadav et al. [22] searched the existence of bulk viscous Bianchi-I embedded cosmological model by taking into account the simplest coupling between matter and geometry, Yadav et al. [23] investigated a bulk viscous universe and estimated the numerical values of some cosmological parameters with observational Hubble data and SN Ia data.Singh and Beesham [24] explored a plane symmetric Bianchi Type I model by considering a specific Hubble parameter which yields a constant deceleration parameter, Chaubey et al. [25] considered general class of anisotropic Bianchi cosmological models in , gravity with dark energy in viscous cosmology, Bhattacharjee et al. [26] presented modelling of inflationary scenarios, Tiwari et al. [27] studied Bianchi type I cosmological model for a specific choice of the function of the trace of the energy momentum tensor.Modifications and generalisations of , theory of gravity are also considered in the literature.Singh and Bishi [28] studied Bianchi Type III cosmological model in the presence of cosmological constant Λ.Moraes et al. [29] investigated static wormholes in modified , gravity.Moraes and Sahoo [31] have proposed a new hybrid shape function for wormhole.Azmat et al. 
[31] studied viscous anisotropic fluid and constructed corresponding dynamical equations and modified field equations in , theory of gravity. Tretyakov [32] discussed the possibility of a further generalization of , gravity by incorporating higher derivative terms in the action and demonstrated that inflationary scenarios appear quite naturally in the theory.Recently, several authors have studied various other cosmological scenarios in the framework of , theory of gravity [33][34][35][36][37][38][39].Motivated by the above-mentioned works, we focus our present work in studying spatially homogeneous and anisotropic Bianchi type-I universe with perfect fluid source in , theory of gravity for the functional form , 2 , where α and β are constants.The field equations are solved by assuming the average scale factor in the form of hybrid expansion law.We organize the paper as follows: in section 2, we give a brief review of the , theory of gravity.In section 3, we derive the gravitational field equations for the Bianchi type-I metric.Exact solutions of the field equations are obtained in section 4. In section 5, some physical and kinematical properties of the model are discussed by graphically representing the evolution of graphs of some parameters of cosmological importance.Two particular scenarios are also examined when the expansion of the universe is governed by power-law expansion and exponential expansion only.We summarize the main results with some concluding remarks in section 6. BRIEF REVIEW OF 𝒇 𝑹, 𝑻 GRAVITY In , gravity proposed by Harko et al. (2011), the action is taken as where , is an arbitrary function of the Ricci Scalar and of the trace of the stress-energy tensor of matter defined by Here, is the matter Lagrangian that generates a specific set of field equations for each choice of .By assuming the Lagrangian of matter to depend only on the metric tensor components and not on its derivatives, the stress-energy tensor can be obtained as By varying the action (1) with respect to the metric tensor components , the field equations of , theory of gravity in the metric formalism are obtained as where, , and , are the partial derivatives of , with respect to and respectively, ∇ is the covariant derivative,  ∇ ∇ is the D'Alembert operator and The stress-energy tensor of matter is assumed to take the perfect fluid form so that where and are respectively the density and pressure of the perfect fluid. For the choice , we thus have For the functional form where is an arbitrary function of , the gravitational field equations of , gravity are obtained from Eq. ( 4), as where .In view of eq (6), the field equations ( 9) become For the choice where and are constants, the eq.( 10), in presence of a time varying cosmological constant Λ, reduces to THE METRIC AND FIELD EQUATIONS The spatially homogeneous and anisotropic Bianchi type-I metric is given by where the directional scale factors A, B and C are functions of the cosmic time t alone.In comoving coordinates, the field equations ( 12) take the form where an overhead dot denotes differentiation with respect to . COSMOLOGICAL SOLUTION OF THE FIELD EQUATIONS Here, we have four field equations with six unknowns , , , , and Λ.So, in order to obtain a complete solution, we have to consider two extra conditions. Therefore, we consider the equation of state for perfect fluid as where is a constant.And, the average scale factor defined by to obey the hybrid expansion law proposed by Akarsu et al. 
[40]; where and are non-negative constants and represents the present value of the scale factor and represents the present age of the universe. PHYSICAL AND GEOMETRICAL PROPERTIES OF THE MODEL For our model, some important cosmological parameters are: The spatial volume The mean Hubble parameter is The deceleration parameter The expansion scalar The shear scalar The anisotropy parameter where , , are the directional Hubble parameters.The energy density (), the pressure () and the cosmological constant (Λ) are obtained as To explore the physical and geometrical properties of the model from the evolution graphs of the cosmological parameters, we take 0.6, 0.2, 1, 1, 0.7, 0.3, 1, 0.1, 0.1, 1.From the graphs, we observe that the Hubble parameter and the deceleration parameter are decreasing functions of cosmic time.The energy density and pressure are also decreasing function of cosmic time.Figure 5, shows that the cosmological constant Λ decreases rapidly at initial stage and tend to zero in the course of evolution.The hybrid expansion law ( 20) is a combination of the power law expansion and the exponential expansion.It yields the power law expansion for 0 and the exponential expansion for 0. Case (i): When 0 , equation ( 20) reduces to , which is the power-law of expansion.Then, equations ( 21)-( 23) yield Thus, when the expansion of the universe is governed by a power law expansion, then From the Figures 6, 7, 8 and 9, we see that the Hubble parameter (H), energy density (), pressure () are decreasing functions of cosmic time and the cosmological constant (Λ) decreases initially to negative value and then increases tending to zero as time evolves.The deceleration parameter may be positive, negative or zero depending on the values of .For 1, the expansion of the universe corresponding to the constructed model accelerates.For 1, the expansion decelerates and for 1, the universe undergoes uniform expansion.From the graphs, we observe that the energy density, pressure and cosmological constant initially assume negative values and then tend to zero in the course of time. CONCLUDING REMARKS In this paper, we study Bianchi type-I cosmological model within the framework of , theory of gravity considering the functional , 2 , where and are constants.We consider the expansion of the universe to follow a hybrid expansion law and obtain exact solution of the field equations.Two particular cases are also considered when the expansion of the universe is governed by a power law and an exponential law only.We investigate the physical and kinematical properties of various cosmological parameters in all these three cases and find that • Both the hybrid expansion law and power law of expansion induce an initial singular model of the universe as the metric coefficients , and vanish at the initial moment.In case of exponential expansion law, the metric coefficients , and become constants at 0. • For hybrid law and power law of expansion, the physical parameters , , , assume very high value at the initial epoch and tend to zero for large .Also, the volume of the universe is zero at the beginning and increases exponentially with time .Hence, the universe starts with the Big Bang singularity at 0 and then expand throughout the evolution.In case of exponential law, the physical parameters , become constants.Volume is initially very low and increases exponentially in the course of time while the other parameters show similar behavior as hybrid law and power law of expansion. 
• In the hybrid expansion law, the deceleration parameter approaches −1 for large cosmic time. In the case of power-law expansion, it may be positive, negative or zero, showing thereby that the universe may undergo accelerating expansion, decelerating expansion or uniform expansion. The expression for the deceleration parameter in the case of the exponential law of expansion shows that the expansion of the universe is decelerating throughout the evolution.
• For the hybrid law and power law of expansion, the energy density and pressure increase rapidly at the beginning but decrease in the course of evolution and tend to 0 at late times. But in the case of the exponential expansion law, the energy density and pressure are negative and increase exponentially throughout the evolution of the universe, tending to 0 as time → ∞.
• The cosmological constant Λ decreases initially and then increases and tends to 0 at late times for the hybrid law as well as the power law of expansion. In the case of the exponential law, the cosmological constant is negative and increases in the course of time, tending to zero at late times.
Figure 1. Variation of the Hubble parameter vs. cosmic time.
Figure 2. Variation of the deceleration parameter vs. cosmic time.
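As a quick cross-check of the late-time behaviour summarized above, the following sketch computes the Hubble and deceleration parameters symbolically. The specific scale-factor form assumed here, a(t) = a0 (t/t0)^α exp[β(t/t0 − 1)], follows the hybrid expansion law of Akarsu et al. as it is commonly written; the paper's own equations (20)-(23) are not reproduced in this text, so the exact expression and parameter names are assumptions.

```python
# Symbolic check of H(t) and q(t) for an assumed hybrid expansion law
# a(t) = a0 * (t/t0)**alpha * exp(beta*(t/t0 - 1)). The functional form is an
# assumption standing in for the paper's Eq. (20).
import sympy as sp

t, t0, a0, alpha, beta = sp.symbols("t t0 a0 alpha beta", positive=True)

a = a0 * (t / t0) ** alpha * sp.exp(beta * (t / t0 - 1))
H = sp.simplify(sp.diff(a, t) / a)                            # H = a'/a
q = sp.simplify(-sp.diff(a, t, 2) * a / sp.diff(a, t) ** 2)   # q = -a'' a / a'^2

print("H(t) =", H)                                  # alpha/t + beta/t0
print("q(t) =", q)

# Limiting behaviour: power-law-like at early times, de Sitter-like late,
# consistent with q -> -1 at large t stated in the conclusions above.
print("q as t -> oo:", sp.limit(q, t, sp.oo))               # -> -1
print("q for beta = 0:", sp.simplify(q.subs(beta, 0)))      # -> (1 - alpha)/alpha
```

The beta = 0 limit reproduces the constant deceleration parameter of a pure power law, while any beta > 0 drives q toward −1 at late times, which is the transition from decelerated to accelerated expansion the hybrid law is designed to capture.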
2023-10-26T15:08:42.848Z
2023-09-04T00:00:00.000
{ "year": 2023, "sha1": "5815007a8447e21c2becb219cd8f9fd1ff3f04d5", "oa_license": "CCBY", "oa_url": "https://periodicals.karazin.ua/eejp/article/download/22118/20451", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "bccd0e362fe3b9fdde18f54a5de1e38e01fadaec", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
229677993
pes2o/s2orc
v3-fos-license
The Theory of Resummed Quantum Gravity: Phenomenological Implications
We present an overview of the phenomenological implications of the theory of resummed quantum gravity. We discuss its prediction for the cosmological constant in the context of the Planck scale cosmology of Bonanno and Reuter, its relationship to Weinberg's asymptotic safety idea, and its relationship to Weinberg's soft graviton resummation theorem. We also discuss constraints and consistency checks of the theory.
Introduction
We use the well-known elementary example of "summation" to illustrate why resummation can be worth its pursuit:
$\sum_{n=0}^{\infty} x^n = \frac{1}{1-x}. \qquad (1)$
Even though the mathematical tests for convergence of the series would only guarantee convergence for |x| < 1, this geometric series is summed to infinity to yield the analytic result in (1), which is well-defined except for a pole at x = 1. The result of the summation yields a function that is well-defined in the entire complex plane except for the simple pole at x = 1; infinite-order summation has yielded behavior very much improved from what one sees order-by-order in the respective series. We are thus motivated to "resum" series that are already being summed, to seek improvement in our knowledge of the represented function. This we illustrate as follows: On the LHS (left-hand side) we have the original Feynman series for a process under study. On the RHS (right-hand side) are two versions of resumming this original series. One, labeled exact, is an exact re-arrangement of the original series. The other, labeled approx., only agrees with the LHS to some fixed order N in the expansion parameter. For some time now, discussion has occurred as to which version is to be preferred [1]. Recently, a related, more general version of this discussion has arisen for quantum gravity. Whether quantum gravity is even calculable in relativistic quantum field theory is a fair but difficult question. Answers vary. According to string theory [2] the answer is no: the true fundamental theory entails a one-dimensional Planck scale superstring. If we accept loop quantum gravity [3] we also find that the answer is no: the fundamental theory entails a space-time foam with a Planck scale loop structure. The answer is also no in the Horava-Lifshitz theory [4], because the fundamental theory requires Planck scale anisotropic scaling for space and time. Kreimer [5] suggests that quantum gravity is leg-renormalizable, such that the answer is yes. Weinberg [6] suggests that quantum gravity may be asymptotically safe, with an S-matrix that depends only on a finite number of observable parameters, due to the presence of a non-trivial UV fixed point with a finite dimensional critical surface; this is equivalent to an answer of yes. We would note that the authors in Refs. [7][8][9][10][11][12], using Wilsonian [13] field-space exact renormalization group methods, obtain results which support Weinberg's UV fixed point. The results in Ref. [14] also give support to Weinberg's asymptotic safety suggestion. In what follows, the YFS [15,16] version of the exact example is extended to resum the Feynman series for the Einstein-Hilbert Lagrangian for quantum gravity. In conformity with the example in eq.(1), the resultant resummed theory, resummed quantum gravity (RQG), is very much better behaved in the UV compared to what one would estimate from that Feynman series. As we show in Refs.
[18][19][20][21] the RQG realization of quantum gravity leads to Weinberg's UV-fixed-point behavior for the dimensionless gravitational and cosmological constants -the resummed theory is actually UV finite. RQG and the latter results are reviewed in Section 2. The RQG theory, taken together with the Planck scale inflationary [24,25] cosmology formulation in Refs. [22,23] from the asymptotic safety approach to quantum gravity in Refs. [7][8][9][10][11][12], allows us to predict [27] the cosmological constant Λ. The prediction's closeness to the observed value [28,29] motivates us to discuss its reliability and we argue [30] that its uncertainty is at the level of a factor of O (10). Constraints on susy GUT's follow. We present the Planck scale cosmology that we use and the latter results in Section 3. We note that the pioneering result of Weinberg [17] on summing soft gravitons is an important point of contact for our approach to quantum gravity. Specifically, in an on-shell → process with transition rate Γ 0 without soft graviton effects, Weinberg showed that inclusion of the virtual soft graviton effects results in the transition rate YFS-type soft resummation and its extension to quantum gravity was also worked-out by Weinberg in Ref. [17]. The authors in Ref. [26] also proposed the attendant choice of the scale ∼ 1/ used in Refs. [22,23]. where is the infrared cutoff and is the Weinberg [17] soft cutoff which defines what is meant by infrared. is given by where is Newton's constant, = +1(−1) when particle is outgoing (incoming), respectively, and is the relative velocity = 1 − 2 2 ( ) 2 for particles and with masses , and four momenta , , respectively. In the 2-to-2 case where 1 and 2 are incoming, 3 and 4 are outgoing, and all masses have the same value , (4) shows a growth of the damping represented by with large values of as the exponential of −(4 / ) ln 2 ln( / ) for large values of the cms energy squared for the wide-angle case with the scattering angle at 90 in the center of momentum system. In our discussion below we recover this same type of growth of the analog of with large invariant squared masses in the context of resumming the large IR regime of quantum gravity. Overview of Resummed Quantum Gravity As the Standard Theory of elementary particles contains many point particles, to investigate their graviton interactions, we consider the Higgs-gravition extension of the Einstein-Hilbert theory, already studied in Refs. [33,34]: is the curvature scalar, is the determinant of the metric of space-time ≡ + 2 ℎ ( ), and = √ 8 . We expand [33,34] about Minkowski space with = diag{1, −1, −1, −1}. ( ), our representative scalar field for matter, is the physical Higgs field and ( ) , ≡ ( ). We have introduced Feynman's notation¯ ≡ 1 2 + − for any tensor . In (5) and in what follows, ( ) is the bare (renormalized) scalar boson mass. We set presently the small observed [28,29] value of the cosmological constant to zero so that our quantum graviton, ℎ , has zero rest mass in (5). The Feynman rules for (5) were essentially worked out by Feynman [33,34], including the rule for the famous Feynman-Faddeev-Popov [33,35,36] ghost contribution required for unitarity with the fixing of the gauge (we use the gauge in Ref. [33], h = 0). As we have shown in Refs. 
[18][19][20], the large virtual IR effects in the respective loop integrals for the scalar propagator in quantum general relativity can be resummed to the exact result The form for ′′ ( ) holds for the UV(deep Euclidean) regime , so that Δ ′ ( )| resummed falls faster than any power of | 2 |. See Ref. [18] for the analogous result for m=0. Here, − Σ ( ) is the 1PI scalar self-energy function so that Δ ′ ( ) is the exact scalar propagator. The residual Σ ′ starts in O ( 2 ). We may We follow D.J. Gross [31] and call the Standard Model the Standard Theory henceforth. We treat spin as an inessential complication [32]. Our conventions for raising and lowering indices in the second line of (5) are the same as those in Ref. [34]. By Wick rotation, the identification −| 2 | ≡ 2 in the deep Euclidean regime gives immediate analytic continuation to the result for ′′ ( ) when the usual − , ↓ 0, is appended to 2 . drop it in calculating one-loop effects. When the respective analogs of Δ ′ ( )| resummed are used for the elementary particles, all quantum gravity loops are UV finite [18][19][20]. Specifically, extending our resummed propagator results to all the particles in the ST Lagrangian and to the graviton itself, we show in the Refs. [18][19][20] 2.56 × 10 4 is defined in Refs. [18][19][20]. For the dimensionless cosmological constant * we use the VEV of Einstein's equation + Λ = − 2 , in a standard notation, to isolate [27] Λ . In this way, we find the deep UV limit of Λ then becomes, allowing is the fermion number of particle , is the effective number of degrees of freedom of and = ( ( )). * vanishes in an exactly supersymmetric theory . Here, we have used the results that a scalar makes the contribution to Λ given by and that a Dirac fermion contributes −4 times Λ to Λ, where = ln 2 with ( ) = 2 2 2 for particle j with mass . We note that the UV fixed-point calculated here, ( * , * ) (0.0442, 0.0817), and the estimate ( * , * ) ≈ (0.27, 0.36) in Refs. [22,23] are similar in that in both of them * and * are positive and are less than 1 in size. Further discussion of the relationship between the two fixed-point predictions can be found in Refs. [18]. Review of Planck Scale Cosmology and an Estimate of Λ The authors in Ref. [22,23], using the exact renormalization group for the Wilsonian [13] coarse grained effective average action in field space in the Einstein-Hilbert theory, as discussed in Section 1, have argued that the dimensionless Newton and cosmological constants approach UV fixed points as the attendant scale goes to infinity in the deep Euclidean regime. This is also in agreement with what we have found in RQG. The contact with cosmology one may facilitate via a connection between the momentum scale characterizing the coarseness of the Wilsonian graininess of the average effective action and the cosmological time . The authors in Refs. [22,23], using this latter connection, arrive at the following extension of the standard cosmological equations: Here, is the density and ( ) is the scale factor with the Robertson-Walker metric given as where = 0, 1, −1 correspond respectively to flat, spherical and pseudo-spherical 3-spaces for constant time t. The attendant equation of state is ( ) = ( ), where is the pressure. The aforementioned relationship between and the cosmological time is ( ) = with the constant > 0 determined from constraints on physical observables. These follow from the spin independence [17,18,37] of a particle's coupling to the graviton in the infrared regime. 
We note the use here in the integrand of 2 2 0 rather than the 2( ì 2 + 2 ) in Ref. [21], to be consistent with = −1 [38] for the vacuum stress-energy tensor. Using the UV fixed points for 2 ( ) ≡ * and Λ( )/ 2 ≡ * obtained independently, the authors in Refs. [22,23] solve the cosmological system in Eqs. (6). They find, for = 0, a solution in the Planck regime where 0 ≤ ≤ class , with class a "few" times the Planck time , which joins smoothly onto a solution in the classical regime, > class , which coincides with standard Friedmann-Robertson-Walker phenomenology but with the horizon, flatness, scale free Harrison-Zeldovich spectrum, and entropy problems all solved purely by Planck scale quantum physics. We now recapitulate how to use the Planck scale cosmology of Refs. [22,23] and the UV limits { * , * } in RQG [18][19][20] in Refs. [21] to predict [27] the current value of Λ. Specifically, the transition time between the Planck regime and the classical Friedmann-Robertson-Walker(FRW) regime is determined as ∼ 25 in the Planck scale cosmology description of inflation in Ref. [23]. In Ref. [27] we show that, starting with the quantity Λ ( ) ≡ Λ( ) 8 ( ) , we get, following the arguments in Refs. [39] ( is the time of matterradiation equality), 13.7 × 10 9 yrs. is the age of the universe. The estimate in (8) is close to the experimental result [29] In Ref. [27], detailed discussions are given of the three issues of the effect of various spontaneous symmetry breaking energies on Λ, the effect of our approach to Λ on big bang nucleosynthesis(BBN) [41], and the effect of the time dependence of Λ and on the covariance [42][43][44] of the theory. We refer the reader to the respective discussions in Ref. [27]. In Ref. [30], we have argued, regarding the issue of the error on our estimate, that the structure of the solutions of Einstein's equation, taken together with the Heisenberg uncertainty principle, implies the constraint ≥ where Λ( ) follows from (8) (see Eq.(52) in Ref. [27]). This constraint's equality gives the estimate [27] of the transition time, t = / = 1/ tr , from the Planck scale inflationary regime [22,23] to the Friedmann-Robertson-Walker regime via the implied value of . On solving this equality for we get 25.3, in agreement with the value 25 implied by the numerical studies in Ref. [22,23]. This agreement suggests an error on t at the level of a factor O (3) or less and an uncertainty on Λ reduced from a factor of O (100) [27] to a factor of O (10). One may ask what would happen to our estimate if there were a susy GUT theory at high scales? Even though the LHC has yet to see [46] any trace of susy, it may still appear. In Ref. [27], for definiteness and purposes of illustration, we use the susy SO(10) GUT model in Ref. [45] to illustrate how such a theory might affect our estimate of Λ. We show that either one needs a very high mass for the gravitino or one needs twice the usual particle content with the susy partners of the new quarks and leptons at masses much lower than their partners' masses -see Ref. [27].
2020-12-29T02:15:51.701Z
2020-12-23T00:00:00.000
{ "year": 2021, "sha1": "9a9537b392dd0991cb4c1e465c3561d5da97beae", "oa_license": "CCBYNCND", "oa_url": "https://pos.sissa.it/390/674/pdf", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "9a9537b392dd0991cb4c1e465c3561d5da97beae", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
37342780
pes2o/s2orc
v3-fos-license
Image Segmentation to Distinguish Between Overlapping Human Chromosomes In medicine, visualizing chromosomes is important for medical diagnostics, drug development, and biomedical research. Unfortunately, chromosomes often overlap and it is necessary to identify and distinguish between the overlapping chromosomes. A segmentation solution that is fast and automated will enable scaling of cost effective medicine and biomedical research. We apply neural network-based image segmentation to the problem of distinguishing between partially overlapping DNA chromosomes. A convolutional neural network is customized for this problem. The results achieved intersection over union (IOU) scores of 94.7% for the overlapping region and 88-94% on the non-overlapping chromosome regions. Introduction Neural networks are a powerful approach to segmenting images, including for street scenes and biomedical images of tissue. In medicine, visualizing chromosomes is important for medical diagnostics, drug development, and biomedical research. Unfortunately, chromosomes often overlap and it is necessary to identify and distinguish between the overlapping chromosomes. For example, some diseases are associated with particular chromosomes or the existence of more or fewer than the expected number of chromosomes. Challenges to this problem include that the overlapping objects may be nearly identical and that it is arbitrary which object is considered the first object and which one the second. Furthermore, overlapping chromosomes may look like one larger chromosome, may criss-cross, or one may be almost entirely on top of the other. A segmentation solution that is fast and automated will enable scaling of cost effective medicine and biomedical research. Traditional methods of distinguishing between overlapping chromosomes involved printing and cutting out individual chromosomes by hand, thresholding on histogram values of pixels, geometric analysis of chromosome contours, among others, and required human intervention when partial overlaps occur. In this work, we apply neural network-based image segmentation to the problem of distinguishing between partially overlapping human chromosomes. 1 A convolutional neural network, based on U-Net, is customized for this problem. The model is designed so that the output segmentation map has the same dimensions as the input image. To reduce computation time and storage, the model is also simplified. This is because the dimensions of the input image, the set of potential objects in the image, and the set of potential chromosome shapes, are all small, which reduces the scope of the problem, the required capacity of the model, and thus the modeling needs. Various hyperparameters of the model are explored and tested. Section 2 outlines the background, Section 3 describes the data and preprocessing, Section 4 elaborates on the model, Section 5 summarizes the results, and Section 6 concludes with future work. Cytogenetics and Molecular Cytogenetics Cytogenetics is the study of chromosomes, including their numbers and structures up to the nucleotids scale [4] [13] . Pionneering works in species from flies to maise [15] enabled the understanding of genes and their inheritance. Human cytogenetics started in 1956 with the discovery of the exact number of chromosomes in humans [21], soon followed by the discovery that structural chromosomal or number anomalies can be be associated with cancer or developmental diseases. Human cytogenetics become a diagnostic tool. 
Cytogenetics is also used as a biological dosimeter in radiobiology, which is the study of the effect of radiation on living beings [5]. Digital Image Processing in Cytogenetics The advent of molecular cytogenetics and fluorescent probes (FISH or Fluorescent in-situ Hybridization) yields insights otherwise inaccessible by stained-based cytogenetics. Computers and dedicated software applications started to replace scissor cutouts of black and white pictures of chromosomes for karyotyping. New algorithms and application were developed to process and interpret fluorescent images, study genomic hybridization, and measure the telomere length Q-FISH [12] [18] [2]. Quantitative methods were developed to become metaphase-free and array-based [4]. Metaphasic chromosomes were used to detect targeted chromosomal anomalies [21] or for QFISH [22]. Computer based chromosome segmentation and classification is still an open problem [1], particularly the resolving of overlapping chromosomes. Up to now, approaches rely on geometric approachs based on contour analysis [7], finding a skeleton [19] [17] [16]. These methods can be rule-based or involve classifiers with hand crafted features. Even for a case as simple as a pair of crossing chromosomes forming a cross, there is ambiguity when it comes to reassembling the pieces to reconstitute the two chromosomes [8]. Grisan et al. developed a tree search to address this issue [6]. Contour-Based Resolution of Crossing Chromosomes Chromosomes can be DAPI stained in fluorescence imaging, or stained with giemsa in conventional cytogenetics. After adaptive thresholding and labeling of connected components of binary particles, images of chromosomes can be isolated. Those images can yield single chromosomes, touching chromosomes or overlapping chromosomes. In the following emblematic example taken from a metaphase, shown in Figure 1, a polygonal approximation is computed from the chromosome contour and some remarkable points can be isolated. The four points corresponding to the chromosomal crossing determine a polygon containing the pixels belonging to the overlapping domain. Even for a case as emblematic as a pair of crossing chromosomes forming a four-armed cross, there is ambiguity of a combinatorial nature when it comes to reassembling the pieces to reconstitute the two chromosomes [8]. This ambiguity is illustrated in Figure 2. This ambiguity necessites a decision. Grisan et al. developed a tree search from high resolution Q banded chromosomes to address this issue [6]. Successful results were reported on resolving chromosomes clusters [17] [16], on limited numbers of chromosome clusters extracted from images of metaphases, and in some cases on synthetic images combining chromosomes using Adobe CS [17]. Figure 2: Combinatorial issue when reassembling segmented parts of two crossing chromosomes. In this case three pairs, mutually exclusive, can be generated. Deep Learning for Image Segmentation Convolutional neural networks are popular for image segmentation. These include fully convolutional network [14], dilated convolutions [23], and encoder-decoder architectures [20] [3]. We propose to solve the overlapping chromosome problem by replacing geometric algorithms with methods from deep learning. Collection and Generation To create a segmentation solution to resolve overlapping chromosomes, we built a dataset for semantic segmentation using thousands of semi-synthetically generated overlapping chromosomes. 
Images of single chromosomes were extracted from an image of human metaphase hybridized with a Cy3 fluorescent telomeric probe [12]. Blue (DAPI) and orange (Cy3) components of the image of a single chromosome were combined into a greyscale image as shown in Figure 3. Then the resolution of the images were decrease by two. In each pair of chromosomes, each chromosome was rotated and one chromosome was relatively translated horizontally and vertically to the other one. The overlapping chromosomes were generated by meaning the two grey scaled images of the chromosomes. The so-called ground-truth labels were generated by adding the mask of each single chromosome. By choosing the value 1 for the mask of the first chromosome and the value 2 for the mask of the other chromosome, the label of the overlapping domain has the value 3. Only pairs with ground-truth containing overlapping domains were kept. Raw images of metaphasic chromosomes, dataset and a jupyter notebook are available from kaggle or from dip4fish blog [9], [10], [11]. Description of the Dataset The final data set is comprised of about thirteen thousand grayscale images (94 x 93 pixels). For each image, there is a ground truth segmentation map of the same size, as shown in Figure 4. In the segmentation map, class labels of 0 (shown as black) correspond to the background, class labels of 1 (shown as red below) correspond to non-overlapping regions of one chromosome, class labels of 2 (show as green) correspond to non-overlapping regions of the second chromosome, and labels of 3 (shown as blue) correspond to overlapping regions. Figure 4: Sample of overlapping chromosomes input image and ground-truth label Preprocessing A few erroneous labels of 4 were corrected to match the label of the surrounding pixels. Mislabels on the non-overlapping regions, which were seen as artifacts in the segmentation map (example in Figure 5), were addressed by assigning them to the background class unless there were at least three neighboring pixels that were in the chromosome class. The images were cropped to 88 x 88 pixels so that the dimensions were divisible by 2, which helped processing in the pooling layers of the neural network. Figure 5: An initial data pre-processing step was performed on segmentation maps that had artifacts Methods and Model Architecture One simple solution is to classify pixels based on their intensity. Unfortunately, when histograms of the overlapping region and the single chromosome regions are plotted, as shown in Figure 6, there is significant overlap between the two histograms. Thus, a simple algorithm based on a threshold pixel intensity value would perform poorly. Figure 6: Histogram of pixel vales A convolutional neural network was created for this problem, illustrated in Figure 7. The deep learning solution used for this problem was inspired by U-Net, a convolutional neural network for image segmentation that was demonstrated on medical images of cells. The model for overlapping chromosomes was designed so that the output segmentation map has the same length and width as the input image. To reduce computation time and storage, the model was also simplified, with almost a third fewer layers and blocks. This is because the dimensions of the input image are small (an order of magnitude smaller than the input to U-Net) and thus too many pooling layers is undesirable. 
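Before continuing with the architecture choices, here is a minimal sketch of the semi-synthetic pair generation described at the start of this section: each single-chromosome grey-level image is rotated, one is translated relative to the other, the grey levels are averaged, and the two masks (labelled 1 and 2) are added so that the overlap automatically receives label 3. The helper names, the use of intensity thresholding to recover the masks, and the interpolation settings are illustrative assumptions, not the authors' actual pipeline:

import numpy as np
from scipy.ndimage import rotate, shift

def make_overlapping_pair(chrom_a, chrom_b, angle_a, angle_b, dy, dx):
    # chrom_a, chrom_b: 2D float grey-level images of single chromosomes, same shape, background = 0
    a = rotate(chrom_a, angle_a, reshape=False, order=1)
    b = rotate(chrom_b, angle_b, reshape=False, order=1)
    b = shift(b, (dy, dx), order=1)          # translate one chromosome relative to the other
    image = (a + b) / 2.0                    # grey levels of the pair are averaged
    mask_a = (a > 0).astype(np.uint8)        # label 1: first chromosome
    mask_b = (b > 0).astype(np.uint8) * 2    # label 2: second chromosome
    label = mask_a + mask_b                  # label 3 appears automatically in the overlap
    return image, label

def has_overlap(label):
    # keep only pairs whose ground truth actually contains an overlapping domain
    return bool(np.any(label == 3))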
Furthermore, the set of potential objects in the chromosome images is small and the set of potential chromosome shapes is also quite limited, which reduces the scope of the problem and thus the modeling needs. Also, cropping was not done within the network and padding was set to be 'same'. This was because given the small input image, it was undesirable to remove pixels. Since the problem was not straightforward, various architectures were investigated and the design of the model went through several iterations. These investigations included encoding the class labels as integers, using one-hot encodings, combining the classes of the non-overlapping regions, treating each chromosome separately, using or not using class weights, trying different activation functions, and choosing different loss functions. The model was trained on 64% of the data, validated on 16% of the data, and tested on the last 20% of the data. Results Visualizations of the input, ground truth, and model predictions are shown in Figure 8. To quantitatively assess the results, the intersection over union (IOU, or Jaccard's index) is calculated. The model is able to achieve an IOU of 94.7% for the overlapping region, and 88.2% and 94.4% on the two chromosomes. The deep learning model resulted in IOU scores of up to 94.7% on overlapping chromosomes. To improve the prediction results, the data set can be supplemented with images of single chromosomes and more than two overlapping chromosomes. Data augmentation can also include transformations such as rotations, reflections, and stretching. Additional hyperparameters can also be explored, such as sample weights, filter numbers, and layer numbers. Increasing convolution size may improve misclassification between the red and green chromosomes. To build a production system that can operate on entire microscope images, the model proposed in this paper can be combined with an object detection algorithm. First, the object detection algorithm can draw bounding boxes around chromosomes in an image. Then, an image segmentation algorithm, based on the model presented here, can identify and separate chromosomes.
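For reference, the intersection-over-union (Jaccard) scores quoted above can be computed per class with a few lines of NumPy; the only assumption here is that predictions and ground truth are integer label maps with 0 = background, 1 and 2 = the two chromosomes, and 3 = overlap:

import numpy as np

def iou_per_class(pred, truth, classes=(1, 2, 3)):
    scores = {}
    for c in classes:
        p, t = (pred == c), (truth == c)
        union = np.logical_or(p, t).sum()
        scores[c] = np.logical_and(p, t).sum() / union if union else float("nan")
    return scores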
2017-12-20T18:48:41.000Z
2017-12-20T00:00:00.000
{ "year": 2017, "sha1": "490715d5815e625effc84e61a6d106135630e21f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "490715d5815e625effc84e61a6d106135630e21f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Computer Science", "Biology", "Mathematics" ] }
18295903
pes2o/s2orc
v3-fos-license
Physical and Cognitive Performance of the Least Shrew (Cryptotis parva) on a Calcium-Restricted Diet Geological substrates and air pollution affect the availability of calcium to mammals in many habitats, including the Adirondack Mountain Region (Adirondacks) of the United States. Mammalian insectivores, such as shrews, may be particularly restricted in environments with low calcium. We examined the consequences of calcium restriction on the least shrew (Cryptotis parva) in the laboratory. We maintained one group of shrews (5 F, 5 M) on a mealworm diet with a calcium concentration comparable to beetle larvae collected in the Adirondacks (1.1 ± 0.3 mg/g) and another group (5 F, 3 M) on a mealworm diet with a calcium concentration almost 20 times higher (19.5 ± 5.1 mg/g). Animals were given no access to mineral sources of calcium, such as snail shell or bone. We measured running speed and performance in a complex maze over 10 weeks. Shrews on the high-calcium diet made fewer errors in the maze than shrews on the low-calcium diet (F1,14 = 12.8, p < 0.01). Females made fewer errors than males (F1,14 = 10.6, p < 0.01). Running speeds did not markedly vary between diet groups or sexes, though there was a trend toward faster running by shrews on the high calcium diet (p = 0.087). Shrews in calcium-poor habitats with low availability of mineral sources of calcium may have greater difficulty with cognitive tasks such as navigation and recovery of food hoards. Introduction Chronic acidic deposition, which results from air pollution, increases environmental exposure to toxins and depletes important nutrient cations, including calcium [1,2]. Habitats affected by acidic deposition often have low or reduced abundance of high-calcium invertebrate animals, including snails [3][4][5][6]. Snail shells are an important source of calcium for passerines and reduced snail density may result in increased eggshell deformities and population declines [4,7]. Tree swallows (Tachycineta bicolor) experience reduced fitness and altered foraging behavior in areas with low calcium, resulting in longer search times and greater predation risk [8]. Calcium content of invertebrates in forests with calcium-rich soils is greater than invertebrates found associated with calcium-poor soils [9]. Perhaps to compensate for lower calcium availability in invertebrates, passerine birds consume more oak (Quercus spp.) buds in areas with low soil calcium. In poorer soils, calcium levels were higher in oak buds as compared to all other invertebrate taxonomic groups, except for spiders [7]. The need to supplement diets with hardwood buds, in areas depleted of calcium, might interestingly exacerbate losses attributed to white-tailed deer (Odocoileus virginianus) browsing in northern forests [10,11]. The physiological calcium requirement of birds generally is 10 -15 times that of mammals [12], underpinning the vast amount of research on avian diet and physiology in habitats with low calcium availability. The use of supplemental calcium by mammals is less well understood. However, a deficiency of dietary calcium may limit reproduction and development among insectivorous bats in nature [13][14][15]. Indeed, periodic deficiencies in dietary calcium generally may exist for mammals that rely on invertebrate foods [14]. Non-volant insectivores, such as shrews, may be more vulnerable to local calcium deficiency than birds and bats of similar size because they are more closely tied to their local habitats. 
Northern short-tailed shrews (Blarina brevicauda) apparently use snails heavily in some regions [16]. Decreased dietary calcium availability has been shown to retard growth [17] and decrease motor performance [18] in laboratory rodents. Female round-eared elephant shrews (Macroscelides proboscideus) supplemented with dietary calcium displayed higher density of bone calcium and enhanced reproduction [19]. Limited calcium intake was associated with reduced fecundity in the California vole (Microtus californicus) in nature, and females of this species preferentially ate high-calcium foods during the reproductive season [20]. Calcium-deficient diets may impair the cognitive abilities of mammals, potentially reducing their capacity to learn, forage, acquire mates, avoid predators, and navigate efficiently [21][22][23]. Recognition of environmental landmarks, which is dependent upon spatial memory, can have important consequences for survival and reproduction [24]. The retrieval of food hoards depends upon accurate spatial memory. Calcium deficiency has been observed to severely limit the cognition of female Norway (Wistar) rats (Rattus norvegicus) [23]; and calcium-dependent protein kinases (PRKCs) are significant predictors of spatial memory and behavior [21,25]. When Norway (Sprague-Dawley) rats were exposed to radiation that impaired bodily PRKC function, memory formation was adversely affected [25]. Our aim was to better understand the implications of calcium depletion on shrews. The least shrew (Cryptotis parva) inhabits the forest-floor and consumes invertebrate prey. It is one of the most widespread shrew species in North America [26] and a well-developed laboratory model [27]. More recently shrews have been used as models of bioaccumulation to test environmental changes in terrestrial systems, likely due to their high metabolic rates and constant foraging behavior [28,29]. Cryptotis parva has a high metabolic rate, making them likely responsive candidates to environmental change [30]. We studied the physical and cognitive performance of least shrews maintained on diets that differed in calcium availability. Because the least shrew is known to hoard food [31], spatial memory might be particularly important in meeting the high energetic requirements of this species in an environmentally sensitive area. Animal Husbandry and Diet Our shrews were descendants of a least shrew colony originating from Boone County, Missouri in 1966 [32]. Shrews were marked with passive integrated transponder tags (Biomark, Inc., Boise, Idaho) for unique identification. Least shrews were maintained on a 12:12 L:D cycle and bred throughout the year. Animals were maintained in the Colgate University vivarium on a mixture of laboratory insectivore diet (Lab Diet Advanced Protocol  Insectivore Diet; crude protein ≥ 28.0%, Ca 1.4%), commercial cat food, and spring water. All procedures followed approved Colgate University Institutional Animal Care and Use protocols. Twenty shrews were randomly selected from our colony using random number generation and assigned to two dietary calcium groups: a high-calcium group and a low-calcium group. Random selection was continued until there were 5 females and 5 males in each group. Two males from the high-calcium group died early in the experiment due to unknown causes, necropsies were performed and no abnormalities were noted. As a result, data related to these animals were disregarded. 
All animals were maintained on the same diet, as described above, for two weeks prior to trial implementation and were fed ad libitum [31]. Experimental diets were prepared by raising mealworms (Grubco, Inc., Fairfield, Ohio) on chick starter. Mealworms for the low-calcium diet were raised on chick-starter alone; mealworms for the high-calcium diet were raised on chick starter with 8% (by mass) reagent grade CaCO 3 [33]. Mealworms were raised on these media, along with apple slices for moisture, for >48 h prior to homogenization and storage at −8 °C until use. Calcium concentrations of both diets were analyzed elementally using inductively coupled plasma-atomic emission spectroscopy following wet digestion. The high-calcium mealworm diet had a calcium concentration that was almost 20 times that of the low-calcium diet ( Table 1). The level of calcium in the low-calcium diet (1.10 ± 0.34 mg/g) was comparable to the calcium concentration in a large assortment of adult beetles from Michigan (1.05 ± 0.05 mg/g [34]) and similar to the level of calcium in assorted beetle larvae collected from a site in Herkimer County, New York (3.39 ± 0.87 mg/g, n = 7, unpublished data). The low-calcium diet also was slightly lower than the calcium concentration in our maintenance diet. Shrews were deprived of food for 5 h prior to all trials to increase the motivating effect of a food reward [35]. Mass (g) of shrews at the start and end of the experimental period were recorded. Performance Assays Our running trials and complex-maze assay followed that of Punzo and Chavez [35]. Running speed was measured on a 4 m circular, closed plywood track ( Figure 1A). Shrews were placed inside the track using a conical plastic tube for transfer. A 25-mL plastic culture dish, partially filled with mealworms, was placed in front of the plastic tube for reinforcement. When the shrew entered the track, the plastic tube was withdrawn and the animal was coaxed around the track by gentle prodding (no physical contact) with a padded wooden dowel to prohibit exploration [35]. Stopwatches recorded the time necessary to complete one lap of the 4 m track. After completing one lap around the track, shrews were allowed to consume mealworms before being returned to the plastic tube for relocation to the holding cage. The track was disinfected with unscented soap and water between trials to reduce olfactory cues. Each shrew completed a set of 5 trials, with 5 min of rest between trials, on each of 2 days every 2 weeks. Thus, each shrew completed 10 trials every 2 weeks, for a total of 60 trials over the 10 weeks study (Week 0, Week 2, Week 4, Week 6, Week 8, and Week 10). Shrews were tested in random order, with a new random order determined each testing period. Data were averaged across the 10 trials within a testing period for each animal to provide a single replicate observation for each animal every 2 weeks. A complex maze ( Figure 1B) was constructed following the published diagram in Punzo and Chavez [35], which was used successfully by these authors to assess spatial learning in C. parva of different ages. The maze was 45 cm × 60 cm with channels constructed from white acrylic and a clear acrylic top. The maze contained five 5 cm blind alleys and start and goal boxes with removable sliding acrylic gates. The goal box contained a dish filled with mealworms as a reward. Shrews were placed in the starting box for roughly 5 min to allow habituation. The number of errors was recorded during each trial. 
An error was recorded when the entire body of the shrew, minus the tail, entered a blind alley [35]. The trial ended when the shrew reached the goal box. Each shrew was subjected to 10 trials every two weeks during the 10 weeks study for a total of 60 trials. Trials were considered subsamples within each 2 weeks period. Shrews were tested in random order as indicated above for running trials. The track was disinfected with unscented soap and water between animal trials to reduce olfactory cues. Data Analysis Two-way repeated-measures analysis-of-variance, a test robust to unbalanced design, was used to evaluate the influence of dietary calcium, sex, and diet × sex interaction on each of running speed and maze-error rate. Analyses were performed using SPSS ® (version 14.0 for Windows). Residuals were examined for normality after models were fit to the data. The Greenhouse-Geisser correction to degrees of freedom was used for factors in the model involving time [36]. GPower [37] was used to test for effect size on sex and diet treatments. Running Track Trial Shrews ran increasingly faster over the 10 weeks of the experiment (F 3.5,49.1 = 43.8, p < 0.001), presumably as they became more proficient at this assay. Shrews completed the course at approximately 1.2 km h −1 at the beginning of the experiment and at approximately 2.0 km h −1 at the end ( Figure 2). Improvement in performance over time was not affected by diet or sex (all interactions p > 0.05). Running speed was not affected by diet (F 1,14 = 3.4, p = 0.087) or sex (F 1,14 = 2.0, p = 0.18), though there was a tendency for shrews to run faster on the high calcium diet (Figure 2). Complex Maze Trial Shrews made fewer errors in the maze trial over time during the 10 weeks of the experiment (F 3.6,49.9 = 21.7, p < 0.001), but the rate of improvement in performance was not affected by diet or sex (all interactions p > 0.05). Shrews maintained on a high calcium diet made fewer errors than those maintained on a low calcium diet (F 1,14 = 12.8, p = 0.003; Figure 3). Also, females made fewer errors (11.8 ± 0.13) than males (12.4 ± 0.15; F 1,14 = 10.2, p = 0.006). Shrew Mass Fluctuation Shrews in the low calcium diet lost mass over the course of the experiment with males and females losing 3.46% and 8.56% of their starting mass, respectively (Table 2). Contrastingly, shrews in the high calcium diet gained 2.65% and 0.65% of their body mass, among males and females respectively. Discussion To efficiently forage, avoid predators, and reproduce, mammals must properly perceive their environment and recollect the location of foods, safe places, and mates [24]. Healthy diets, complete with normal levels of dietary calcium ensure adequate strength of the musculoskeletal system, as well as proper neurogenesis, particularly in the hippocampus in mammals [38,39]. The hippocampus is an area of the brain associated with learning and sensory reception from the environment [40]. It is in this brain area that the conversion of short-term to long-term memory occurs [40,41]. Various vitamins and minerals are essential to proper hippocampal functioning. In particular, low levels of dietary calcium have been associated with reduction of bone density [42], cardiac disease [43], mood disorders and cognitive deficits [44], in addition to loss of balance [45] in numerous species. This experiment set out to test whether lower levels of dietary calcium affected performance of least shrews in speed and spatial navigation trials. 
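The original analysis described above was a two-way repeated-measures ANOVA run in SPSS with the Greenhouse-Geisser correction; it cannot be reproduced exactly in a few lines of open-source code, but a rough analogue is a mixed linear model with diet, sex and week as fixed effects and a random intercept per shrew. The data frame below is a synthetic placeholder with the same layout as the experiment (18 shrews, six testing periods), not the authors' data, and no sphericity correction is applied:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for shrew in range(18):                                   # 8 high-calcium and 10 low-calcium animals
    diet = "high" if shrew < 8 else "low"
    sex = "F" if shrew % 2 == 0 else "M"
    for week in (0, 2, 4, 6, 8, 10):
        errors = 14 - 0.3 * week + (0.5 if diet == "low" else 0.0) + rng.normal(0, 0.4)
        rows.append(dict(shrew_id=shrew, diet=diet, sex=sex, week=week, errors=errors))
df = pd.DataFrame(rows)

model = smf.mixedlm("errors ~ diet * sex + week", df, groups=df["shrew_id"])
print(model.fit().summary())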
Exercise has been shown to negate dietary deficiencies in vital minerals and nutrients [46,47]. Our shrews were run on a track over the course of the 10 week study and increased their speeds, regardless of trial and gender, in all but the last week. Support for this finding comes from rodent treadmill tests where enhanced performance in memory and swimming tasks was observed [46,48]. It is possible that the positive performance effects of regular exercise negated the negative effects of dietary calcium restriction in speed trials. Shrews in the high calcium diet had a tendency to run faster in trials, although not with statistical significance. It is possible that balance, shown to increase with calcium intake and increase locomotor performance, was increased in these animals as they performed this task. Researchers have shown that diets enhanced with whey, calcium, and vitamin D increase both rates of insulin receptor expression in muscles and lipid oxidation [49], as well as reduce inflammatory stress [50], which suggests a fitness benefit to dietary calcium supplementation. Laboratory maze trials can provide an ecologically relevant way to examine spatial perception and recollection [51]. Least shrews are fossorial animals that inhabit the interface of soil and plant litter in a variety of natural habitats [26]. Researchers have noted that fossorial animals make effective spatial orientation decisions when expending energy constructing tunnel systems and avoiding physiological stressors (e.g., overheating; [52,53]). Shrews likely orient themselves in space using olfactory, tactile, and visual cues [54]. In their natural environment, shrews experience mortality from avian and mammalian predators [55] and are likely most exposed to predation when traveling outside of the nest. Thus, properly recalling the location of food caches, nests, and other resources minimizes travel time and predation risk. Known scatter hoarders such as the Merriam's kangaroo rat (Dipodomys merriami) are more efficient spatial navigators as compared to the Great Basin kangaroo rat (D. microps), which shows preference for leaves [56]. Least shrews are known to larder hoard, stowing disabled prey at various distances from their nest depending on quality [31]. Like Punzo and Chavez [35], we found that shrews completed our maze with a decreasing number of errors over time, demonstrating an ability to learn the course of the maze and remember it from one week to the next. Similarly, rats that were fed low calcium diets experienced reduced proficiency in memory and learning tasks, but not motor performance [23], comparable to our findings with least shrews. Learning and memory are not synonymous, as learning can occur in numerous ways (e.g., habituation or conditioning) and requires input from sensory modalities. Memory, by contrast, is the storage of information received from the senses [57]. The enhanced performance in speed and maze tasks, noted at the start of our trials, could be the result of a short-term memory response. Two types of memory, specifically declarative and nondeclarative, might also explain the early increase in response of our shrews to trials, as declarative memory results from associations following one trial, whereas nondeclarative memory (learning) results after numerous trial exposures [57]. It is possible that the initial improved response observed in our shrews at the start of trial was the result of declarative memory and the later increase in performance a result of nondeclarative memory. 
Research on the role of intracellular calcium in learning has been addressed at length [58][59][60][61]; however, few studies have been conducted to support the connection between dietary calcium and memory. Many female mammals, including shrews, must satisfy large nutritional requirements by foraging away from the nest when offspring are still dependent on lactation [62,63]. Thus, navigational errors might have larger negative consequences for females than for males if these errors delay return to offspring. Females made fewer errors in our maze trials than males; however caution must be taken when interpreting these results due to the small sample size resulting from male-biased mortality during the experiment. We acknowledge that our power to detect treatment effects was hindered by low sample size and high variability among individuals. For example, our statistical power to detect the effect of diet on running speed was estimated at 0.64. Thus, it is likely that work with a greater number of individuals might elucidate additional effects of a low-calcium diet. Most studies have found a male advantage in spatial learning and navigation [64][65][66] and some attribute this to organizational effects resulting from surges in steroid hormones [67,68]. Meta-analyses of gender-specific differences in learning and spatial memory reveal a species-specific difference in performance [69]. Galea et al. [70] noted that male deer mice (Peromyscus maniculatus) and meadow voles (Microtus pennsylvanicus) outperformed females in maze trials. Similarly, reproductive male rats have outperformed reproductive females in both the Morris water maze [71] and in radial arm mazes [72], perhaps because they are generally more active [73]. One suggested explanation for gender difference in performance is that males use not only landmarks, but also geometry as they navigate in land and water mazes, which might give them the advantage in water trials over females [74]. Contrastingly, radial mazes often reward participants in the same location, which would be to the benefit of females who recognize quickly landmark cues. Gender differences in performance in water versus radial arm mazes are known to arise from the reward motivation (i.e., food, escape from water), which might be perceived with varied levels of urgency [75]. Other researchers have suggested that outcomes may differ between radial arm and water mazes because the former assesses short-term and long-term reference memory, as opposed to short-term working memory. Radial arm mazes appear to lessen an animal's stress level by constraining their searches to limit decisions once the first arm selection has been made [76]. Although gender differences are widespread in maze trial performance, we agree with other research that posits the ultimate factor influencing performance is likely stress-induced reduction in neurogenesis, which often negatively affect working memory and recognition of items among group-housed male, not female, rats [77]. Changes in neurochemistry, resulting from increases in estrogen, has been shown to enhance spatial working memory in dry-land radial arm maze trials in females [78]. More research is needed on sex differences in navigation among shrews both in the field and lab. In the northeastern United States, calcium depletion is occurring in high elevation forests receiving acid rain, and this environmental stressor might reduce viability of populations requiring this nutrient [6,79,80]. 
In this experiment, the reduction in cognitive performance in least shrews represents a subtle physiological mechanism by which this species might be disadvantaged in calcium-limited environments. Birds have been found to experience reduced rates of reproduction in calcium-limited environments [3,81], and there is growing evidence that acid-induced calcium depletion is associated with the decline of insectivorous migrant songbirds in North America [7,82]. Insectivorous mammals, which also have high calcium requirements during reproduction, could be similarly affected by acid deposition. Our results suggest that in the absence of calcium-rich materials, such as snail shells and bone, shrews might have more difficulty locating food hoards, mates, nests, as well as other ecologically relevant destinations in environments with low calcium availability. Conclusions When placed on a diet with restricted calcium, which simulated conditions in areas of acid deposition, Cryptotis parva were less successful in maze trials than animals maintained on a diet with more calcium. Mammals inhabiting areas with low and declining calcium availability, due to acidic deposition, may experience poor spatial memory and learning. These sublethal effects, which may not be obvious in short-term animal surveys, may nevertheless have negative consequences on reproduction and survival. Our study lends support to the usefulness of shrews as model organisms in behavioral studies. Shrews differ from rodents behaviorally, physiologically, and ecologically. Their high metabolic rate and short generation time may make them particularly useful model vertebrates in studies of environmental change.
2016-05-02T06:50:51.340Z
2012-08-16T00:00:00.000
{ "year": 2012, "sha1": "0ff72f6f237930efd4079e096843b9174a21f177", "oa_license": "CCBY", "oa_url": "http://www.mdpi.com/2076-328X/2/3/172/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0ff72f6f237930efd4079e096843b9174a21f177", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
244305281
pes2o/s2orc
v3-fos-license
Probing guided monolayer semiconductor polaritons below the light line In this work, we demonstrate an approach to study exciton-polaritons supported by transition metal dichalcogenide monolayers coupled to an unstructured planar waveguide below the light line. In order to excite and probe such waves propagating along the interface with the evanescent fields exponentially decaying away from the guiding layer, we employ a hemispherical ZnSe solid immersion lens (SIL) precisely positioned in the vicinity of the sample. We visualize the dispersion of guided polaritons using back focal (Fourier) plane imaging spectroscopy with the high-NA objective lens focus brought to the center of SIL. This results in the effective numerical aperture of the system exceeding an exceptional value of 2.2 in the visible range. In the experiment, we study guided polaritons supported by a WS2 monolayer transferred on top of a Ta2O5 plane-parallel optical waveguide. We confirm room-temperature strong light-matter coupling regime enhanced by ultra-low intrinsic ohmic and radiative losses of the waveguide. Note that in the experiment, total radiative losses can be broadly tuned by controlling SIL-to-sample distance. This gives a valuable degree of freedom for the study of polariton properties. Our approach lays the ground for future studies of light-matter interaction employing guided modes and surface waves. Introduction Rapid development of the field of all-optical devices such as optical switches and transistors boosts the search for highly nonlinear optical systems. Those operating in the regime of strong light-matter coupling show great promise. Monolayers of transition-metal dichalcogenides (TMDs) such as WS 2 , WSe 2 , MoS 2 and MoSe 2 exhibit direct band gaps [1] and are suitable for chip integration [2]. TMD interaction with light is dominated by quasiparticles -excitons with binding energies of the order of 100 meV, large oscillator strength [3] and ability to strongly couple to optical cavities and form polaritons. TMD polaritons were studied in various optical systems, such as plasmonic cavities [4], Bragg mirrors [5] and subwavelength gratings [6,7]. Most of the systems studied so far require relatively complicated fabrication processes, while their designs allow for radiative coupling to free-space waves with a fixed efficiency. In our work, we realize a simple hybrid planar TMD-based waveguide and demonstrate an approach for the study of polaritons intrinsically uncoupled from free-space waves propagating below the light line. Results and discussion In the experiment, we employ back focal (Fourier) plane spectroscopy setup combined with solid immersion lens (SIL) [8,9] schematically shown in Fig. 1(a). We use a high numerical aperture objective lens (Mitutoyo, M Plan Apo HR, 100×, NA obj = 0.9) in combination with ZnSe (refractive index n ZnSe ≈ 2.5 in the visible range) prism coupled to the sample with a precisely controlled air gap in Otto geometry [10]. Such configuration enables the resulting numerical aperture of NA = NA obj n ZnSe ≈ 2.25 and allows for excitation and detection of propagating modes below the light line (see Fig. 1(e)). During the measurement, the sample was excited by a white light halogen lamp (Ocean Optics HL-2000FHSA). The angle-resolved reflectivity spectra were measured by a slit spectrometer (Princeton SP 2500, CCD camera PyLoN 400BReXcelon). The sample was attached to a piezo positioner, which allowed for precise control of the SIL-tosample gap. 
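A back-of-the-envelope check of the collection geometry described above, namely the effective numerical aperture and the window of normalized in-plane wavevectors that corresponds to non-leaky guided modes. The ZnSe and objective values are those quoted in the text; the SiO2 index is a typical assumed value:

NA_obj = 0.9            # objective numerical aperture
n_ZnSe = 2.5            # SIL refractive index in the visible (nominal)
n_SiO2 = 1.45           # substrate cladding index (typical value, assumed)

NA_eff = NA_obj * n_ZnSe
print(f"effective NA ~ {NA_eff:.2f}")     # ~2.25, well beyond the air light line

def accessible(k_par):
    # k_par = k_x / k0; the mode is truly guided above the substrate light line
    # and still reachable through the SIL below the effective NA
    return n_SiO2 < k_par < min(n_ZnSe, NA_eff)

print(accessible(1.6), accessible(2.4))   # True, False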
The size of the gap was directly related to the radiative losses of the mode induced by SIL and thus allowed controlling mode excitation and detection efficiency. The sample we investigated in this work consists of a WS 2 monolayer placed on a 90-nm Ta 2 O 5 waveguide on top of SiO 2 /Si substrate. The WS 2 monolayer was exfoliated from a bulk crystal and transferred on top of the Ta 2 O 5 layer. A schematic representation of our device with a SIL (ZnSe prism) attached is shown in Fig. 1(b). The typical field profile and dispersion of a guided mode is shown in Fig. 1(d) and (e), respectively. For a non-leaky mode, the experimentally available in-plane wavevectors k || satisfy the condition n SiO 2 < k || /k 0 < n ZnSe , where k 0 is the absolute value of free-space wavevector (see Fig. 1(c, e)). Since the mode field is exponentially decaying away from the waveguiding layer, it can be only coupled to free space by a high-index prism brought in a close vicinity to the sample. Fig. 2 shows simulated (a, c) and measured (b, d) angle-resolved reflectivity maps for the sample without (a, b) and with (c, d) TMD monolayer obtained for a 150 nm air gap between sample and SIL. The simulations were performed using Fourier modal method [11]. The central part of the figure marked with red corresponds to small wavevectors k x /k 0 < n SiO 2 and contains strong Fabry-Pérot interference due to reflection from Si substrate. The remaining blue regions correspond to the regime of total internal reflection, while the dips in in these regions are associated with guided TE-polarized modes. The top row in Fig. 2 shows the dispersion of TE waveguide mode, which is efficiently coupled to free-space waves for an air gap between SIL and the bare waveguide of around 150 nm. When coupled to exciton resonance (Fig. 2, bottom row), the waveguide mode experiences Rabi splitting indicating the onset of strong light-matter coupling regime. Fig. 3 shows a magnified reflectivity map containing anticrossing of the modes. From this picture, we can estimate the Rabi splitting to be of the order of few tens of meV. Conclusions We have demonstrated a new approach for the study of guided exciton-polaritons in TMD-based planar photonic structures at room temperature. We have observed strong coupling between excitons in WS 2 monolayer and Ta 2 O 5 waveguide with Rabi splitting of the order of several 10s of meV. Our results provide a basis for future investigations of radiative/non-radiative losses, lifetimes, and nonlinearities of TMD-based guided polaritons. Figure 3. Magnified exciton-photon splitting region: simulation (a) and experiment (b). White dashed line is a guide for the eye. The same region is highlighted in Fig. 2. Acknowledgments The theoretical part of this study was supported by Russian Science Foundation, grant no. 21-12-00218. The experimental work was funded by RFBR according to project no. 19-52-51010.
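As a supplement to the anticrossing in Fig. 3 discussed above: a Rabi splitting is commonly read off such maps with the standard two-coupled-oscillator model, sketched below with losses neglected. The exciton energy and coupling strength are illustrative numbers on the "tens of meV" scale reported here, not fitted values from this work:

import numpy as np

def polariton_branches(E_cav, E_exc, g):
    # upper/lower polariton energies; the splitting at zero detuning is 2*g (the Rabi splitting)
    mean = 0.5 * (E_cav + E_exc)
    root = np.sqrt(g**2 + 0.25 * (E_cav - E_exc) ** 2)
    return mean + root, mean - root

E_exc = 2.01                            # WS2 A-exciton energy, eV (illustrative)
g = 0.015                               # half the Rabi splitting, eV (illustrative)
E_cav = np.linspace(1.95, 2.07, 7)      # bare waveguide-mode energy swept through resonance
print(polariton_branches(E_cav, E_exc, g))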
2021-11-18T20:06:47.346Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "ccc31db77229b323817e992d6ebdfb4f4aa2a627", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/2015/1/012069/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ccc31db77229b323817e992d6ebdfb4f4aa2a627", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258547911
pes2o/s2orc
v3-fos-license
Additive Manufacturing of Thermoelectric Microdevices for 4D Thermometry Thermometry, the process of measuring temperature, is one of the most fundamental tasks not only for understanding the thermodynamics of basic physical, chemical, and biological processes but also for thermal management of microelectronics. However, it is a challenge to acquire microscale temperature fields in both space and time. Here, a 3D printed micro‐thermoelectric device that enables direct 4D (3D Space + Time) thermometry at the microscale is reported. The device is composed of freestanding thermocouple probe networks, fabricated by bi‐metal 3D printing with an outstanding spatial resolution of a few µm. It shows that the developed 4D thermometry can explore dynamics of Joule heating or evaporative cooling on microscale subjects of interest such as a microelectrode or a water meniscus. The utilization of 3D printing further opens up the possibility to freely realize a wide range of on‐chip, freestanding microsensors or microelectronic devices without the design restrictions by manufacturing processes. Introduction The temperature that expresses the degree of hotness or coldness of a substance is a fundamental physical quantity governing various physical, chemical, and biological processes [1,2] that are not DOI: 10.1002/adma.202301704 only important for scientific research but also for our daily life from healthcare to climate change. [3,4] The thermometer, an apparatus for measuring temperature, operates in the principle of detecting the change of various physical quantities -, e.g., the volume of a gas or liquid, the resistance of metallic or semiconducting wires or thin films, the voltage across semiconducting diodes -in response to temperature variation. Despite continuous development, it is still a great challenge to improve the thermometer's sensitivity, spatial resolution, and scalability. Resolving 3D temperature fields with high spatio-temporal resolutions is particularly in great demand for exploring thermodynamics -, e.g., local heat production and dissipation -in various microscopic systems. [5][6][7][8][9][10] Among various types of thermometers, a thermocouple (TC) has advantages in its simple configuration and passive nature of the operation. The TC consists of a single junction formed by two dissimilar electrical conductors and the temperature at the junction is measured by the thermovoltage arising from the Seebeck effect. [11] Thus, it has a much simpler structure than the resistance-based thermometry configured with more than two junctions. Moreover, it does not require any external excitation unlike other methods employing resistance temperature detectors (RTDs), infrared thermometers, Raman thermometry, laser interferometers, or temperature-responsive fluorescent materials such as quantum dots and nitrogen-vacancy center diamonds. [10,[12][13][14][15][16][17] TC, therefore, provides the simplest thermometry with minimal sample disturbance. Extensive research has been made to miniaturize TC devices for high spatial resolution thermometry. One of the remarkable progresses is the invention of scanning thermal microscopy (SThM) that uses a thermocouple-integrated scanning nanoprobe to map in-plane temperature fields at the nanoscale. 
[18][19][20][21][22] The fabrication of the SThM probes, however, involves repetitive lithography, etching, and deposition processes, which limit the practicality, [23,24] and the measurement relies on a pointwise serial scanning manner with a narrow range of approximately 100 μm × 100 μm, and a few Hz scanning speed. To implement a parallelization scheme in thermometry, there have been several efforts to fabricate TC arrays in two-dimension (2D). [25][26][27][28][29] However, the 4D (4D = 3D space + 1D time) thermometry at the microscale has not been realized yet due to the technological challenges associated with the TC fabrication in three dimensions. 3D μ-Thermocouple Network Here, we develop a high-resolution bi-metal printing technique to build a 3D network of TC micro-probes suspended in air and demonstrate that the printed device can map a microscale temperature field in 4D (3D space + 1D time) with a millisecond temporal resolution. Figure 1a,b depicts the design of our device. Each TC consists of freestanding platinum (Pt) and silver (Ag) microwires forming an electrical junction acting as a temperature probe suspended in air and their reference ends are contacted to the substrate. (The substrate temperature, T R is monitored in real-time during the experiment for calibration) Figure 1a illustrates an exemplary 3 × 3 × 3 Pt-Ag TC array printed directly on a patterned electrode substrate, while Figure 1b depicts a configuration of the 3 TCs integrated vertically. The three sensing junctions, J 1 , J 2 , and J 3 (from top to bot-tom), are formed by joining 3 Ag microwires and 1 Pt microwire (acting as a common lead), which are assembled into a 3D network. Thanks to the Seebeck effect, the temperature difference between J 1 and the substrate T 1 − T R , generates a thermoelectric voltage across the Pt and Ag ends contacting to the substrate where S is the Seebeck coefficient of the Pt-Ag TC. Thus, the temperature at J 1 , T 1 = V 1 /S + T R is obtained by measuring V 1 . On the same principle, the temperatures at J 2 and J 3 are measured as T 2 = V 2 /S + T R and T 3 = V 3 /S + T R , respectively, as depicted in Figure 1b. As generalized, the temperature at the junction J xyz located at (x, y, z), T xyz is detected by measuring a thermoelectric voltage, V xyz across two reference ends of the corresponding Pt-Ag TC as follows, Simultaneous acquisition of the thermoelectric voltages for the 27 junctions enables microscale temperature mapping in three dimensions (see Experimental Section). ; c) A AgNP-ink-filled micropipette (diameter: ≈2-3 μm) has a vertical pulling process as a Pt wire printing process; d) A Ag wire is guided to the apex of the printed Pt microstructure to make a bi-metallic junction; e) The Ag wire growth is terminated horizontally at will, resulting a Pt-Ag TC junction. f) Electric current-voltage characteristics of an individual Pt-Ag TC probe before (blue line) and after (red line) annealing (inset: FE-SEM image of a printed TC probe, scale bar: 20 μm). g) Measured thermoelectric voltage as a function of the temperature difference. The dashed line indicates the Seebeck coefficient of the bulk Pt-Ag joint, 6.5 μV K −1 . The Seebeck coefficient of the printed TC is 4.9 μV K −1 . Figure 1c,d shows the photograph of our sensor chip and the field emission scanning electron microscope (FE-SEM) image of the 3D printed 3 × 3 × 3 TC array on the chip, respectively. 
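Rewriting the measurement relation spelled out in words above (each junction's thermovoltage is V_xyz = S*(T_xyz - T_R), so T_xyz = V_xyz/S + T_R), here is a minimal sketch of how the 27 simultaneously acquired channels map onto a 3 x 3 x 3 temperature field. The channel ordering, units, and reference temperature are assumptions for illustration, not the authors' acquisition code; S is the printed-TC value quoted above:

import numpy as np

S = 4.9e-6            # Seebeck coefficient of the printed Pt-Ag TC, V/K
T_R = 298.0           # substrate (reference) temperature, K (assumed; monitored during the experiment)

def voltages_to_temperature_field(V, shape=(3, 3, 3)):
    # V: 27 thermovoltages in volts, ordered junction-by-junction (assumed ordering)
    V = np.asarray(V, dtype=float).reshape(shape)
    return V / S + T_R                  # T_xyz = V_xyz / S + T_R

V = np.full(27, 55e-6)                  # e.g. a uniform 55 uV signal, roughly 11 K above the substrate
print(voltages_to_temperature_field(V))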
The pitches -the distances between two adjacent junctions -are 100 μm horizontally and 30 μm vertically, leading to temperature mapping at the microscale. For electric measurements, 5 nmthick chromium (Cr) and 100 nm-thick gold (Au) electrodes were deposited on the silicon oxide (SiO 2 )/silicon (Si) wafer through a patterned photoresist mask defined by mask-free photolithography. A total of 36 reference ends (27 Ag + 9 Pt wires) are connected to the data acquisition system by wire bonding for simultaneous measurements (see Experimental Section). The FE-SEM image and energy-dispersive X-ray spectroscopy (EDS) images in Figure 1e show the exterior and material composition of the printed 1 × 3 TC array which matches well with our design (Figure 1a,b). 3D Printing and Characterization of TC We discuss the 3D printing and basic characterization of the individual Pt-Ag TC. A series of optical micrographs in Figure 2a-e shows the detail of the meniscus-guided 3D printing procedure. [30,31] Glass micropipettes having a diameter of ≈5 μm are used as printing nozzles. One pipette is filled with Ag nanoparticles (NPs) (diameter ≤ 10 nm) suspended in tetradecane/xylene mixture for printing Ag microwires and the other is filled with Pt NPs (diameter ≤ 5 nm) suspended in xylene for printing Pt microwires. As shown in Figure 2a, as soon as the Pt NP-filled pipette contacts the substrate, a femtoliter ink meniscus is produced at the pipette-substrate gap. Pt NPs are rapidly accumulated in the meniscus under the evaporation of solvent, forming a solidified microstructure on a patterned microelectrode. Guiding the meniscus with a programmed path and speed www.advancedsciencenews.com www.advmat.de produces a freestanding Pt microwire (Figure 2b, printing speed: 2 μm s −1 ). The termination of the wire growth is done by increasing the pipette moving speed to 1 mm s −1 . This acceleration was consistently performed for every termination step. The same procedure is applied for the fabrication of a Ag microwire on a neighboring microelectrode at a distance of 20 μm away from the Pt microwire, as shown in Figure 2c. To create a Pt-Ag TC junction, the Ag microwire growth is guided toward the top of the Pt wire ( Figure 2d). Figure 2e illustrates the formation of the TC junction and the subsequent termination of Ag wire growth by a pipette movement with a speed of 1 mm s −1 in a horizontal direction. The printing process of the vertically arranged 1 × 3 Pt-Ag TCs is also shown in Movie S1 (Supporting Information). The physical connection of the TC junction was well formed ( Figure S1, Supporting Information), and its cross-sectional area is as small as 0.38 μm 2 . The heat conduction effect at the junction is neglected due to a high wire length/junction length ratio of over 145. The properties of the 3D printed TC have been characterized. One practical requirement for reliable and fast operation is to secure sufficient electrical conductivity. A two-step thermal annealing process at 100°C for 5 min and 180°C for 7 min was employed to increase the electrical conductivity of the printed TC by five to six orders of magnitude, improving response ( Figure 2f). During the annealing process, the nanoparticles constituting the printed TC were sintered, forming a solid mass. [32] The densification of the printed TC by annealing resulted in its volumetric shrinkage ( Figure S2, Supporting Information), while it did not affect the stability of the device operation. 
The mechanical robustness was examined by a compression test: The printed TC could withstand a compressive force of up to 4.82 μN ( Figure S3, Movie S2, Supporting Information). A breakage occurred at the TC's reference ends connected to the substrate. The Seebeck coefficient of the printed Pt-Ag TC was measured as 4.9 μV K −1 (relative standard deviation of 3.32%) (Figure 2g). The measured value is smaller than that of the bulk ≈ 6.5 μV K −1 possibly due to the defects generated during printing and/or the dimensional confinement, which was also found in other additive manufactured thermocouples. [33] The performance of the printed TC thermometer was characterized by performing a pulsed laser heating experiment depicted in Figure 3a. A focused laser beam with = 532 nm with a controlled pulse frequency by a mechanical chopper was illuminated to locally heat the TC junction. Figure 3b plots the thermoelectric voltage-time trace of the TC under a single, square waveform laser pulse with a power of 110 mW and a frequency of 10 Hz. The amplitude of the output voltage by the laser illumination was ≈ 55 μV, corresponding to 11.2 K temperature rise. The temperature resolution can be inferred from the noise level of the output voltage: The standard deviations (SD) of the output voltage with/without laser heating were 3.48 and 3.86 μV (corresponding to 0.71 and 0.79 K), respectively, demonstrating ≈ 0.8 K precision. The response times of the TC for heating on and off were measured as 0.36 and 0.35 ms, respectively. Figure 3c shows a laser power dependency. At 10 Hz, as the laser power increases from 50 to 110 mW, the amplitude of the output voltage increases from 20 to 55 μV, corresponding to the temperature rise from 4.1 to 11.2 K. We further increased the laser pulse frequency to 100, 1000, and to 1500 Hz for evaluating the time response. Figure 3d-f shows that the amplitudes of the output voltage obtained at those fre-quencies are identical to the one measured at 10 Hz. The above results indicate that the temporal resolution of our printed TC thermometer could reach a sub-millisecond level. 4D μ-Thermometry: Joule Heating Using our 3D TC array, we first investigated temperature fields produced by a Joule-heated microelectrode, which is a universal issue in microelectronics. As an example, we mapped dynamic temperature fields near a "U"-shaped copper (Cu) microwire (radius: 50 μm) suspended in the air with programmed Joule heating procedures using the printed 3 × 3 × 3 TC array, as shown in Figure 4a,b. The coordinate of the lowest position of the Cu wire is defined as (x, y, z) ≡ (0, 0, 0), which is 20 μm away from the top of the center TC at (0, 0, 20 μm). The Joule heating (i.e., the power P = I 2 R = IV) is precisely controlled and quantified by varying an electric current I and measuring a voltage V across the Cu wire. The generated heat then dissipates through the air with a finite thermal conductivity k and generates a temperature gradient, which can be spatially resolved by the TC array in three dimensions. To validate the measurement accuracy, a numerical simulation was performed using COM-SOL Multiphysics, as shown in Figure 4c. (The detailed procedure is described in the Supporting Information.) The temperature precision of our printed TC was ≈0.5°C, obtained from multiple measurements at a constant temperature environment ( Figure S4, Supporting Information). 
Figure 4d-g shows the 3D temperature field maps (voxel size: 100 × 100 × 30 μm 3 ) near the Cu microwire measured at different applied electric powers from 0.00, 0.55, 1.24, to 2.20 W. The resulting 3D temperature maps are well matched to our expectation following two general trends: 1) The temperature measured at each junction increases as the heating power increases. 2) The temperature decays with distance from the wire. The experimental result is also consistent with the simulation. The simulated 3D temperature fields at different applied electric powers from 0.00 (Figure 4h), 0.55 (Figure 4i), 1.24 (Figure 4j), to 2.20 W (Figure 4k) coincide with the measured ones with only a small difference, 0.44% on average and up to 6.8%. More quantitative analysis can be carried out to extract useful parameters such as the thermal conductivity of the air. For this, we plot the temperature as a function of the Joule-heating power and the distance along the z-axis at (x, y) = (0, 0) (Figure 4l). The heat transport equation in an isotropic and homogeneous media can be expressed as, where ⃗ q is the magnitude and direction of the heat flow per unit area, k is the thermal conductivity of air, and ⃗ ∇T is the temperature gradient. By assuming a simple 1D heat conduction in a cylindrical coordinate, one can calculate the temperature at a fixed position (x, y, z) = (0, 0, z) from the wire which will increase with the heating power as follows [34] : where T s (p l ) is the temperature of the surface of the Cu wire at a heating power per unit length p l and z 0 = 50 μm is the radius of the Cu wire. T s (p l ) depends on the mass and specific heat of the wire and the heat dissipation/convection through the environment, which require deeper mathematical calculations. To simplify the calculation, we estimated T s (p l ) and k as fitting parameters for the data shown as dashed lines in Figure 4l,m. The fitting curves are in good agreement with the measured (square dots) and simulated data (solid lines). From the slopes of the curves, the thermal conductivity of air is estimated as k = 0.0246 ± 0.0024 W mK −1 which is close to the reported value. [35] The value of k used in the COMSOL Multiphysics simulation (Figure 4h-k) also falls in this range (Supporting Information). Furthermore, we demonstrated that our 3D TC array can track the variation of 3D temperature fields in real-time, realizing the 4D micro-thermometry (Figure 5a). Here, a pulsed electric power (period: 112 s, duration: 56 s) configured with programmed amplitudes of 0.55, 1.24, and 2.20 W was applied to the Cu wire, as plotted in Figure 5b. The substrate temperature T R was measured with time ( Figure S5, Supporting Information) for calibration. Figure 5c temperature over time, demonstrating 4D micro-thermometry of the dynamic Joule-heating (Movie S3, Supporting Information). There exists an 8.05 s delay in the response when reading out 27 TCs using a multi-channel DAQ instrument. This can be improved further by using state-of-the-art fast electronics, which is beyond the scope of our study. 4D μ-Thermometry: Vaporous Space Our 4D micro-thermometry is basically non-invasive as it measures temperatures of the environment at designated points without any excitation. This enables us to study how the heat is dissipated to the air at different ambient conditions such as humidity, i.e., the concentration of water vapor in the air. 
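For completeness, the conductivity fit used in the Joule-heating analysis above can be sketched as follows, assuming the log-decay of steady one-dimensional cylindrical conduction away from a wire of radius z0 (whether Eq. (3) measures the coordinate from the wire surface or from its axis should be checked against the original). The measured profile T(z) is fitted for the wire-surface temperature and the slope A = p_l/(2*pi*k), from which k follows once the power per unit length is known; all numerical values below are illustrative placeholders, not the published data:

import numpy as np
from scipy.optimize import curve_fit

z0 = 50e-6                                 # Cu wire radius, m

def model(z, T_s, A):
    # T(z) = T_s - A * ln((z + z0)/z0), with A = p_l / (2*pi*k)
    return T_s - A * np.log((z + z0) / z0)

z = np.array([20, 50, 80, 110]) * 1e-6     # probe distances above the wire surface, m
T = np.array([100.0, 84.0, 74.0, 68.0])    # measured temperatures, deg C (illustrative)

(T_s, A), _ = curve_fit(model, z, T, p0=(110.0, 30.0))
p_l = 6.0                                  # heating power per unit wire length, W/m (assumed)
k = p_l / (2 * np.pi * A)
print(f"T_s = {T_s:.1f} C, k = {k:.4f} W/(m K)")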
It is important to understand the various humidity-related phenomena associated with the evaporation and condensation of water in diverse fields from science and engineering to public health. [36] In particular, the evaporation of water is known to be a simple yet effective approach to regulating heat transfer, used in a variety of engineering applications from power plant cooling to chemical processing. [37][38][39][40][41] We first investigated how ambient humidity influences heat dissipation from a Joule-heated Cu microwire via the printed 3 × 3 × 3 TC array. Figure 6a,b shows 3D temperature field maps of the ambient air near a Joule-heated Cu microwire (2.20 W) at relative humidities (RH) of 80% and 20%, respectively. We point out that our micro-thermometry can experimentally reveal the humidity effect. Our first observation is that the temperature at (x, y, z) = (0, 0, 20 μm), the probe point closest to the Joule-heated wire, varies with RH: 100 °C at 80% RH versus 96.1 °C at 20% RH. In addition, Figure 6c plots the temperature profiles along the z-axis (0, 0, z) from z = −5 to 110 μm at 80% RH (blue) and 20% RH (red) (square dots: experimental data, lines: simulation), reflecting their heat dissipation behaviors. We observed that the temperature decays faster at 80% RH than at 20% RH. These two findings can be explained by the lower thermal conductivity of water vapor compared with that of dry air, consistent with our simple model Equation (3), which shows ΔT ≡ T(z) − T(z = 0) ∝ −1/k: the heat dissipation from the Joule-heated microwire to the air becomes faster as RH decreases, lowering the temperature and slowing its decay. The successful investigations of microscale Joule heating demonstrated above open a new possibility to explore the thermal management of interconnects in 3D chip architectures. In addition to the heating dynamics, microscale evaporative cooling can also be investigated by the printed TC. For this, we placed and 3D-scanned a single TC probe near a water-filled glass microcapillary, as schematically illustrated in Figure 6d. When water evaporates from a meniscus formed in a microcapillary, the surrounding air cools down, decreasing the temperature. To observe the evaporative cooling effect clearly, the temperature of the substrate was set to 75 °C. Figure 6e plots a 3D temperature map of the air near the evaporating water meniscus formed in a microcapillary with an aperture diameter of 20 μm, obtained by 3D scanning of the printed TC probe (voxel size = 10 × 10 × 10 μm³). The temperature of the air is measured as 72.55 °C at the center of the meniscus (x, y, z) = (0, 0, 0) and increases with distance from the meniscus, clearly showing evaporative cooling. The evaporative cooling of water was also investigated by numerical simulation using COMSOL Multiphysics, as shown in Figure 6f. The difference between the experimental and simulated results was negligible, only up to 0.6%. Furthermore, scanning the TC probe successfully visualized how humidity influences evaporative cooling. As RH increases, i.e., as the water vapor concentration increases, the evaporation of water at the meniscus is suppressed. As a result, the evaporative cooling is enhanced as RH decreases from 60% to 20%, as shown in Figure S6 (Supporting Information). Our vertical TC probe also allowed measuring a temperature jump profile [42] across the air/water interface by penetrating the water surface, as demonstrated in Figure S7 and Movie S4 (Supporting Information).
The direct measurement of temperature profiles across liquid/gas phases would contribute to understanding not only the thermodynamics of evaporation but also the thermal management of various cooling systems.
Conclusion With the successful execution of the bi-metal meniscus-guided printing, we have built micro 3D TC networks for 4D thermometry. The printed TC has demonstrated a sufficient temperature sensitivity of 4.9 μV K−1 and a sub-millisecond response time (<0.4 ms), enabling the measurement of 4D (3D space + time) temperature fields generated by Joule heating and evaporative cooling. The experimental data obtained by the printed TC array were validated by both analytical theory and numerical simulation, showing good agreement. Although the analysis in this work started with a predetermined position and shape of the heat source, our scheme can be extended to deduce the location of the heat source and its shape from the measured 3D temperature field [[T]]. For this, one can begin by assuming a point source at a certain position (X_i, Y_i, Z_i) which emits heat with a power P_i. Using the heat transport equation, one can calculate the temperature T at the position (x, y, z) as $T(x, y, z) = T_\infty + \frac{P_i}{4\pi k\, r_{xyz,i}}$ (4), where T∞ is the temperature far away from the point source (which is the temperature of the substrate in our case) and r_{xyz,i} is the shortest distance from the source to the sensor. The difference between the temperatures measured at the TCs at (x, y, z) and (x′, y′, z′) will be $\Delta T = \frac{P_i}{4\pi k}\left(\frac{1}{r_{xyz,i}} - \frac{1}{r_{x'y'z',i}}\right)$ (5). Equation (5) and the measured 3D temperature field [[T]] would therefore enable one to find the position of the point source, and even the shape of the source if a more complex computational calculation is implemented. This approach could trace the motion of heat or cold sources in space, much like locating a sound source with 3D acoustic vector sensors. [43] Finite element calculation and/or machine learning could be used to improve the estimation. We believe our 4D micro-thermometry paves the way for exploring microscopic thermodynamics in physics, chemistry, healthcare, environmental monitoring, and so on. Practical utilization may necessitate further technical improvements: 1) to improve the voxel resolution, increasing the number of TC junctions would be necessary; 2) to widen the measurable temperature range, the stability of the micro-precision experimental apparatus against temperature change should be thoroughly considered; 3) quantitative studies on the reliability and lifetime of the printed TC would also be necessary. The ability to build a 3D network of heterogeneous junctions can be extended to a variety of electronic devices such as p–n junctions or Schottky diodes, memory cells, or field-effect transistor (FET) devices, to name a few. Therefore, our work unveils the full potential of 3D printing for realizing multi-dimensional micro-devices with unprecedented functionalities, not only in methodologies but also in various electronic applications.
Experimental Section Sample Preparation: For preparing a printing nozzle, a borosilicate glass micropipette with a diameter of 5 μm (1B100F-6, World Precision Instruments) was fabricated by a programmed heat-pulling process (P-97 Flaming/Brown Micropipette Puller, Sutter Instruments). For pipette cleaning, a 5-min sonication in acetone, isopropyl alcohol, and deionized water was used for each. Ag nanoparticles (NPs) (diameter: ≤10 nm) laden in n-tetradecane (50–60 wt.%, Sigma-Aldrich) were used.
To achieve continuous meniscus-guided printing, the Ag nanoink was further diluted with o-xylene in a 1:5 volumetric ratio. The Ag ink was then sonicated for 10 min to ensure complete mixing. Pt NP ink (diameter: 3 nm, 10–12 wt.%, UT Dots) was used directly in all printing conditions. Each metal NP ink was sonicated for 5 min before being introduced into the opening of the pipette.
Printing: A motorized three-axis stepping stage (Kohzu Precision) with 250 nm precision was used to position the metal NP ink-filled micropipette and form an ink meniscus on the substrate. Continuous-flow ejection of the bi-metallic NP inks was performed by guiding the pipette at a speed of 2 μm s−1. Horizontal termination of the pipette, which forms the printed Pt–Ag junction, was performed at a pipette speed of 1 mm s−1. The printing process was carefully monitored in real time using a side-view projection microscope consisting of a long-working-distance objective (50×, Mitutoyo Plan Apo) and a camera (DCC1545M, Thorlabs). In all printings, the relative humidity (RH) was maintained at 50% and the temperature was kept at room temperature.
Characterizations: The Seebeck coefficient of the printed Pt–Ag TC was analyzed as a combination of the thermoelectric power values from Pt and Ag single wires. To perform the thermoelectric characterization, a single wire of each type with a diameter of 2 μm and a length of ≈2 mm was 3D printed and placed, with the assistance of a pipette, between two gold (Au) electrodes with a 1.5 mm insulation gap. The two electrodes were connected to two probes from a multimeter (Keithley DAQ 6510, Tektronix), and the electric potential along each metallic wire was measured. One electrode was heated and kept hot while the other was cooled and kept cold. The generated electric potential was converted to the thermoelectric power of the Pt and Ag wires. For the response time of a TC probe, a continuous laser with a wavelength of 532 nm was directed onto the bi-metal junction of a freestanding TC probe. In the laser path, an optical chopper (SR540, Stanford Research Systems) generated pulses at 10, 100, 1000, and 1500 Hz to obtain sub-millisecond pulses. The ends of the Pt and Ag wires on the substrate were further connected to the voltmeter.
Temperature Measurement: A Cu electrode (10971.G6, Alfa Aesar) with a radius of 50 μm was fixed by a manual micro-holder, and both ends were connected to the power supply for Joule heating. A total of 36 channels from the printed 9 Pt and 27 Ag wires were connected to the 40-pin chip (W9530RC, Winslow) through Au electrodes and Au microwire bonding. In the chip with 36 active pins, 27 TC junctions were analyzed using a multichannel multimeter (Keithley DAQ 6510, Tektronix). When the electric power was applied to the Cu electrode, electric potential data points were obtained for each of the 27 channels in every measurement cycle. To control the humidity in the chamber of the measurement setup, a mixture of moist nitrogen and dry nitrogen gas was produced by two mass-flow controllers (SLA5800, Brooks Instrument). The RH was controlled in the 30 × 40 × 40 mm³ chamber and monitored in real time by a digital humidity sensor. To observe evaporative cooling at the microcapillary, the temperature of the substrate was controlled using a thermoelectric heating module and a K-type TC probe. A proportional-integral-derivative (PID) control system with feedback was utilized to maintain the temperature.
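Returning to the source-localization scheme outlined in the Conclusion, the sketch below generates synthetic TC-array temperatures from an assumed point source using Equation (4) and then recovers the source position and power by nonlinear least squares. This is only an illustration of the idea, not the authors' implementation; the conductivity, far-field temperature, sensor grid, and source parameters are all hypothetical.

```python
# Illustrative inversion of a 3D temperature field to locate a point heat source,
# following the point-source relation T = T_inf + P / (4*pi*k*r) discussed in the
# Conclusion. All numbers below (k, T_inf, sensor grid, source position/power) are
# hypothetical; this is not the authors' code.
import numpy as np
from scipy.optimize import least_squares

k = 0.026          # assumed thermal conductivity of air, W m^-1 K^-1
T_inf = 295.0      # assumed far-field (substrate) temperature, K

# Hypothetical 3 x 3 x 3 grid of TC-junction coordinates (m).
xs = np.array(np.meshgrid([0, 1e-4, 2e-4], [0, 1e-4, 2e-4], [0, 3e-5, 6e-5])).reshape(3, -1).T

# Synthetic "measurements" from an assumed point source (position in m, power in W).
true_src, true_P = np.array([1.2e-4, 0.8e-4, 1.5e-4]), 2.0e-4
T_meas = T_inf + true_P / (4 * np.pi * k * np.linalg.norm(xs - true_src, axis=1))

def residuals(params):
    X, Y, Z, P = params
    r = np.linalg.norm(xs - np.array([X, Y, Z]), axis=1)
    return T_inf + P / (4 * np.pi * k * r) - T_meas

fit = least_squares(residuals, x0=[5e-5, 5e-5, 1e-4, 1e-4],
                    x_scale=[1e-4, 1e-4, 1e-4, 1e-4])
print("recovered (X, Y, Z, P):", fit.x)   # should approximate the assumed source
```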
Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
2023-05-08T06:16:55.264Z
2023-05-07T00:00:00.000
{ "year": 2023, "sha1": "4ec524d07e70d2cc360a211179bbe7ab9ba1cf3d", "oa_license": "CCBYNC", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/adma.202301704", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "22013b41d74a51e3621ba6a3ded4c57c6a7897c7", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
53558871
pes2o/s2orc
v3-fos-license
The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay $\gamma$-quanta by the residuals in the activated structures and scoring the prompt doses of these $\gamma$-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with both. The code system has been applied to calculation of the residual dose of the target station for the Mu2e experiment, and the results have been compared to approximate dosimetric approaches.
Introduction High residual radiation doses are an important issue in all accelerator-based experiments, both collider and fixed-target. The particle beams accelerated to relativistic energies either collide with each other or impinge upon targets, producing fluxes of secondary particles. Among these fluxes, there are substantial fractions of high-energy stable particles arising in spallation and fragmentation reactions, capable of inducing inelastic reactions in the structural materials surrounding those targets and collision points and leading to nuclear transmutations in these structures [1]. As a result of neutron activation of the walls and other materials that have been exposed to particle fluxes, the structures remain radioactive, emitting α, β, and γ radiation even after the beam has been turned off. This radioactivity can pose a hazard to personnel, who may have to enter the enclosing buildings periodically for maintenance such as target replacement. Thus, quantifying the severity of this hazard is necessary for compliance with radiological standards (see, for example, [2]). Despite this importance, general procedures for the calculation of this dose are not common. The typical method of its calculation involves simulation of high-energy particle collisions with nuclei and low-energy neutron capture in a Monte Carlo particle simulation program (e.g., MARS15 [3], FLUKA [4,5], MCNP6 [6]), producing inventories of residual nuclei and converting the activities of those nuclei after a certain period of time (calculated using codes like DeTra [7]) to an estimate of residual dose based on conversion factors. Such methods have large uncertainties, typically a factor of two or three [8]. Additionally, such methods are only valid for specific geometries (e.g., semi-infinite slabs), and adjustments, some of which require the use of symmetries of a particular irradiated object, must be made to study small objects or to compute doses at a distance [9,10]. Instead of these dosimetric methods, it is possible to estimate radiation doses by Monte Carlo simulation of gamma rays emitted from activated materials. Such an approach should be more accurate and more general, albeit more computationally intensive, than dosimetric methods. FLUKA [4,5] and the FLUKA-based code DORIAN [11] are two such implementations.
FermiCORD also uses this approach but is based on the MARS15 code and employs the Delaunay triangulation for complex geometries, which is not implemented elsewhere. The codes are sequentially used for 1) sectioning the input geometry into parts of appropriate size (involving the Delaunay triangulation), 2) preparing the list of relevant residual nuclides and their γ-quanta, 3) analysis of the histograms of residual dose on contact, and 4) preparing the γ-ray sampling routines.
Description of algorithm The algorithm implemented in FermiCORD splits the procedure for calculating residual doses into two stages: a determination of radionuclides and a simulation of the decay of these nuclides. Both stages of this algorithm rely on the Monte Carlo particle transport code MARS15, a manual for which can be found at [3]. The flowchart of the algorithm begins with Stage 1, in which regions in the geometry are split as necessary.
Preparation of geometry files In order to calculate the nuclide distributions accurately, the geometry description file (often specified in the .gdml format) typically requires a few modifications. First, large regions must be subdivided into smaller ones if large variations in neutron bombardment (and thus in radionuclide production) are present across the material. Second, each region in which nuclide production will be calculated must be assigned to a unique material. In general, this requires creating a copy of the relevant material and assigning the region to that copy. Many regions must be subdivided manually, but for regions that can be modeled as a prism with a polygonal base (for example, the ceiling of a room), it often suffices to triangulate the base and then divide the region into triangular prisms. Since this is often the case, an algorithm was developed to triangulate these regions. For this purpose, the triangles used should not be too thin and should be as close as possible to equilateral. A thin triangle will be longer in some direction, and if the radiation exposure varies significantly along the length of the triangle, this will introduce inaccuracies. In the first step, the algorithm randomly places points inside each of the regions; these points will form the vertices of the new triangles. Vertices too close to the boundary of the region, which would produce thin triangles, are rejected. (Figure 1: an arbitrary triangulation generally includes many long, thin triangles, which are undesirable here, whereas the Delaunay triangulation seeks to avoid these and thus provides adequate resolution in all directions.) In the second stage, the added vertices and those on the boundary are triangulated in what is called a Delaunay triangulation. This triangulation has the properties that the circumcircle of any three points does not enclose any additional points and that the smallest angle in the triangulation is as large as possible. These properties have the consequence that long, thin triangles (which have large circumcircles and at least one small angle) are avoided (see Figure 1). A simple algorithm for accomplishing this is to construct an arbitrary triangulation and then adjust it until it becomes Delaunay. Following the approach in [12], a pair of adjacent triangles that share a common edge is inspected.
If the two angles in the triangles opposite the common edge have a combined measure of greater than 180 degrees, the smallest angle measure in this pair can be increased by removing the common edge and replacing it with the edge connecting the other two (previously disconnected) vertices. This procedure is performed on all pairs of adjacent triangles in the triangulation, but since each edge-flipping step creates new triangles, it is necessary to sweep through the triangulation again and repeat the procedure until it is possible to sweep through the entire triangulation without flipping any edges. The code, in its current stage, requires that any regions passed to it be convex. (A convex shape is one whose interior angles are all less than 180 degrees.) Thus, the user is required to manually divide the region into convex subregions, which this algorithm will then subdivide into triangles.
Stage 1. Production of Radionuclides During a simulation of particle transport, MARS15 has the capability to calculate an inventory of radionuclides produced within a material based on collisions of particles with nuclei of that material. Such inventories are saved for every material that the user specifies, up to a total of forty materials, corresponding to forty regions [3]. Since these inventories are summed over the entire region, this has the potential to introduce significant uncertainties for large regions, which motivated the subdivision of these regions into smaller ones. Even with smaller regions, however, there is some variation in nuclide production within a region. To estimate this distribution of nuclide production, histograms of residual dose on contact within that region were constructed. While MARS15's estimation of residual dose on contact is imperfect, it should at least capture the relative levels of activation within a material. The probability density of producing a radionuclide at a point within a region is assumed to be proportional to the residual dose on contact at that point. The problem of sampling position is difficult in general, especially for irregularly shaped regions (as are often encountered in practice). An ideal solution would be to construct a three-dimensional histogram of residual dose, but MARS15 does not have this capacity, and even if it were possible, it would be computationally intensive. Thus, procedures were developed for estimating the distribution from a more limited set of histograms.
Sampling position in the ceiling. As one application of this code, the Mu2e Target Station was studied. Since the ceiling of this room is irregularly shaped in the horizontal directions but has the same height at all points under consideration, the ceiling was divided into triangles to account for horizontal variation, and histograms were used to estimate vertical variation in nuclide production. Pairs of orthogonal histograms were used to determine the depth profile at various locations in the ceiling. Figure 2 illustrates the horizontal position of the histograms; each histogram also extends vertically into the depth of the ceiling. To reduce computational time, two histograms were used above each of the lines in Figure 2: one with finely divided bins for the lower 30 cm of the ceiling, and one with coarser bins for the remainder of the ceiling, where nuclide production is lower and emitted gammas are more heavily shielded. The depth profile for the histogram above the target (histogram B) is shown in Figure 3.
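Returning to the edge-flipping step described at the start of this passage, the short sketch below shows the flip test for one pair of adjacent triangles: if the angles at the two vertices opposite the shared edge sum to more than 180 degrees, the shared edge should be replaced. The data structures used inside FermiCORD itself are not described in the text, so this Python sketch is purely illustrative.

```python
# Self-contained illustration of the edge-flip test used to make a triangulation
# Delaunay: two adjacent triangles share the edge (a, b); flip if the angles at the
# two opposite vertices sum to more than 180 degrees.
import numpy as np

def angle_deg(p, a, b):
    """Angle at vertex p in the triangle (p, a, b), in degrees."""
    u = np.asarray(a, float) - np.asarray(p, float)
    v = np.asarray(b, float) - np.asarray(p, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def needs_flip(shared_edge, opp1, opp2):
    a, b = shared_edge
    return angle_deg(opp1, a, b) + angle_deg(opp2, a, b) > 180.0

# A long, thin pair of triangles that the Delaunay criterion rejects:
print(needs_flip(((0, 0), (4, 0)), (2, 0.2), (2, -0.2)))   # True -> flip the edge
```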
In Figure 2, histograms C and D are used for sampling from the sections of the ceiling near the beam dump (the four triangular regions in the lower left corner), and A and B for the remainder of the ceiling. Given the horizontal coordinates of a point on the ceiling (which are chosen from a uniform distribution on the triangular region being sampled), this point is projected onto the north-south and east-west histograms. The depth profiles corresponding to the projections on each of the histograms were averaged to estimate the depth spectrum at the point of interest. (In the case that the relevant coordinate is outside the range of the histogram, the highest or lowest coordinate value of the histogram was used to obtain the profile.) The depth into the ceiling was sampled from this spectrum. While this method was originally written with the ceiling in mind, it can also be used to sample within the floor and walls. The penetration depth of radiation into the ceiling should be roughly the same as the penetration depth into the walls, so the ceiling histogram can also be used for sampling radiation emitted from the walls and floor. For determining the depth in regions with irregular orientation, the vector normal to the interior surface must be specified manually.
Other sampling techniques. The method that was used for the ceiling does not generalize to all regions. In particular, creating two-dimensional histograms assumed that the region was aligned with the coordinate axes and that the region had a relatively simple geometry. In practice, these assumptions may not hold for all regions of interest. For small regions, such as the target, it often suffices to sample uniformly from the region. For larger regions with significant variation in activity and where there are no symmetry considerations that might permit a simplification (such as an irregularly shaped concrete region that surrounds the beam dump, shown in the lower left corner of Figure 5a), it is possible to approximate a three-dimensional histogram by dividing the region into slices and taking a two-dimensional histogram of each slice. Since this requires additional computational time, it is advisable to conduct simulations of these regions during a separate run with lower statistics.
Stage 2. Decay of Radionuclides The quantity of various isotopes in a decay chain as a function of time is given by the solution to the Bateman equations, a system of coupled differential equations for radionuclide decay. These equations can be extended to include production of nuclides from external sources, such as an accelerator beam or a reactor. The generalized equations are solved using the program DeTra [7], which can be called from MARS15. From DeTra, a list of isotopes with their corresponding concentrations and activities is obtained for each region. This list of isotopes is compared to the library of gamma rays developed for the SHAMAN nuclear identification system [13] to determine the rate of gamma ray production at various energies. Due to the very short penetration depths of gamma rays with energies less than 100 keV, these are neglected in the analysis (although this threshold can be modified by users).
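The ceiling-sampling procedure described above (a horizontal point drawn uniformly within the selected triangle, and a depth drawn from the average of the two orthogonal depth profiles) can be sketched as follows. The triangle, bin edges, and dose-on-contact profiles below are hypothetical placeholders, not FermiCORD output.

```python
# Sketch of the ceiling position sampling: uniform horizontal point in a triangle,
# then a depth drawn from the averaged orthogonal depth profiles at that point.
import numpy as np
rng = np.random.default_rng()

def sample_in_triangle(v0, v1, v2):
    r1, r2 = rng.random(), rng.random()
    if r1 + r2 > 1.0:                       # reflect to stay inside the triangle
        r1, r2 = 1.0 - r1, 1.0 - r2
    return v0 + r1 * (v1 - v0) + r2 * (v2 - v0)

def sample_depth(profile_ns, profile_ew, bin_edges):
    p = profile_ns + profile_ew             # average of the two projections (up to a constant)
    p = p / p.sum()
    i = rng.choice(len(p), p=p)
    return rng.uniform(bin_edges[i], bin_edges[i + 1])

tri = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 2.0])]   # m, hypothetical
edges = np.arange(0.0, 35.0, 5.0)                                          # depth bins, cm
ns = np.array([8.0, 4.0, 2.0, 1.0, 0.5, 0.25])    # hypothetical dose-on-contact profiles
ew = np.array([7.0, 4.5, 2.0, 1.0, 0.5, 0.25])
print(sample_in_triangle(*tri), sample_depth(ns, ew, edges))
```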
The rate of gamma ray production r of an isotope I is the product of the activity A (the number of nuclear decays per second) and the number of gamma rays per decay above the 100 keV threshold, which is the sum of the branching ratios p_j corresponding to these energies: $r = A \sum_j p_j$. Emitted positrons are assumed to annihilate immediately, producing two 511 keV gamma rays. When sampling gamma emission, a region is randomly selected based on its total rate of gamma production, and a gamma energy is randomly selected based on the relative production rates of gammas within that region. The point of emission within the region is chosen based on the histograms of residual dose (as described above), and the angle of emission is assumed to be isotropic. This gamma ray is then tracked using MARS15, and its energy deposition at relevant points is computed using a separate histogram in the interior of the room.
Validation of FermiCORD FermiCORD was benchmarked against the FLUKA-based code DORIAN. FLUKA in turn has been benchmarked against measurements at accelerator facilities and cosmic ray experiments [14]. In [11], a simulation in DORIAN is described in which a 0.433 GeV proton beam impinges on a copper target with radius 5 cm and length 50 cm (the setup shown in Figure 2 of [11]). The target is irradiated for one year and then allowed to cool for a variable length of time before the residual dose is measured 50 cm upstream of the target. This simulation was repeated in FermiCORD for comparison with DORIAN. The target was subdivided into forty 1.25 cm-thick slices to account for variations in nuclide production along the length of the target. Since the distance between the point of measurement and the target is much larger than the target radius, the radial distribution of nuclide production in the target does not significantly affect the calculated dose and was therefore ignored here. As a result, division of the target into regions was sufficient for estimating the distribution; no histograms were needed. The results are compared to DORIAN in Table 1. In this simulation, FermiCORD's results agree with those from DORIAN to within about 10% or better at all cooling times investigated.
Calculation of the residual dose for the Mu2e Target Station The future Mu2e experiment at Fermilab will attempt to observe the neutrinoless conversion of muons into electrons, which, if discovered, would reveal physics beyond the Standard Model. To generate the muons, a high-intensity proton beam (6 × 10¹² protons per second with an energy of 8 GeV) will impinge on a tungsten target, producing pions that will then decay into muons [15]. A beam dump is located behind the target to capture particles produced in the collisions, although many particles instead strike the walls, ceiling, floor, and other structures in the room. The descriptions of the geometry and of the magnetic fields were taken from a proposed design for the experiment. The irradiation and cooling times were chosen to be 1 year and 1 week, respectively, as the experiment is expected to shut down annually for maintenance. In this simulation, emissions from the target, the beam dump and surrounding concrete, the heat and radiation shield, the end cap, the walls, the ceiling, and the floor were considered. The ceiling and floor were triangulated as described above, and the walls were divided into sections as well. Residual radiation levels in the Production Solenoid hall, where the target and beam dump are located, have previously been estimated.
One such calculation was made in [16], which considers only the activity of the region surrounding the target and the beam dump (and thus excludes contributions from the walls, floor, and ceiling). Additionally, the sampling methods used in [16] were less sophisticated than those in FermiCORD: the target was assumed to be a point source, and the heat and radiation shield and the end cap were sampled uniformly. The beam dump was divided into vertical slices parallel to the front face, but within each slice, all the activity was assumed to emanate from a cylindrical region near the center of the slice. The original calculation in [16] was performed using a different proposal for the Mu2e design, but, for comparison purposes, the calculations were modified to match the design that was examined with FermiCORD. A comparison between FermiCORD and this revised calculation is shown in Figure 4. In particular, the dose calculated in corners is noticeably higher using FermiCORD because of the radiation emitted by the walls. In Figure 5, residual doses calculated with FermiCORD at various positions in the Production Solenoid hall are displayed.
Conclusion The code system FermiCORD for the MARS15 code, for calculation of the accelerator-induced residual dose in complex geometries and for arbitrary irradiation profiles, has been developed, benchmarked, and applied to simulations of the residual dose in the Mu2e PS Hall. The code system works with the MARS15 code and requires two stages: one for calculation of the inventories of residual nuclides in the components of a facility of given topology, and one for sampling the emission of secondary photons and scoring the dose at the locations of interest. Calculations indicate that although the approximate approach, which consists of scoring the residual dose from only the few most radioactive components, is rather simple and less time consuming, an accurate determination of the dose at remote locations, which is of particular importance for the safety of personnel, requires a simulation relying on the full set of sources of radioactivity. The FermiCORD system for the MARS15 code accounts for the full set of sources and therefore should be more accurate.
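As a closing illustration of the Stage 2 bookkeeping described in the algorithm section, the sketch below computes the gamma production rate r = A·Σ_j p_j (plus two 511 keV photons per positron) and samples an emission region, line energy, and isotropic direction. The nuclide inventories, region names, and rates are hypothetical examples, not DeTra or FermiCORD output.

```python
# Illustrative gamma-emission bookkeeping: production rate per isotope and sampling
# of region, line energy, and isotropic direction, as outlined in Stage 2.
import numpy as np
rng = np.random.default_rng()

def gamma_rate(activity, branching_ratios, positrons_per_decay=0.0):
    """r = A * sum_j p_j, plus two 511 keV photons per emitted positron."""
    return activity * (sum(branching_ratios) + 2.0 * positrons_per_decay)

# {region: [(line energy in keV, emission rate in 1/s), ...]} -- hypothetical inventory
regions = {
    "target_slice_1": [(1173.0, 4.0e5), (1332.0, 4.0e5), (511.0, 1.0e5)],
    "ceiling_tri_7":  [(661.7, 5.0e4)],
}

def sample_emission():
    names = list(regions)
    totals = np.array([sum(rate for _, rate in regions[n]) for n in names])
    region = names[rng.choice(len(names), p=totals / totals.sum())]
    energies, rates = zip(*regions[region])
    energy = energies[rng.choice(len(energies), p=np.array(rates) / sum(rates))]
    mu, phi = rng.uniform(-1.0, 1.0), rng.uniform(0.0, 2.0 * np.pi)   # isotropic direction
    direction = (np.sqrt(1 - mu**2) * np.cos(phi), np.sqrt(1 - mu**2) * np.sin(phi), mu)
    return region, energy, direction

print(gamma_rate(1.0e5, [0.999, 0.9998]))
print(sample_emission())
```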
2018-09-22T17:42:22.000Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "e13ae0d46cf90fc2e91acc1074be45084d8dc5d4", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1609.00417", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "e13ae0d46cf90fc2e91acc1074be45084d8dc5d4", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
245032699
pes2o/s2orc
v3-fos-license
Low plasma renin activity is independently associated with kidney disease progression in patients with type 2 diabetes and overt nephropathy, including those with impaired kidney function: a 2-year prospective study. Plasma renin activity (PRA) is lower in patients with diabetic nephropathy (DN) than in healthy individuals. However, the association, if any, between PRA and renal outcomes in patients with DN remains uncertain. In a 2-year prospective observational study, we aimed to investigate the association of PRA with the decline in kidney function in patients with DN. We studied 97 patients with DN who were categorized according to tertile (T1-T3) of PRA. The annual changes in estimated glomerular filtration rate (eGFR) (mL/min/1.73 m²/year) were determined from the slope of the linear regression curve for eGFR. The secondary endpoint was defined as a composite of the doubling of serum creatinine or end-stage renal disease. Results showed that kidney function declined more rapidly in the lower tertiles of PRA (median value [interquartile range] of the annual eGFR changes: -8.8 [-18.5 to -4.2] for T1, -8.0 [-14.3 to -3.2] for T2, and -3.1 [-6.3 to -2.0] for T3; p for trend <0.01). Multivariable linear regression analyses showed that, compared with T3, T1 was associated with a larger annual change in eGFR (coefficient, -4.410; 95% confidence interval [CI], -7.910 to -0.909 for T1). Composite renal events occurred in 46 participants. In multivariable Cox analysis, the lower tertiles of PRA (T1 and T2) were associated with higher incidences of the composite renal outcome (T2: hazard ratio [HR], 4.78; 95% CI, 1.64-13.89; T1: HR, 4.85; 95% CI, 1.61-14.65) than T3. In conclusion, low PRA is independently associated with poor renal outcomes in patients with DN.
Activation of the renin-angiotensin-aldosterone system (RAAS) induces hemodynamic and structural changes that lead to kidney disease progression [5]. Pharmacological RAAS inhibition using angiotensin (Ang)-converting enzyme (ACE) inhibitors or Ang II receptor blockers ameliorates proteinuria and the decline in kidney function in patients with DN [6][7][8]. Measurement of the circulating components of the RAAS in animal models and humans with diabetes mellitus is an inaccurate means of predicting the state of RAAS activation or its response to intrarenal inhibition [9]. Conversely, high intrarenal concentrations of RAAS components, and particularly of Ang II, have been shown to play a role in the progression of kidney damage in an animal model [10] and in the clinical progression of IgA nephropathy [11]. DN may feature low circulating concentrations of RAAS components and high local activity of the RAAS [12]. Plasma renin activity (PRA) is lower in patients with diabetes than in normal individuals [13][14][15][16][17], which implies a "renin paradox" [13,18,19]. The intrarenal RAAS also plays an important role in the pathogenesis of DN [20]. It was previously hypothesized that low PRA reflects high intrarenal Ang II production [13]. However, the association of PRA with the progression of kidney disease in patients with DN remains to be fully characterized. Overt nephropathy is defined by the presence of macroalbuminuria (urine albumin-to-creatinine ratio ≥300 mg/g creatinine [Cr]) or persistent proteinuria (urine protein-to-creatinine ratio ≥0.5 g/g Cr), according to the 2014 Japanese classification of DN [21]. In the Reduction of Endpoints in NIDDM with the Angiotensin II Antagonist Losartan (RENAAL) study, DN was diagnosed in participants if their urine albumin-to-creatinine ratio was >300 mg/g Cr or their 24-h urine protein was >500 mg.
In addition, a serum Cr (SCr) concentration of 1.3-3.0 mg/dL (1.5-3.0 mg/dL in men weighing >60 kg) was an inclusion criterion [3]. In the study conducted by Rossing et al., patients with type 2 diabetes and DN who had macroalbuminuria (albuminuria ≥300 mg/24 h) and a median SCr of 1.20 mg/dL (interquartile range [IQR], 0.56-3.12 mg/dL) were enrolled [4]. In another study of DN, the eligibility criteria included SCr <2.59 mg/dL, 24-h urinary proteinuria >500 mg, and the presence of diabetic retinopathy [13]. In this context, several studies have been conducted in patients with type 2 diabetes and overt nephropathy who presented with various degrees of kidney dysfunction. Furthermore, a previous study divided patients with type 2 diabetes and reduced kidney function into proteinuric (urine albumin-to-creatinine ratio ≥300 mg/g Cr) and non-proteinuric (urine albumin-to-creatinine ratio <300 mg/g Cr) diabetic kidney disease (DKD). Those with proteinuric DKD (overt nephropathy) had poor renal outcomes compared with those with non-proteinuric DKD [22]. Therefore, the current study aimed to determine the association between PRA and the decline in kidney function in patients with type 2 diabetes and overt nephropathy, including those with renal impairment.
Study design and population We performed a 2-year prospective observational study of 118 patients with DN who were admitted to our hospital for chronic kidney disease assessment between June 2009 and September 2019. The inclusion criteria were as follows: type 2 diabetes mellitus, proteinuria >0.5 g/day, diabetic retinopathy, and SCr ≤4.0 mg/dL. The exclusion criteria were a history of partial nephrectomy for renal cell carcinoma, staghorn calculi of the kidneys, and/or severe renal atrophy. Of the 118 enrolled patients, 14 were lost to follow-up; three required dialysis for acute exacerbation of kidney function caused by infection, congestive heart failure, or surgery for femoral neck fracture; two died; and two complied poorly with the medication and were therefore excluded. Therefore, data from 97 participants were analyzed. All the participants were discharged from the hospital without renal replacement therapy and were subsequently followed up. The Ethics Committee of the National Hospital Organization Kyushu Medical Center approved the study (approval numbers 09-09 and 11-29), and all the participants provided their written informed consent.
Definition of outcomes The annual change in estimated GFR (eGFR) over the entire study period was determined from the slope calculated during linear regression analysis of the eGFR and is expressed as mL/min/1.73 m²/year. The alternative study endpoint was a composite of doubling of SCr or ESRD. The presence of kidney dysfunction that required maintenance hemodialysis or peritoneal dialysis was taken to indicate the development of ESRD.
Data collection After admission, all the participants were maintained on a hospital diet that included <6 g/day of salt. In the early morning after an overnight fast, the participants were placed in a supine position for blood sample collection. On the day of admission, 24-h urine collection was started, and the 24-h urinary sodium excretion and daily proteinuria were measured. To convert mEq of sodium to grams of salt, the quantity of sodium in mEq was multiplied by 0.0585.
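The two simple calculations defined just above, the annual eGFR change (the slope of a least-squares line through serial eGFR values) and the salt equivalent of 24-h urinary sodium, can be written as in the sketch below; the follow-up values shown are hypothetical and serve only to illustrate the arithmetic.

```python
# Illustration of the annual eGFR change (regression slope) and the sodium-to-salt
# conversion described above; the example values are hypothetical.
import numpy as np

def annual_egfr_change(times_years, egfr_values):
    slope, _ = np.polyfit(times_years, egfr_values, deg=1)
    return slope                      # mL/min/1.73 m^2 per year

def urinary_salt_g(sodium_meq):
    return sodium_meq * 0.0585        # grams of salt per mEq of sodium

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
egfr = np.array([32.0, 28.5, 25.0, 22.0, 18.5])
print(round(annual_egfr_change(t, egfr), 1))   # about -6.7 in this hypothetical example
print(round(urinary_salt_g(150.0), 1))         # 150 mEq/day of sodium ~ 8.8 g of salt
```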
B-type natriuretic peptide (BNP), PRA, and plasma aldosterone concentration (PAC) were measured simultaneously in each participant. The lower limits of detection for BNP, PRA, and PAC were 5.8 pg/mL, 0.1 ng/mL/h, and 10 pg/mL, respectively. For statistical analyses, samples below these lower limits (four for BNP, seven for PRA, and 26 for PAC) were assigned the values 5.8 pg/mL, 0.1 ng/mL/h, and 10 pg/mL, respectively. eGFR (mL/min/1.73 m²) was calculated using the new Japanese equation as follows: eGFR = 194 × SCr^−1.094 × age^−0.287 (× 0.739 for women) [23]. We then measured the long and short axes of the kidneys of each participant using abdominal ultrasonography and recorded the mean sizes. The cardiothoracic ratio (CTR) at the time of admission was also measured. All the participants underwent a clinical examination and were interviewed at presentation. Their medical history and outpatient records were also evaluated in detail. Demographic information (age and sex), atherosclerotic risk factors (hypertension, history of cigarette smoking, dyslipidemia, and diabetes mellitus), and the presence or absence of diabetic retinopathy were recorded. The current status or history of cigarette smoking was recorded. Hypertension was defined as a systolic blood pressure of >140 mmHg, a diastolic blood pressure of >90 mmHg, or the use of antihypertensive drugs. Dyslipidemia was defined as a plasma triglyceride concentration >150 mg/dL, a plasma low-density lipoprotein cholesterol concentration >140 mg/dL, a plasma high-density lipoprotein cholesterol concentration <40 mg/dL, or the use of lipid-lowering drugs. Diabetes mellitus was defined as a history of, or a current, fasting plasma glucose concentration >126 mg/dL or the use of hypoglycemic agents. The duration of diabetes mellitus was recorded from the time of initial diagnosis. Body mass index was calculated as the patient's weight in kilograms divided by the square of height in meters. Previous prescriptions for RAAS inhibitors, calcium blockers, α- and β-blockers, and diuretics were reviewed for each participant at presentation.
Statistical analysis Continuous data are expressed as mean ± SD or median (IQR), depending on their distribution, and categorical data are expressed as number (%). Participants were categorized according to tertile (T1-T3) of PRA. Prior to statistical analysis, PRA, PAC, and BNP, which were not normally distributed, were log-transformed to achieve approximately normal distributions. For nonparametric data, the significance of differences between two groups was evaluated using the Wilcoxon rank sum test. The relationships between two continuous variables were evaluated using Spearman's rank correlation coefficients. The relationships between log PRA and other clinical parameters and that between PRA and the annual change in eGFR were also evaluated using linear regression analyses. Survival curves were constructed using the Kaplan-Meier method and evaluated using the log-rank test. Furthermore, the association between PRA and a composite of doubling of SCr or ESRD was evaluated using a Cox proportional hazards model. We also assessed the discriminative value of the model for composite renal outcomes using Harrell's C-statistic. The models considered were: a basic model, adjusted for age, sex, smoking, systolic blood pressure, dyslipidemia, body mass index, daily proteinuria, hemoglobin, eGFR, serum albumin concentration, and log PAC; and the basic model with the addition of log PRA.
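Returning to the eGFR formula quoted in the data-collection paragraph above, it can be written as a small function as follows; the example inputs are hypothetical.

```python
# The Japanese eGFR equation cited above, as a small helper function.
def egfr_japanese(scr_mg_dl, age_years, female=False):
    egfr = 194.0 * scr_mg_dl ** -1.094 * age_years ** -0.287
    return egfr * 0.739 if female else egfr   # mL/min/1.73 m^2

print(round(egfr_japanese(1.2, 65), 1))                # hypothetical 65-year-old man
print(round(egfr_japanese(1.2, 65, female=True), 1))   # same SCr and age, woman
```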
Scores of 1.0 and 0.5 indicated perfect and poor discrimination, respectively. C-indexes and 95% CIs were calculated and compared using the somersd package and lincom commands, respectively [24]. All data were analyzed using STATA version 14 (Stata Corp., College Station, TX, USA), with p < 0.05 being accepted as statistically significant.
Baseline characteristics The 97 participants comprised 77 men and 20 women and had a mean age of 65 years (range, 30-89 years). Their median (IQR) PRA was 0.6 (0.3-1.4) ng/mL/h. Three participants were diagnosed with DN using renal biopsy and the remaining 94 were diagnosed on the basis of clinical findings. Table 1 summarizes the baseline clinical characteristics of the participants, according to PRA tertile. The duration of diabetes mellitus did not differ among the PRA tertiles. Regarding the use of prescription drugs, the prevalence of the use of RAAS inhibitors did not differ between the tertiles of PRA, whereas that of diuretic use increased from T3 to T1 of PRA. The median (IQR) PRA values for participants who were or were not using an RAAS inhibitor at baseline were 0.6 (0.3-1.4) and 0.4 (0.3-1.1) ng/mL/h, respectively (p = 0.93). The median (IQR) proteinuria also did not significantly differ between participants who were or were not using an RAAS inhibitor (proteinuria: 3.49 [1.85-6.96] g/day for those using an RAAS inhibitor and 3.25 [0.93-6.71] g/day for those not using an RAAS inhibitor; p = 0.44). Conversely, the median (IQR) eGFR was lower in participants who were using an RAAS inhibitor than in those who were not (eGFR: 23.1 [15.8-32.6] mL/min/1.73 m² vs. 41.8 [28.5-53.8] mL/min/1.73 m²; p < 0.01). All 10 participants who had not previously been prescribed an RAAS inhibitor were treated in the same way during follow-up. Conversely, six participants who were taking an RAAS inhibitor at baseline discontinued their use because of possible side effects. At the end of the study period, 91 participants were being treated using an RAAS inhibitor. Of the 97 participants, three (3%), eight (8%), 24 (25%), 46 (47%), and 16 (16%) were categorized as having an eGFR of ≥60 mL/min/1.73 m², 45 to <60 mL/min/1.73 m², 30 to <45 mL/min/1.73 m², 15 to <30 mL/min/1.73 m², and <15 mL/min/1.73 m², respectively. As the PRA decreased, daily proteinuria worsened and the serum albumin concentration decreased. eGFR and urinary salt excretion showed no significant differences between the tertiles of PRA, but BNP increased from T3 to T1 of PRA. Table 2 shows the relationships between log PRA and other clinical parameters, assessed using univariable linear regression analyses. Log PRA was positively associated with serum albumin but inversely associated with systolic blood pressure, daily proteinuria, log BNP, and CTR. In contrast, there were no relationships between log PRA and RAAS inhibitor use, diuretic use, eGFR, log PAC, or urinary salt excretion.
Associations of PRA with the annual change in eGFR The median (IQR) annual changes in eGFR (mL/min/1.73 m²/year) in each tertile of PRA were -8.8 (-18.5, -4.2) for T1, -8.0 (-14.3, -3.2) for T2, and -3.1 (-6.3, -2.0) for T3 (p for trend <0.01). As shown in Fig. 1, log PRA had a significant positive relationship with the annual change in eGFR. Table 3 shows the relationships between PRA and the annual change in eGFR in multivariable linear regression analyses.
In the fully adjusted model (Model 3), log PRA (1-log unit increment) was significantly associated with the annual change in eGFR. Compared with T3 of PRA, T1 showed a significant inverse association with the annual change in eGFR.
Associations of PRA with composite renal outcomes During the study, 46 participants manifested renal events (a composite of doubling of SCr or ESRD). In Kaplan-Meier analysis, the prevalence of renal events was significantly higher in participants in the lower tertiles (T1-T2) of PRA (Fig. 2). Table 4 shows the HRs for the composite renal outcomes associated with PRA. In the fully adjusted model (Model 3), log PRA (1-log unit increment) was significantly associated with the composite renal outcomes, and T1 and T2 were significantly associated with poorer renal outcomes than T3.
Discriminative values for predicting composite renal outcomes Regarding the discriminative values for composite renal outcomes, the C-indexes for the basic model and for the basic model + log PRA were 0.8148 (95% CI, 0.7583-0.8713) and 0.8474 (95% CI, 0.8001-0.8946), respectively. Thus, the addition of log PRA to the basic model significantly improved the discrimination for composite renal outcomes, showing an increase in Harrell's C-statistic, with a difference of +0.0326 (p = 0.02).
Sensitivity analyses Log BNP showed a strong inverse association with log PRA (Table 2). Therefore, we also performed sensitivity analyses for the 91 participants with available BNP data (Supplementary Table 1). After adjustment for the addition of log BNP to the covariates included in Model 3 (Tables 3 and 4), log PRA (1-log unit increment) remained associated with the annual change in eGFR (Supplementary Table 1). After the exclusion of 10 participants who were not taking an RAAS inhibitor at baseline, 87 were evaluated in alternative sensitivity analyses (Supplementary Table 2). After adjustment for the covariates included in Model 3 (Tables 3 and 4), log PRA (1-log unit increment) was associated with the annual change in eGFR (coefficient, 1.698; 95% CI, 0.221-3.175); and compared with T3, T1 was associated with the annual change in eGFR (coefficient, -4.732; 95% CI, -8.541 to -0.922 for T1). In multivariable Cox analyses, log PRA (1-log unit increment) was associated with the composite renal outcome (HR, 0.58; 95% CI, 0.39-0.87), and T1 and T2 were associated with poorer renal outcomes than T3.
Discussion In this 2-year prospective observational study, we investigated the associations of PRA with the progression of kidney disease in patients with DN. Low PRA was found to be associated with a rapid decline in kidney function, and participants with low PRA were found to be at a higher risk of a composite of doubling of SCr or ESRD. To the best of our knowledge, this is the first study to demonstrate a significant association between PRA and renal outcomes in patients with DN. Local activation of the RAAS is important in various tissues, including the brain, heart, adrenal glands, vasculature, and kidneys [25]. In particular, the intrarenal RAAS is unique because all the components necessary to generate intrarenal Ang II are present along the nephron, in the glomerulus and the interstitial and intratubular compartments [25][26][27]. In a diabetic milieu, angiotensinogen expression is high in proximal tubular cells and is induced in glomerular mesangial cells [28][29][30]. Urinary angiotensinogen may also represent a biomarker of early dysregulation of the intrarenal RAAS in DN [25].
In diabetes, high glomerular ACE activity and low glomerular ACE2 activity result in the excess accumulation of glomerular Ang II, causing albuminuria and/or glomerular injury [26]. Compared with healthy individuals, low ACE2 and high ACE expression have been shown in both the tubulointerstitium and glomeruli of patients with type 2 diabetes and overt nephropathy [27]. Hyperglycemia may activate the intrarenal RAAS within glomeruli and proximal tubules, thereby triggering the production of local Ang II, which may cause feedback inhibition of systemic renin release [9]. Accordingly, high intrarenal Ang II production might explain the low PRA in patients with DN [13]. Furthermore, greater Ang II generation contributes to the progression of DN through several hemodynamic, tubular, and growth-promoting effects [19]. Given these findings, the declining kidney function observed in patients with low PRA in the present study might be explained by the link between low PRA and activation of the intrarenal RAAS, which generates Ang II. Several studies have addressed the association between the intrarenal RAAS and proteinuria/albuminuria in patients with diabetes. Proteinuria positively correlates with the mRNA expression of ACE and ACE2 in the urinary tract of patients with DN [31]. In patients with type 1 diabetes, a high urine albumin/creatinine ratio is associated with high urinary tract angiotensinogen and ACE activities [18]. Urinary tract angiotensinogen is higher in patients with type 2 diabetes than in healthy individuals, and it progressively increases as they transition from normo- to micro- to macroalbuminuria [32]. Furthermore, Sawaguchi et al. demonstrated that urinary tract angiotensinogen positively correlates with the urine albumin/creatinine ratio and that high urinary tract angiotensinogen expression is associated with a decline in kidney function in patients with diabetes. In addition, high urinary tract angiotensinogen expression may be associated with greater intrarenal RAAS activation in such patients [33]. Therefore, substantial proteinuria or albuminuria may be associated with high intrarenal RAAS activity. Furthermore, a previous study demonstrated that PRA is significantly lower in patients with diabetes and macroalbuminuria than in those with normo- or microalbuminuria [17]. In the present study, lower PRA was also found to be associated with greater proteinuria (Tables 1 and 2), which might reflect activation of the intrarenal RAAS. It has been demonstrated that high plasma prorenin activity is common in patients with diabetes and that a high plasma prorenin concentration is associated with microalbuminuria [34] and DN [35]. The (pro)renin receptor ((P)RR) is a single transmembrane protein that binds renin and prorenin with equal affinity and is widely expressed, including in the brain, heart, liver, and kidney [36]. The binding of prorenin to the extracellular domain of (P)RR causes non-proteolytic activation of prorenin [37], which accelerates the conversion of angiotensinogen to Ang I. Therefore, (P)RR plays an important role in tissue Ang II generation [36]. Ichihara et al. showed that diabetic rats have significantly lower PRA and higher prorenin concentrations than control rats, and also that the development and progression of DN are associated with more marked increases in kidney Ang I and II concentrations.
Moreover, treatment with the "handle region" peptide of prorenin, which acts as a decoy peptide to inhibit the non-proteolytic activation of prorenin, reduces the renal concentrations of Ang I and II and inhibits the development of DN. These findings suggest that the non-proteolytic activation of prorenin contributes to the activation of the intrarenal RAAS [38]. In contrast, soluble (P)RR is generated intracellularly by furin-mediated cleavage, is found in rat and human plasma, and can bind prorenin [39]. In patients with essential hypertension, the soluble (P)RR concentration positively correlates with urinary tract angiotensinogen expression [40], which is a biomarker of the intrarenal RAAS in these individuals [41]. It has also been suggested that soluble (P)RR might represent a marker of intrarenal RAAS activation in patients with diabetes [42]. Mineralocorticoid receptors (MRs) are expressed in various tissues in humans, including the kidney [43]. The pathophysiological implications of an increase in MR expression and activation (either aldosterone-dependent or direct ligand-independent activation) and of its blockade have been documented in in vitro and in vivo experimental studies [44]. The small GTPase Rac1 has been identified as a ligand-independent modulator of MR activity [45]. In hypertensive mice, high salt intake led to hypertension and kidney injury and simultaneously reduced PAC but activated the renal Rac1-MR cascade, which suggests that this alternative mode of MR activation contributes to salt-induced kidney injury [46]. In contrast, renal MR expression increases in diabetic rats [47]. It has been reported that high glucose stimulates MR transcriptional activity via Rac1 in a ligand-independent manner [48]. In patients with DN, the albuminuria-lowering effect of the MR blocker spironolactone is independent of both the baseline levels of and changes in PAC [49]. In addition, a previous study demonstrated that hyperglycemia in diabetes, independent of PAC, induces podocyte injury through MR-mediated reactive oxygen species (ROS) production and leads to proteinuria in diabetic rats, and that spironolactone prevents kidney injury by reducing ROS production [50]. In addition, it has been suggested that the activation of the Rac1-MR pathway by a high glucose concentration might explain the relationship between the diabetic milieu, kidney injury, and MR-mediated ROS production, independent of PAC [48]. In response to myocardial stretching, atrial natriuretic peptide (ANP) and BNP are secreted from cardiomyocytes into the circulation [51], and high ANP and BNP concentrations are indicative of volume overload [52,53]. In a previous study, patients with diabetes who had poor glycemic control had a higher ANP concentration and lower PRA than those with moderate glycemic control or healthy individuals, and the ANP concentration was inversely related to PRA in patients with diabetes [54]. Similarly, another study showed that patients with diabetes have higher ANP concentrations and lower PRAs than healthy individuals [14]. In addition, high tubular Ang II activity, which may result from hyperglycemia-induced angiotensinogen synthesis in proximal tubular cells, can directly stimulate sodium and water reabsorption, thereby promoting extracellular fluid volume expansion and subsequently inhibiting the release of juxtaglomerular cell-derived renin into the circulation [9]. Thus, low PRA may reflect a fluid retention status.
In the present study, CTR, which is commonly used to assess volume status [55], was measured alongside the BNP concentration. Univariable linear regression analyses showed that both log BNP and CTR were inversely associated with log PRA, suggesting that patients with low PRA tended to be in a state of volume overload. We also conducted sensitivity analyses of data from 91 participants for whom BNP data were available. After adjustment by the addition of log BNP to the other covariates included in Model 3 (Tables 3 and 4), low PRA was found to be associated with poor renal outcomes (Supplementary Table 1). Hence, low PRA may indicate a significant risk of kidney disease progression in patients with DN, independent of fluid retention. In general, the administration of diuretics increases PRA by causing water and electrolyte depletion, but in the present study, the prevalence of diuretic administration increased from T3 to T1 of PRA. Given the possibility that patients with lower PRA had volume overload, the higher prevalence of diuretic use in participants in the lower tertiles of PRA might be attributable to a higher prevalence of edema in patients with low PRA, although we could not assess the edema of each participant at enrollment. In the present study, the prevalence of RAAS inhibitor use at baseline did not differ among the tertiles of PRA. The median PRA also did not differ between participants who were or were not using an RAAS inhibitor at baseline. However, the differences between the two groups were difficult to evaluate because the number of participants who were not taking an RAAS inhibitor was much lower than the number who were taking such medication. Nevertheless, these results suggest that RAAS inhibitor use did not affect PRA in participants with DN at baseline. To reduce the influence of the use of RAAS inhibitors at baseline on subsequent kidney disease progression, we conducted sensitivity analyses to further investigate the association between PRA and kidney disease progression in the 87 participants who were using an RAAS inhibitor (Supplementary Table 2). Low PRA was found to be independently associated with kidney disease progression, similar to the results shown in Tables 3 and 4. Therefore, patients with low PRA were at a higher risk of a decline in kidney function despite their use of RAAS inhibitors. The present study had some limitations that should be acknowledged. First, all the participants were recruited from a single regional hospital. Therefore, the sample was fairly homogeneous and the sample size was relatively small, which might imply selection bias. We calculated the required sample size using Fisher's z test, comparing one correlation to a reference value. For a significance level of 0.05, a power of 0.8, and a correlation coefficient of 0.300, the estimated required sample size was 85 (see the short calculation sketch after this section). In the present study, the correlation coefficient for the relationship between log PRA and the annual change in eGFR was 0.407, as shown in Fig. 1. Therefore, a sample size of 97 should have been appropriate for the evaluation of this relationship. Second, although DN was clinically diagnosed, a histological diagnosis was not documented in 94 participants. Further studies are also needed to investigate whether PRA is related to the degree of glomerulosclerosis or tubulointerstitial damage. Third, we could not evaluate the duration of RAAS inhibitor use or the dose, which could affect PRA, before enrollment.
Fourth, we could not assess whether a sufficient quantity of the hospital diet was consumed by each participant. Therefore, it is possible that some of the participants did not consume enough of the diet, such that their salt intake may have been underestimated. In addition, 24-h urine collections were performed between the first and second days of admission, and this timing does not fully exclude the possibility that the 24-h urinary salt excretion measured in the present cohort was affected by dietary sodium intake prior to admission. Fifth, the findings of the present study should be interpreted with some caution. Given the role of PRA in the regulation of fluid volume, it remains unknown whether low PRA is the cause or the consequence of greater proteinuria, low serum albumin, high BNP, high CTR, and high systolic blood pressure. In addition, in the present cohort, the prevalence of advanced kidney dysfunction (eGFR < 30 mL/min/1.73 m²) at baseline was high, as shown in Table 1. Accordingly, when exploring the risk factors for poor renal outcomes in the present cohort, which included a large number of participants with advanced kidney dysfunction, it should be borne in mind that those with advanced kidney dysfunction would have been at a fundamentally higher risk of progression to ESRD during the 2 years of the study. Finally, a single PRA measurement may not have been a sufficiently accurate means of predicting renal outcomes.

In conclusion, low PRA is associated with poor renal outcomes in patients with DN, which suggests that PRA may represent a useful predictor of the renal prognosis of such patients.
2021-12-12T16:22:52.250Z
2021-12-11T00:00:00.000
{ "year": 2021, "sha1": "0e060d41c87b1f1673fc28f20d9d17144c6efbaf", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/endocrj/advpub/0/advpub_EJ21-0608/_pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "0ae1e97dab5cc3169f040c4bea320a43b33684ee", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
220481480
pes2o/s2orc
v3-fos-license
BIM–ENERGY SIMULATION APPROACH FOR DETECTING BUILDING SPACES WITH FAULTS AND PROBLEMATIC BEHAVIOR

Heating and cooling consume most of the energy in buildings. Faults and problems in HVAC systems waste up to 20% of heating and cooling energy. Identifying spaces with HVAC problems within a facility remains a major challenge for facility managers. This study aims to detect spaces with potential problems that cause energy overconsumption, human discomfort, or HVAC work overload. To achieve that, a Building Information Model (BIM)-based framework that combines the output data of building energy simulations, Building Energy Management Systems (BEMS), and Computerized Maintenance Management Systems (CMMS) is proposed. The framework enables BIM components to utilize data collected by the other systems to determine the intended energy performance and compare it with the actual energy performance, as well as to provide access to the maintenance history and BEMS alarms that occurred in the building at the element level. The framework was tested using data collected from an educational building over a one-month period when the building was unoccupied, to prevent occupant behavior from influencing the results. Experimental results show that the framework enabled identification of building spaces with abnormal or malfunctioning behavior that was not detected by the BEMS. This study supplements the body of knowledge in facilities energy management by providing a BIM-based framework that utilizes the output data of energy simulation, BEMS, and CMMS to locate and detect building spaces with potential problems that need maintenance. Furthermore, it enables facility managers to collect and view relevant data from various systems in one central platform: BIM. It also allows them to adjust their maintenance plans based on the poor behavior of specific spaces within their building.

INTRODUCTION

Buildings are responsible for 30% of the global energy consumption (IEA 2008). Heating, Ventilation, and Air Conditioning (HVAC) accounts for approximately 40% of buildings' energy consumption (USDOE 2011). However, 5% to 20% of HVAC energy consumption goes to waste due to faults and lack of maintenance (Roth et al. 2005). Therefore, it has become even more important for facility managers to find more efficient ways of managing building energy (Bush and Maestas 2002). However, facility managers face many challenges in achieving their goals (Jensen and Tu 2015), which include identifying problematic spaces in a facility in an efficient manner, isolating different types of problems, prioritizing their impact, and developing solutions for these problems (Zhu 2006). Energy simulations can be very effective in helping to reduce energy consumption in buildings (Kim et al. 2016). However, building energy analyses are mostly conducted during design stages, and the results of these analyses are not typically used during building operations. Facility managers' ability to identify problematic areas and isolate problems is limited due to numerous interconnected Facility Management (FM) systems and their multilayer information content. Nevertheless, Building Information Models (BIM) provide facility managers with an opportunity to manage and coordinate the information collected from these systems. In addition, BIM supports the engagement of multiple stakeholders and enables the collection of various information throughout the project life cycle.
This study proposes a new BIM-centric framework that enables identification of building spaces with undesired energy performance and supports facility managers in providing proper maintenance in an efficient manner. It evaluates the energy performance by comparing Building Energy Management System (BEMS) monitoring data with energy simulation results within the BIM environment. In this framework, BIM coordinates data collected by the BEMS, geometry data stored in BIM, maintenance data stored in the Computerized Maintenance Management System (CMMS), and energy simulation results generated by the EnergyPlus™-based DesignBuilder software. Detailed tasks include (1) establishing a methodology to collect BEMS data and use it for energy simulations, and (2) developing a framework to identify building spaces with problematic behavior and specifying possible causes. To assess the feasibility of the framework, data was collected from an unoccupied building; areas with problematic behavior were located, and possible causes were identified using its maintenance history data. This paper is organized as follows: a comprehensive literature review on BIM for building energy management, building energy management systems, BIM for FM, and current maintenance practices is provided in the next section. The framework is detailed in the following section. The next section then presents the results of the experiments conducted. The last section draws conclusions and discusses future research needs.

BIM for building energy management

Energy modeling is a complex and time-consuming task (Crawley et al. 2008; Hong et al. 2020; Im et al. 2020; Song et al. 2019) because of the process of gathering and accurately entering the necessary building description data that is required for simulation. Traditionally, the modeler enters all the data manually to describe the building. It is important to note that modelers make simplifications to the proposed building geometry to minimize the complexity of the energy modeling and information gathering. Programs such as DOE-2.2 and EnergyPlus™ that were developed and used to predict energy consumption in buildings require laborious data entry and are complex to learn and use (Heiple and Sailor 2008; Zhu 2006; Kim et al. 2019). Currently, BIM is used to efficiently plan, design, and construct buildings. Energy modeling requires data such as R-values, conductivity, and thicknesses, and BIM provides a database that can include such data (Shalabi and Turkan 2016) for all building elements such as exterior and interior walls, roofs, windows, doors, and floors, as well as their orientation. Different energy simulation accuracy levels can be achieved based on the BIM Level of Detail (LOD). The most accurate simulation can be achieved after completing the building design and finalizing the construction decisions. When used with other Facility Management (FM) systems, conducting energy simulations while operating the building not only helps in detecting energy overconsumption, but also helps define the causes of that overconsumption (Al-Shalabi and Turkan 2015). Such causes may include lack of maintenance for one or more building elements, users' behavior, or both.
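To make the kind of envelope data involved concrete, the following minimal sketch (with illustrative layer values, not data from any building discussed here) computes the overall R-value of a wall assembly from the layer thicknesses and conductivities that a BIM typically stores:

```python
# Minimal sketch: overall R-value of a wall assembly from layer thickness (m)
# and conductivity (W/m.K), as could be exported from a BIM. Layer values are
# illustrative only.
layers = [
    {"name": "brick",      "thickness_m": 0.100, "conductivity_w_mk": 0.72},
    {"name": "insulation", "thickness_m": 0.080, "conductivity_w_mk": 0.035},
    {"name": "gypsum",     "thickness_m": 0.013, "conductivity_w_mk": 0.25},
]

# Each layer contributes thickness / conductivity; the assembly R-value is the
# sum (surface film resistances omitted for brevity).
r_total_si = sum(l["thickness_m"] / l["conductivity_w_mk"] for l in layers)
r_total_ip = r_total_si * 5.678  # convert m2.K/W to h.ft2.F/Btu

print(f"Assembly R-value: {r_total_si:.2f} m2.K/W ({r_total_ip:.1f} h.ft2.F/Btu)")
```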
The previous research on BIM for building energy management can be categorized into three groups: (1) studies that developed methods and algorithms to use BIM to predict energy performance depending on the results obtained from energy simulation tools such as DOE and EnergyPlus™; (2) studies that investigated data exchange between BIM and energy simulation tools; and (3) studies that developed applications for using BIM for energy management. The first group focused on developing BIM-based methods and algorithms for energy modeling during the design phase (Farzaneh et al. 2019). Cho et al. (2010) developed a strategy that uses BIM technology to include sustainable fixtures in energy generation prediction. Another study focused on optimizing energy performance using a multi-objective genetic algorithm that uses the results from BIM-based energy simulation (Chen and Gao 2011). Raheem et al. (2011) used BIM to analyze the annual energy consumption and CO2 emissions of a single house. Kim et al. (2013) developed an IFC-BIM based energy simulation process that runs in DOE-2.2. In addition, they developed a semantic material name matching system that finds standardized material names and their associated material property values. Several researchers focused on developing requirements and guidelines for using BIM for building energy modeling and management. Such studies include developing guidelines for using BIM for building energy modeling (Reeves et al. 2012), and studying key BIM-server requirements for information exchange in energy efficient building retrofit projects (Jiang et al. 2012). In a different line of work, Oh et al. (2011) developed a method that uses EnergyPlus™ and genetic algorithms to determine the optimal design option for various glazing options. In addition, they developed an application to export data from gbXML to an EnergyPlus IDF file. These studies complement and fall within the same scope as the study presented here in terms of developing energy models using BIM. However, this study differs from the previous work by focusing on using BIM and energy simulations for actively managing building energy performance during the building operation phase. The second group investigated information exchange between HVAC systems and energy simulation tools (Kamel and Memari 2019). Bazjanac (2008) investigated the interoperability between IFC-BIM and building energy analysis tools. This work focused on transferring geometry and HVAC information from IFC-BIM into EnergyPlus™. O'Sullivan and Keane (2005) presented a graphical user interface to input necessary data about HVAC systems into a BIM-based building energy simulation tool using the IFC format. These studies are similar to the work presented here in terms of the methods they use to develop the energy model. However, they differ from this study as they developed methods for information exchange, which is not in the scope of this study. The third group focused on the applicability and usability of building simulation tools in different life-cycle stages of a building (Andriamamonjy et al. 2019). Katranuschkov et al. (2014) developed an energy enhanced BIM (eeBIM) framework with the goal of closing the gap between existing data and tools from building design and operations to enable efficient life-cycle energy performance estimation and decision-making.
Attia (2010) conducted a survey on the selection criteria for building simulation tools among various stakeholders of construction projects, and the results showed a broad range of differences between designers and simulation tools. Difficulties for industry practitioners in implementing BIM are described by Arayici et al. (2011), and include reinventing the workflow, training staff, assigning responsibilities, and changing the way buildings are modeled. Katranuschkov et al. (2014) described the importance of developing a framework that enables the integration of multiple resources (e.g., weather, occupancy, material data, etc.) and the interoperability between energy analysis, cost analysis, CAD, FM, and building energy monitoring tools. They also highlighted the importance of combining various construction and FM related data in a typical BIM so that it can be efficiently applied to tasks such as energy simulations and various FM tasks. Kim et al. (2016) built on this work by developing a model for mapping IFC-BIM material information to building energy analysis. Shalabi and Turkan (2016) developed an approach for optimizing data collection from IFC-BIM to be used for corrective maintenance actions. However, none of these studies considered using energy simulation techniques for energy management during the building operation phase. This study builds on the work in this group by developing an approach that integrates energy simulation results, actual energy performance monitored by BEMS, and other FM data such as maintenance records to move toward more active building energy management and maintenance.

Building energy management systems

BEMS adjust and control buildings' HVAC and lighting equipment to manage their environment while optimizing their energy performance and occupants' thermal comfort. A BEMS is defined as a collection of microcomputer systems consisting of Direct Digital Controllers (DDC) and their control devices, which operate collectively under supervisory control equipment and software. Their capabilities include data sharing with individual controllers for coordination and optimization, linking control processes, and performing operation tasks and reports (Doty and Turner 2012). The BEMS is connected to building sensors and controllers that report any flaws or dysfunctions in the system or its equipment. Building controllers send feedback to the BEMS or Building Automation System (BAS) if any of the equipment is not working properly. Facility managers receive alarms from the BEMS about any dysfunctions or failures, and they can monitor, change any benchmark, or override the system decisions. When maintenance or replacement is needed, facility managers report the problem to the maintenance personnel, who in turn typically search the CMMS to locate, inspect, and gather the required maintenance information regarding that element. Facility managers work to achieve and maintain the planned operational performance of buildings, and to guarantee an up-to-date maintenance status of the HVAC equipment, which is dependent on continuous feedback from the building sensors, controllers, and energy management strategies during the building operation phase. Energy performance of buildings deteriorates over time due to several reasons, including lack of prompt response to faults and alarms reported by BEMS, imprecise commissioning, and BEMS malfunctioning. This results in energy waste, and causes occupant discomfort and complaints (IFMA 2013).
BEMS report several types of data that are recorded by FM information systems. The data reported include weather and energy use (e.g., temperature, CO2, zone airflow, daylight levels, occupancy levels, etc.), alarm monitoring and data collected from sensors (e.g., equipment failure, high and low temperatures, defective sensors, and communication problems), and controllers (e.g., air handler unit controllers, valve controllers, and fan controllers) (Doty and Turner 2012). Typically, DDCs are numbered and organized based on their type, function, and location in the building, and presented in list format. However, data about their exact locations, the equipment affected by them, and their maintenance history are stored in different systems. Furthermore, building performance metrics such as sensor outputs and energy performance metrics are presented in 2D histograms, tables, and lists of tasks, or in similar formats, which requires tedious data extraction and interpretation processes to benefit from this data. A BEMS hosts the results of Fault Detection and Diagnostics (FDD) analysis and presents them to facility managers (Dong et al. 2014). Several FDD approaches have been developed to identify faults and deterioration in building equipment (Dong et al. 2014; Qin and Wang 2005; Sallans et al. 2006; Schein et al. 2006; Wang and Xiao 2006; Xiao 2004). This study differs from FDD approaches as it analyzes energy simulation results using real weather data measured by the building systems and then compares the results to the actual energy performance of the building.

BIM Implementation in FM

FM personnel manage HVAC systems and other building components using multiple systems. Their goal is to maintain a thermally comfortable environment for occupants, and to guarantee the functionality of the building while remaining within their operating budget. Two of the major systems used in FM practice are BEMS and CMMS. FM systems interact with multiple users and stakeholders directly and indirectly during building operations, including occupants and FM staff (Roper and Payant 2014). Occupants' actions affect the building energy consumption, and the faults reported by the BEMS concern facility managers (Doty and Turner 2012). Some well-known problems caused by occupants include the use of space heaters during winter, which wastes cooling power while increasing the plug loads (Beltran et al. 2013), and the blocking of thermostats and sensors with furniture or appliances, which gives false readings to FM systems. The lack of manpower in FM greatly affects the maintenance and energy consumption of a building (Roper and Payant 2014; Teraoka et al. 2014). As a result, building operators feel overwhelmed by the number of fault alarms they need to address; thus, they focus only on critical faults and complaints made by occupants. Furthermore, facility managers may find temporary fixes that resolve the issue temporarily but lead to more energy waste or allow other related faults to emerge (Teraoka et al. 2014). Throughout the facility life cycle, BIM supports a multi-domain and multi-layer collaborative approach, and engages multiple stakeholders in the project, including architects, engineers, and contractors, as well as facility managers and operators. Using BIM leads to decreased information loss during a project's lifecycle (Eastman et al. 2011; Al-Shalabi and Turkan 2015). Effective sharing of data between various stakeholders is among the capabilities of BIM, which has been proven for the design and construction phases.
However, effective use of BIM for the operations and maintenance phase has not yet been achieved, thus BIM adoption in FM is still in its early stages (Kelly 2013). This is mainly due to the limited awareness among FM professionals about the expected BIM benefits for FM, the lack of data exchange standards, and unproven productivity gains illustrated by case studies. BIM benefits that are sought during the operations phase include extracting and analyzing data for various needs to support and improve decision making processes (Azhar 2011). Furthermore, BIM use in FM applications can provide faster access to data and improve the process of locating facility elements via its user-friendly 3D interface, which helps increase the efficiency of work order executions (Kelly 2013). In addition, carrying BIM from design to the operations phase would allow BIM to support all activities throughout the building's life cycle (Fallon and Palmer 2007). Previous research on BIM use in FM developed BIM-based frameworks to streamline the existing processes and systems. Such studies include an augmented reality based system for operations and maintenance (AR-based O&M) support (Lee and Akin 2011), a 2D barcode and BIM-based facility management system (Lin, Su, and Chen 2012), and a 3D BIM-based facility maintenance and management system (Chen et al. 2013; Lin and Su 2013). These studies complement the research presented here in terms of streamlining the existing FM processes and systems. However, this study differs from the previous work as it uses energy simulations and energy performance monitoring to improve building energy management by detecting systems' dysfunctions. Several other studies developed BIM-based approaches to replace current processes for capturing, storing, and retrieving facility data in an efficient manner. Such studies include using BIM to generate customized templates to capture maintenance work related changes (Akcamete 2011), a knowledge-based BIM system that uses case-based reasoning for building maintenance (Motawa and Almarshad 2013), fault-tree analysis for failure root cause detection (Lucas et al. 2012; Motamedi et al. 2014), and using BIM for HVAC troubleshooting (Yang and Ergan 2015). However, none of the studies in this group focused on developing an approach that provides facility managers with proactive solutions to improve the performance of their buildings. While BIM is sought to benefit FM practice, there are still many challenges regarding BIM implementation in FM. Two of the major challenges that prevent BIM implementation in FM include the unproven productivity gains that can be realized from reduced equipment failure, as well as the productivity increases that may be realized through an integrated platform (Becerik-Gerber et al. 2011). Furthermore, fragmented data, data interoperability, and lack of data transparency throughout the building life cycle are among some of those challenges.

Maintenance in FM

Maintenance can be preventive, corrective, or predictive. Corrective maintenance is considered a reactive type of maintenance that responds to a failure or a breakdown (Motawa and Almarshad 2013). Preventive and predictive maintenance are considered proactive maintenance that prevents a failure or a breakdown of building equipment (Palmer 1999). Preventive maintenance is scheduled and predefined for regular intervals to guarantee continued optimal performance (Rikey and Cotgrave 2005).
Unlike corrective maintenance, preventive maintenance reduces non-planned work and allows estimating the overall maintenance budget (Flores-Colen and de Brito 2010). Predictive maintenance is a condition-based maintenance that is useful for reducing life-cycle costs and achieving more efficient maintenance budgets (Hermans 1995). Corrective maintenance is usually an emergency action that leads to unavoidable extra costs. It is important to minimize the occurrences of this type of maintenance (Flores-Colen and de Brito 2010). The framework described in this paper aims to help achieve predictive maintenance benefits and reduce corrective maintenance occurrences.

BIM-CENTRIC FRAMEWORK FOR DETECTING BUILDING SPACES WITH FAULTS AND PROBLEMATIC BEHAVIOR

Facility managers depend on various facility management systems to operate their buildings efficiently, with minimum shutdowns. Due to the complexity of buildings, the massive amount of data collected from facility management systems, and the multiple factors, such as normal wear and tear in building elements, building users' behavior, and degradation in equipment, that affect a building's performance, it has become cumbersome for facility managers and building operators to identify and specify spaces with abnormalities or malfunctions in building systems. The main objective of the approach described here is to detect building spaces with abnormalities or malfunctions in the building's systems that are causing excess energy consumption, human discomfort, or work overload on HVAC systems. The nature of such faults is usually hidden and undetected by BEMS alarm systems. However, such faults affect the heating and cooling equipment's energy consumption considerably. Therefore, the focus of this study is on developing a BIM-centric framework that integrates data from BEMS and CMMS systems as well as energy simulations, which enables identification as well as visualization of spaces with heating and cooling equipment that is not functioning properly, i.e., over-consuming energy, in the BIM environment. Corrective maintenance actions are critical for building performance, as such faults in building systems may cause losses in equipment, affect occupants' comfort, and result in unexpected maintenance or replacement costs. The framework described in this paper enables identification and visualization of building spaces with degraded or malfunctioning equipment while also providing information on the maintenance history of that equipment in the BIM environment. Since BIM is not all-inclusive, data can be aggregated from other FM systems as needed and included in BIM (Figure 1). This would allow facility managers to compare, analyze, and visualize information collected from various FM systems to identify and visualize any faults in building systems and their potential causes.

FIG. 1: Role of BIM in the Framework.

As mentioned above, BIM coordinates three different types of data, namely the outputs of BEMS, CMMS, and energy simulations. The BEMS records and keeps interior and exterior weather data that are considered essential to run accurate energy simulations. In addition, it records heating and cooling patterns by controlling heating and cooling outlets such as radiator valves, Terminal Air Box (TAB) fan speeds, and fresh air intake.
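As an illustration of what such coordination could look like in practice, the sketch below aggregates the three data sources into per-space records keyed by a BIM space identifier. The structure, field names, and sample values are hypothetical illustrations, not the authors' implementation or any particular BIM, BEMS, or CMMS schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SpaceRecord:
    """Per-space container keyed by the BIM space identifier."""
    space_id: str
    bems_alarms: List[str] = field(default_factory=list)                 # exported BEMS alarms
    cmms_work_orders: List[str] = field(default_factory=list)            # maintenance history entries
    simulated_heating_kbtu_h: List[float] = field(default_factory=list)  # simulated demand series
    actual_heating_kbtu_h: List[float] = field(default_factory=list)     # actual consumption series

def mean_discrepancy(rec: SpaceRecord) -> float:
    """Average difference between simulated demand and actual consumption (kBTU/h)."""
    pairs = list(zip(rec.simulated_heating_kbtu_h, rec.actual_heating_kbtu_h))
    if not pairs:
        return 0.0
    return sum(s - a for s, a in pairs) / len(pairs)

# Usage: attach series exported from the energy model and the BEMS to one space.
space3 = SpaceRecord("Space-03",
                     simulated_heating_kbtu_h=[1.2, 1.4, 1.1],
                     actual_heating_kbtu_h=[1.1, 1.3, 1.2])
print(f"Space-03 mean (simulated - actual): {mean_discrepancy(space3):.3f} kBTU/h")
```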
BIM can store valuable and essential information including energy simulation output data, building geometry, material properties, wall assemblies, properties of HVAC systems and components, as well as building operation strategies and schedules (Katranuschkov et al. 2014; Kim et al. 2016; Maile, Fischer, and Bazjanac 2007). In this study, BIM stores and visualizes data input from the BEMS and CMMS systems as well as energy simulations to determine building equipment's behavior and maintenance needs. In this study, the complexity of the studied building and its systems were the determining factors in choosing EnergyPlus over other energy simulation tools such as TRNSYS or eQuest. EnergyPlus enables defining building components in detail, and it is capable of combining multiple systems in the simulation, thus providing more realistic simulation results. EnergyPlus has proven its reliability in modelling multi-zone buildings from public housing (Xu et al. 2014) to airports (Griffith et al. 2003). Developing and validating an energy model of a multi-zone building can be a challenging task. In this study, the energy modeling process was conducted in three stages. The first stage involves acquiring or developing a 3D as-built geometry of the building. Since there was no readily available 3D as-built model of the studied building, the research team built one from its LiDAR data (3D point cloud) to guarantee an accurate model that closely represents the as-built conditions (this process is commonly referred to as Scan-to-BIM) (Bosché et al. 2015; Volk et al. 2014). The data collected include the locations of all architectural elements, HVAC elements, pipes, electrical plugs, sensors, and thermostats. The second stage involves collecting non-graphical data on the building materials and systems. Such data include detailed envelope composition, material properties, O&M manuals, nominal powers for the main HVAC system components such as the boiler and the Air Handling Unit (AHU), and schedules for equipment and occupants. Stage three involves adding weather data from the local weather station that is located on the roof of the studied building, which is connected to the BEMS, as well as adding the information included in the commissioning documents to the energy model, and finally calibrating the energy model against the as-designed energy model. The reason behind the calibration process is to ensure the accuracy of the generated energy model. Figure 2 presents the framework, which consists of three major levels that are detailed below.

FIG. 2: Overview of the Framework.

• Building Information Level: At this level, building data is collected, retrieved from different systems, and stored in BIM as detailed in Shalabi and Turkan (2016). It includes building geometry, materials and assembly, BEMS alarms, and CMMS data. Building geometry and assembly information are typically stored in BIM, while BEMS and CMMS data need to be collected and temporarily stored in BIM to identify building spaces with equipment that is not functioning properly.
• Energy Simulation Level: Weather data that was collected and stored by the BEMS in the previous level is used at this level. The weather data includes exterior dry bulb temperature, relative humidity, dew point, atmospheric pressure, wind speed, and wind direction. This data is used to create the weather file that is needed to run the simulation.
In addition, building information including building orientation, openings, HVAC systems, material conductivity, wall assembly, and thicknesses from the previous level is used to develop the energy model. The energy simulations are then performed, and the results are reported to the next level.
• Analytical Comparison Level: At this level, actual heating and cooling patterns are compared with the heating and cooling load results of the energy simulations obtained for each space. A discrepancy or a major flaw between the two highlights the need for a closer observation of that particular space. This will allow facility managers to have a better idea about the potential causes of the fault, since they will be looking at a specific area depending on the nature of the simulation result and the information collected from BEMS and CMMS.

Level 1: Building Information Aggregation

Data and information from multiple systems are needed to manage and operate a facility. In this framework, building geometry and material data are stored in IFC-BIM from the handover and commissioning phase. All thermal properties of wall assemblies can be stored in IFC-BIM as an IFC-PROPERTY-SET with different properties as IFC-PROPERTY-SINGLE-VALUEs. Such data is automatically generated by BIM software (e.g., Revit) when provided during the modeling process. Data that are collected from other systems, such as BEMS and CMMS, are first exported into Excel format manually, and then aggregated into IFC-BIM automatically. Accurate evaluation of building equipment energy consumption requires recording local weather measurements, which is the norm in most modern BEMS. Therefore, three types of data are exported from the BEMS, including alarms caused by equipment faults, actual heating and cooling system loads, and weather data (e.g., external dry bulb temperature), and utilized in the BIM and energy modeling process. Figure 3 illustrates this level in detail.

Level 2: Energy Simulation

Energy simulation tools such as EnergyPlus™ and DOE-2.2 are dependable but not very user-friendly tools. At this level (Figure 4), data collected from other systems and stored in BIM are used to develop the energy model. The energy simulation tool is capable of utilizing properties of the building envelope such as wall thicknesses, assemblies, and different conductivity values from IFC-BIM. In addition, various occupancy schedules and densities are input into the energy simulation tool. Typically, all occupancy schedules and densities are taken from class schedules, which are updated every semester. However, the studied building was unoccupied at the time of data collection. Thus, such data was not included in the energy simulation in this study. The EnergyPlus™-based user interface software DesignBuilder was used to run the energy simulations.

FIG. 4: Level 2 - Energy Simulation Level.

HVAC thermal zones are divided into smaller spaces reflecting the actual HVAC outlets (e.g., TABs and radiators). This simulation, which corresponds to the actual as-built BIM data and uses actual occupancy schedules and densities (an empty building in this case), differs from the energy simulation that is conducted during the design stage at the macro level. The energy model used for this simulation is tuned to match the actual operating schedules and the various set points (e.g., temperature, humidity, CO2, etc.) of the BEMS that controls the building's climate.
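As a minimal illustration of the weather-data step described above (not the authors' tooling), the sketch below writes hourly BEMS weather readings to a CSV file that a weather-file converter or a custom simulation workflow could then consume; the column names and the sample rows are assumptions made for the example:

```python
import csv

# Hourly readings exported from the BEMS weather station (illustrative values only).
bems_rows = [
    # timestamp,            dry bulb (C), RH (%), dew point (C), pressure (Pa), wind speed (m/s), wind dir (deg)
    ("2015-12-01 00:00:00", -2.1,         78.0,   -5.3,          98800,         3.4,              270),
    ("2015-12-01 01:00:00", -2.6,         80.0,   -5.5,          98790,         3.1,              265),
]

header = ["timestamp", "drybulb_C", "relhum_pct", "dewpoint_C",
          "pressure_Pa", "windspeed_m_s", "winddir_deg"]

# Write a simple CSV that downstream weather-file tooling can pick up.
with open("bems_weather.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(bems_rows)

print("Wrote", len(bems_rows), "hourly records to bems_weather.csv")
```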
Level 3: Analytical Comparison

A certain amount of energy is needed to heat or cool a given building space. Energy simulations utilize building operation and BEMS schedules to produce detailed energy consumption loads, i.e., separate heating and cooling loads, for each building space with an HVAC outlet. For the HVAC system, each energy outlet, such as a radiator or a cooling air duct, will diffuse a certain amount of energy either through heating or cooling the space. This amount is controlled by the BEMS. Furthermore, the BEMS tracks the duration and amount of energy used for each piece of HVAC equipment, such as the amount of energy provided by heating radiators. Building energy simulations utilize the same information that is fed into the BEMS. Based on this information, the energy model will simulate the proposed energy outcome for each of the outlets, and for each outlet, the simulated energy amount will be compared to the consumed amount. The system will identify the discrepancies, and each discrepancy will be flagged. The system will aggregate CMMS and BEMS data to present relevant information about the HVAC equipment for the spaces with discrepancies. The final analysis and decision-making fall on the facility manager. After performing an analytical comparison between the results, the following scenarios can be considered (Table 1). The first scenario deals with cases demonstrating constant or unresponsive behavior, i.e., the heating or cooling outlet does not respond to changes in heating and cooling demand. This may suggest a malfunctioning valve, a broken controller, or an operator override. The second scenario corresponds to cases with above-normal behavior, i.e., the heating or cooling outlet responds to the demand but excessively. This may suggest occupant behavior such as opening a window, a piece of furniture blocking a radiator, or a set point override in the BEMS. The third scenario deals with cases demonstrating below-normal behavior, i.e., the heating or cooling outlet responds to the demand but insufficiently. In this case, the heating or cooling does not satisfy the space needs. This may suggest broken sensors that report current temperatures, broken valves in the heating or cooling outlet, or an external heating source that affects the temperature sensors. Finally, scenario four examines irregular patterns, i.e., the actual consumption does not follow a pattern. In this case, the problem can be in the central unit, in the simulation itself, in the BEMS readings, or in the BEMS programming. However, the last scenario is not in the scope of this study. It should be emphasized that this framework is not designed to detect irregular behavior in a specific piece of equipment to improve FM tasks. Rather, it provides a methodology to monitor, maintain, and help reduce the energy consumption of a building. The framework does not detect the specific piece of equipment that causes energy overconsumption; however, it helps identify which space inside the building is performing poorly, along with the equipment connected to this space.

Building Description

King Pavilion is a two-story, 15,228 sq ft educational building housing design studios for architecture students at Iowa State University. Figure 5 presents its 3D view generated with DesignBuilder. The building is divided into 15 working spaces, and is heated by a central boiler and cooled by a central chiller. Every space is connected to a separate heating radiator and TAB to heat, cool, and ventilate the space (Figure 6).
In each space, there are sensors measuring humidity, dry bulb temperature, and CO2 for mechanical ventilation. All fifteen spaces are in the same thermal zone. Geometric information for the BIM of this building was developed from its laser scan point cloud captured using a Trimble TX5 laser scanner (Trimble 2012). Eight separate scans were taken to cover each floor of the building, sixteen scans for the whole building. The Autodesk Revit Scan-to-BIM plugin, a semi-automated modeling tool, was utilized to model HVAC and structural components from the point cloud accurately. In addition to the laser scans, several measures were employed to ensure an accurate representation of the building as is. Such measures include: (1) design reports and LEED documents provided by the architect, and (2) extensive surveys to capture and validate physical construction and equipment utilization, together with a review of the BEMS logs.

HVAC System

The building is heated with hot water radiators. The water is heated by a central boiler that serves multiple buildings on campus. The BEMS controls the hot water flow in the radiators. The required flow is based on a fixed set point that can be adjusted by the facility manager. The hot water is supplied to the radiators at a maximum of 215 °F, measured at the central boiler unit; this is the temperature assumed for calculating the actual demand. The radiator heating capacity is 1300 BTU/h per ft. The building is connected to one main AHU that is connected to a thermal wheel to recover waste heat. This arrangement eliminated the need for heating coils within the AHU, which provides conditioned ventilation air via a central single-duct forced-air system. Sensible cooling is provided using the central water chilling system that is connected to the AHU in the building. The chilled water temperature ranges between 50 and 55 °F. Its flow rate through the AHU varies according to the sensible cooling load. Each space in the building has a separate TAB that is connected to a valve and a vent that controls the amount of cool air provided to each specific space depending on the cooling load. This study focuses on heating; therefore, no cooling equipment or strategies are included in the analysis. Material data captured from the handover documents and the building commissioning verification results was uploaded to the BIM and then to the energy simulation software. Thermal properties imported from the BIM into EnergyPlus include thermal properties of architectural elements, such as Emittance, Permeance, and Resistance (R) values, and characteristics of the HVAC system (Table 2). Weather data for the energy simulation were recorded onsite using the sensors of the BEMS. Based on those sensor readings, the BEMS reacts and operates the heating and cooling equipment in the building. The energy simulation parameters were set according to the actual measured parameters under which the building is operated. The building is divided into fifteen spaces. Each space includes a hot water radiator, vents, and a TAB. Each TAB contains CO2, relative humidity, and dry bulb temperature sensors that read measurements of the returning air. All these details were included in the EnergyPlus simulation model.
Table 2 (excerpt): HVAC system: hot water radiators, VAV system with gas absorption chiller, gas-fired boiler. Energy recovery: sensible energy recovery, 94% effectiveness. Notes: (1) R is measured in h·ft²·°F/Btu, i.e., the hours needed for 1 Btu to flow through 1 ft² of a given thickness of a material when the temperature difference is 1 °F. (2) SHGC is the Solar Heat Gain Coefficient and represents the fraction of incident solar radiation admitted through a window by direct transmission and by absorption and release into the space.

The simulation results were compared with the Actual Heating Consumption (AHC) as measured by each radiator valve opening in that specific area or space. The length of the radiator differs from one space to another, resulting in a different amount of BTU/h delivered to each space. AHC in BTU/h is calculated as follows (Equation 1):

AHC = Rc × L × (V / 100)    (1)

where AHC is the Actual Heating Consumption (BTU/h), Rc is the radiator heating capacity (BTU/h per ft), L is the length of the radiator (ft), and V is the valve opening (%).

In 10 spaces out of 15, the heating radiator valve demonstrated regular behavior (Figure 7).

FIG. 7: Actual Heating Consumption vs. Simulation Heating Demand Comparison for Space #3.

The regular behavior of a radiator valve is defined as the valve responding to changes in the Simulated Heating Demand (SHD) by either opening or closing accordingly. Such demand is predicted in this framework using the energy simulation results for a given building or space. A discrepancy occurs when AHC does not respond in a similar pattern to the SHD, or responds in a completely different manner, e.g., AHC increases while SHD decreases, or vice versa. For Space #3 in Figure 7, the AHC behavior follows the pattern of the SHD with some delay. The ideal scenario is the case when the actual and simulated heating patterns match. However, the average difference between AHC and SHD was 0.016 kBTU/h over the one-month testing period, with SHD exceeding AHC. In the three-week testing period, AHC did not always follow the SHD immediately and took some time to adapt to the change in temperature and heating demand. On 12/10, the average daily temperature increased from 22 °F to 39 °F, indicating less heating demand. The SHD responded to this change by reducing the heating demand. However, the AHC acted in the opposite manner, reflecting an increase in demand instead. On 12/11, it returned to the regular pattern. The system repeated the same behavior on 12/22 and 12/28. On both dates, a sudden change in the average daily temperature occurred, and the temperature either dropped or increased significantly. This behavior may indicate a problem with the BEMS itself, suggesting that a calibration of the system is needed. It can also indicate an external factor that is causing the system to receive false readings. In both cases, further investigation and more testing are needed to define the causes of the odd behavior. The system developed in this research can define areas with problematic behavior. It also provides the facility manager with a comprehensive approach for identifying areas and components that need a closer look for maintenance. Note that when the heating demand increases, the valve opening increases to meet the heat demand, following the demand predicted by the simulation. This adaptation is the desired behavior of the heating system, as it heats the building as needed. Similarly, the actual valve decreases its opening following the demand predicted by the simulation. Facility managers have access to the AHC in the BEMS through 2D graphs, but not to the predicted consumption, i.e., the SHD.
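A minimal numerical sketch of this comparison is shown below. It computes AHC from the BEMS valve-opening log using Equation 1 and applies a crude rule to map a space onto the scenarios of Table 1; the tolerance, the sample data, and the mean-deviation rule are illustrative assumptions rather than the authors' implementation:

```python
def actual_heating_consumption(rc_btuh_per_ft, length_ft, valve_open_pct):
    """Equation 1: AHC (BTU/h) from radiator capacity, radiator length, and valve opening (%)."""
    return rc_btuh_per_ft * length_ft * (valve_open_pct / 100.0)

def classify_space(shd_kbtu_h, ahc_kbtu_h, tol_kbtu_h=0.5):
    """Crude mapping of a space onto the Table 1 scenarios (illustrative rule only)."""
    diffs = [a - s for s, a in zip(shd_kbtu_h, ahc_kbtu_h)]
    mean_diff = sum(diffs) / len(diffs)
    if max(ahc_kbtu_h) - min(ahc_kbtu_h) < 1e-6:
        return "scenario 1: unresponsive (constant output)"
    if mean_diff > tol_kbtu_h:
        return "scenario 2: above normal (overheating)"
    if mean_diff < -tol_kbtu_h:
        return "scenario 3: below normal (underheating)"
    return "regular behavior"

# Illustrative hourly values for one space (kBTU/h) and its BEMS valve-opening log (%).
shd = [1.0, 1.2, 0.9, 1.1]
valve_pct = [4.0, 4.5, 3.5, 4.2]
ahc = [actual_heating_consumption(1300, 20, v) / 1000.0 for v in valve_pct]

print("AHC series (kBTU/h):", [round(x, 2) for x in ahc])
print("Classification:", classify_space(shd, ahc))
```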
Without access to the predicted consumption, facility managers cannot compare the data they receive to an ideal behavior and detect undesirable behavior in the system. On the other hand, 5 out of 15 spaces in the building exhibited different behavior, which varied among not responding to heating demand variations at all, overheating, or underheating the spaces. In Figure 8, the radiator in Space #6 did not respond to energy demand variation at all, i.e., the radiator valve was closed all the time. The average difference between the AHC and SHD was 0.6 kBTU/h over the one-month testing period, with SHD exceeding AHC. Such behavior can indicate more energy savings in this space, but it compromises occupants' thermal comfort when the space is occupied or increases the load on adjacent spaces' equipment. A similar behavior was detected in Space #9 (Figure 9), where the radiator valve was not able to open more than 5%. On the contrary, in Spaces #7 and #10 (Figures 10 and 11, respectively), the radiators were overheating both spaces, resulting in energy overconsumption. The average difference between the AHC and SHD was 3.86 kBTU/h for Space #7 and 2.06 kBTU/h for Space #10 over the one-month testing period, with AHC exceeding SHD. Interestingly, the spaces that were overheated are adjacent to spaces that were underheated. Such behavior directs the facility manager to examine both spaces closely, as an undetected fault might be present in the HVAC, sensors, or systems of either or both spaces. As mentioned in the methodology section, this framework does not pinpoint the malfunctioning equipment. However, it achieves two main things. First, it provides a proactive approach to energy savings and equipment maintenance in a building, as it enables identifying a fault or a problem that is causing energy overconsumption and that may result in a costly failure in the system. Second, it narrows the search down to a single space or a small group of spaces that are connected to a limited number of equipment items. It should be noted that those problems were not picked up by regular facility management practice, which follows a corrective maintenance approach rather than a predictive one. While such problems seem clear and obvious, they are often overlooked by FM teams and are hard to detect in a timely manner. Larger buildings often have more rooms and far more complex systems, which make this process cumbersome. This approach provides facility managers with a closer look at how the building equipment and systems are performing at a given time, which enables them to determine spaces that are under- or over-consuming energy. Facility managers can then analyze that equipment and take actions accordingly.

CONCLUSIONS

A significant amount of energy is wasted due to faults in HVAC systems and lack of maintenance. Facility managers are aware of the importance of finding efficient ways to manage and reduce the energy consumption in their facilities. Current FM systems lack interoperability capabilities and are operated by different teams, resulting in poor data coordination and management. In addition, facility managers face challenges in identifying problematic spaces in their facilities, isolating types of problems, and prioritizing the impact of those problems. BIM is capable of coordinating data from different FM and energy management systems, which would provide facility managers with a comprehensive perspective of building spaces, their equipment, and the related information.
Previous studies developed methods that utilize BIM to predict energy performance during the design phase; investigated information exchange between BIM and energy simulation tools; and developed applications for using BIM for energy management. However, none of these studies focused on using BIM and energy simulation tools to identify and locate problematic spaces in a facility, which is very important for timely maintenance. This paper presented a framework that utilizes BIM to compare energy simulation results with actual HVAC patterns, drawing on historical BEMS and maintenance data. The contributions include: 1) a methodology that enables comparing actual HVAC behavior with the heating and cooling demand obtained from an energy simulation using the as-built characteristics of a building; and 2) a BIM-centric framework that helps identify spaces with undesired energy performance in a building so that timely maintenance actions can be taken. While the framework helps to identify building spaces that are not meeting the energy demand, and enables all relevant information from BEMS and CMMS to be gathered automatically, it still requires manual reasoning from facility managers. More specifically, facility managers still need to examine and identify the actual cause of the fault or the problem that is detected automatically by the framework. The framework was tested on data collected from an unoccupied educational building that includes several design studio spaces. The results showed that the framework enabled detection of problematic building spaces, and identification of potential causes by using the BEMS and CMMS data corresponding to those spaces. However, the nature of the detected faults is not reported by the automated BEMS alarming system. Thus, the cause of such problems requires further investigation by the facility manager. Comparison between the intended energy performance and the actual performance of a building's HVAC equipment helps pinpoint the faults and problems. In this study, the comparison between the actual energy performance and the intended energy performance of the studied building resulted in one of four cases: unresponsive, excessive, insufficient, or irregular. However, further application to different buildings is required to tune the output of the framework and the comparisons. Furthermore, the framework at this stage lacks validation tools to test building energy performance against thermal comfort measures of building occupants, or the needs and budgets of facility managers. Such validation can help prioritize the maintenance items and maximize the benefits of any maintenance action. Future work should focus on expanding the framework presented here by incorporating the effects of occupants. In addition, algorithms should be developed for using energy performance comparisons to detect faults and problems in buildings automatically.
2020-07-02T10:08:45.642Z
2020-06-29T00:00:00.000
{ "year": 2020, "sha1": "10e5814f74e4646e4df3f8536b21973f68ff7645", "oa_license": "CCBY", "oa_url": "http://www.itcon.org/papers/2020_20-ITcon-Shalabi.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c8ef1c4ef96c7132ded47ae373f20f874f63035f", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
238227197
pes2o/s2orc
v3-fos-license
A Frequency-Domain Approach to Nonlinear Negative Imaginary Systems Analysis

In this study, we extend the theory of negative imaginary (NI) systems to a nonlinear framework using a frequency-domain approach. The extended notion is completely characterized via a finite-frequency integration over a "kernel function" on energy-bounded input and output signal pairs. The notion is closely related to, and carefully contrasted with, a well-studied extension of negative imaginariness, namely the theory of counterclockwise dynamics. A condition for feedback stability of the proposed nonlinear NI systems is then developed based on the technique of integral quadratic constraints. Examples and simulations on feedback interconnections of typical nonlinear systems are provided to demonstrate the effectiveness. This work was supported in part by the Shanghai Municipal Science and Technology Major Project under grant 2021SHZDZX0100.

The well-studied counterclockwise (CCW) input-output dynamics, which can be regarded as a nonlinear generalization of NI systems, were first studied in (Angeli, 2006, 2007). Recently, with the blossoming of the NI systems theory (see, e.g., (Petersen, 2016; Khong, Petersen, & Rantzer, 2018; Kurawa, Bhowmick, & Lanzon, 2020; Mabrok, Alyami, & Mahmoud, 2021) and the references therein), there have been attempts at generalizing the theory from various perspectives, among which extending the theory to incorporate nonlinear systems is of great interest and importance. Using dissipativity theories, (Ghallab, Mabrok, & Petersen, 2018; Mabrok et al., 2021) extend the framework to nonlinear systems from a state-space perspective. By introducing a notion called phases of systems, (W. Chen, Wang, Khong, & Qiu, 2021) extends a special version of the theory to phase-bounded linear time-invariant (LTI) systems, and (C. Chen, Zhao, Chen, Khong, & Qiu, 2020) extends it to phase-bounded nonlinear operators. In summary, there have been mainly two types of extensional studies of the NI systems theory, one using state-space methods (Ghallab et al., 2018; Mabrok et al., 2021) and the other taking an input-output perspective (Angeli, 2006; W. Chen et al., 2021; C. Chen et al., 2020). All of these extensions have advantages and disadvantages in different applications and scenarios, leaving us to wonder what the most "natural" way is to extend the theory to nonlinear systems. This is also one of the main motivations for this study, in which we propose an extension of the NI systems theory to nonlinear systems from an input-output perspective via a frequency-domain approach. To be precise, under certain mild conditions, the proposed negative imaginariness of a nonlinear system can be verified purely by the finite-frequency properties of the input-output signal pairs of the system. One important reason to extend the framework using frequency-domain approaches lies in the fact that the NI systems theory was initially studied on transfer functions. Along the line of frequency-domain methods, the extended theory can be easily and naturally traced back to its original form. The elegant results on feedback interconnections of open-loop NI systems are the most appealing and crucial parts of the existing theory and its applications. The robust feedback stability of NI systems was investigated in (Lanzon & Petersen, 2008) as a parallel to the positive real stability results (Brogliato et al., 2007).
(Ghallab et al., 2018) obtained feedback stability conditions for "nonlinear NI" systems using state-space methods and dissipativity theories, while (C. Chen et al., 2020) explored similar conditions by proposing a "nonlinear small phase theorem" using techniques based on graph separation and multiplier approaches. In this study, by extending the stability results in (Khong et al., 2018), we obtain feedback stability of nonlinear NI systems by imposing suitable integral quadratic constraints (IQCs) (Cantoni, Jönsson, & Khong, 2013; Khong, 2021) on input-output signal pairs around the zero and infinity frequencies.

The rest of the paper is organized as follows. In Section 2, the basic notation and preliminary results on systems are introduced. In Section 3, we propose a definition of nonlinear NI systems and its strict version, and compare them with existing definitions of CCW dynamics. The main result, a robust feedback stability condition for NI systems, is obtained in Section 4. Illustrative examples and demonstrating simulations are provided in Section 5. Finally, the study is concluded in Section 6.

Basic Notation

Let F = R or C be the real or complex field, and F^n be the linear space of n-tuples of F over the field F. For x, y ∈ F^n, the inner product is denoted by ⟨x, y⟩ and the Euclidean norm by |x| := √⟨x, x⟩. The real and imaginary parts of a complex number s ∈ C are denoted by Re s and Im s, respectively, and its conjugate by s̄. The complex conjugate transpose of a matrix A ∈ C^{n×n} is denoted by A*, its transpose by A^T, and its largest singular value by σ̄(A). Denote the set of all absolutely integrable signals by L_1^n := {u : [0, ∞) → R^n : ∫_0^∞ |u(t)| dt < ∞}. Denote the set of all energy-bounded signals by L_2^n := {u : [0, ∞) → R^n : ‖u‖_2^2 := ∫_0^∞ |u(t)|^2 dt < ∞}. For u ∈ L_2^n, its Fourier transform is denoted by û. For T ≥ 0, define the truncation operator Γ_T on all signals u : [0, ∞) → R^n by (Γ_T u)(t) := u(t) for t ∈ [0, T] and (Γ_T u)(t) := 0 for t > T. Denote the extended L_2 space as L_{2e}^n := {u : [0, ∞) → R^n : Γ_T u ∈ L_2^n for all T ≥ 0}. Denote by H_∞^{n×n} the set of n×n transfer matrices Ĝ that are analytic and bounded in the open right half-plane, equipped with the norm ‖Ĝ‖_∞ := sup_{ω∈R} σ̄(Ĝ(jω)). Denote by RH_∞^{n×n} the set of all real rational members of H_∞^{n×n}. A linear time-invariant (LTI) system with transfer matrix Ĝ is said to be stable if Ĝ ∈ RH_∞^{n×n}. In what follows, the superscripts in H_∞^{n×n}, L_{2e}^n, etc. will be omitted when the context is clear, and so will the frequency-domain symbol s or jω.

The following definition on negative imaginariness for stable LTI systems is taken from (Lanzon & Petersen, 2008).

Definition 1 (Negative Imaginariness, LTI case) A stable LTI system with transfer matrix Ĝ ∈ RH_∞^{n×n} is said to be negative imaginary (NI) if j[Ĝ(jω) − Ĝ(jω)*] ≥ 0 for all ω ∈ (0, ∞).

Nonlinear Systems

We regard a nonlinear system as an operator mapping from L_{2e}^n to L_{2e}^n in this study.

Definition 2 (Causality) A system P : L_{2e}^n → L_{2e}^n is said to be causal if for all T > 0 and u_1, u_2 ∈ L_{2e}^n, Γ_T u_1 = Γ_T u_2 implies Γ_T P u_1 = Γ_T P u_2.

The L_2 domain of a causal system P is defined as D(P) := {u ∈ L_2^n : P u ∈ L_2^n}. Denote by C^n the set of all absolutely continuous functions, which are differentiable almost everywhere, in L_{2e}^n. In this study, we mainly investigate nonlinear operators in a set denoted by N^n, and a subset of these operators whose output signals are absolutely continuous, denoted by N_C^n.

Definition 3 (Stability) A system P ∈ N^n is said to be (finite-gain) stable if there exists α > 0 such that ‖Γ_T P u‖_2 ≤ α ‖Γ_T u‖_2 for all u ∈ L_{2e}^n and T ≥ 0.

The following lemma is a direct consequence of (van der Schaft, 2017, Proposition 1.2.3).

Lemma 1 A system P is finite-gain stable if and only if D(P) = L_2^n and sup_{0 ≠ u ∈ L_2^n} ‖P u‖_2 / ‖u‖_2 < ∞.

Feedback systems and their stability are the main focus of this study. Denote by P # C the positive feedback system, as shown in Fig. 1, between P ∈ N^n and C ∈ N^n. In this study, we adopt the well-posedness definition from (Willems, 1971; Vidyasagar, 1993).

Definition 4 (Well-posedness) The closed-loop system P # C in Fig. 1 is said to be well-posed if, for every pair of exogenous inputs, the feedback equations admit a unique solution in L_{2e} that depends causally on the inputs.

The following definition is tailored from (Angeli, 2006).
Definition 6 (CCW Dynamics) A system P = u → y ∈ N_C^n is said to have counterclockwise (CCW) input-output dynamics if, for every input-output pair, lim inf_{T→+∞} ∫_0^T ⟨ẏ(t), u(t)⟩ dt > −∞. It is said to have strictly CCW dynamics if, for any input-output pair, a strengthened version of this inequality holds for some positive definite function ρ and some strictly increasing function γ_∞ with γ_∞(0) = 0 and lim_{a→∞} γ_∞(a) = ∞, where x(t) denotes the state of the system.

Nonlinear Negative Imaginariness

In what follows, we present our definition for negative imaginary systems, and compare it with the well-received CCW dynamics introduced in the last subsection.

Definition 7 (Negative Imaginariness) A system P = u → y ∈ N^n is said to be negative imaginary (NI) if it is stable and there exist Ω̄* ≥ Ω* > 0 such that for all Ω̄ ∈ [Ω̄*, ∞) and Ω ∈ (0, Ω*],

Re ∫_Ω^Ω̄ ⟨û(jω), jω ŷ(jω)⟩ dω ≥ 0 for all u ∈ L_2^n, where y = P u.

It is said to be strictly negative imaginary (SNI) if it is stable and there exist Ω̄* ≥ Ω* > 0 such that for all Ω̄ ∈ [Ω̄*, ∞) and Ω ∈ (0, Ω*], there exists ε = ε(Ω, Ω̄) > 0 such that a strict counterpart of the above inequality, with margin ε, holds.

It is noteworthy that the integrals are taken only over finite frequency ranges that exclude 0 and ∞, similarly to linear NI systems. For notational simplicity, we say a statement holds for sufficiently large Ω̄ > 0 if there exists Ω̄* > 0 such that the statement holds for all Ω̄ ≥ Ω̄*; a similar convention applies to sufficiently small Ω > 0 as well. Negative imaginary systems form a convex cone, as is detailed in the following proposition.

Proposition 1 For any G, H, K ∈ N^n with G, H being NI and K being SNI, the following statements hold.

PROOF. The two statements can be shown using similar arguments, and we only prove statement (b) in what follows.

Relations to Systems with CCW Dynamics

The proposed definition of nonlinear NI systems has a close relation to the well-studied notion of CCW dynamics, as revealed in the following lemmas.

Lemma 2 Let P ∈ N_C^n be NI. Then P has CCW input-output dynamics.

PROOF. By letting Ω → 0+ and Ω̄ → ∞, we obtain from Definition 7 that for u ∈ L_2^n, Re ∫_0^∞ ⟨û(jω), jω ŷ(jω)⟩ dω ≥ 0, which by Plancherel's theorem implies ∫_0^∞ ⟨ẏ(t), u(t)⟩ dt ≥ 0. Since P is causal, substituting u with Γ_T ũ for all ũ ∈ L_{2e}^n and T > 0 in the above inequality yields ∫_0^T ⟨ẏ(t), ũ(t)⟩ dt ≥ 0 with y = P ũ, which completes the proof. □

Assumption 1 For all u ∈ L_2^n with a support of nonzero measure, it holds that ẏ ∈ L_2^n and ∫_0^∞ ⟨ẏ(t), u(t)⟩ dt > 0, where y = P u.

Lemma 3 Let P ∈ N_C^n be stable, have CCW dynamics, and satisfy Assumption 1. Then P is NI.

PROOF. Let u ∈ L_2^n with a support of nonzero measure. By Definition 6, Assumption 1, and Plancherel's theorem, we have Re ∫_0^∞ ⟨û(jω), jω ŷ(jω)⟩ dω > 0. Then for sufficiently large Ω̄ > 0 and sufficiently small Ω > 0, it holds that for every u ∈ L_2^n with a support of nonzero measure, Re ∫_Ω^Ω̄ ⟨û(jω), jω ŷ(jω)⟩ dω ≥ 0, which also holds for u ∈ L_2^n that is zero almost everywhere. □

Assumption 1 can be weakened by strengthening P to have strictly CCW dynamics, yielding the following result.

Assumption 2 For all u ∈ L_2^n with a support of nonzero measure, it holds that ẏ ∈ L_2^n and has a support of nonzero measure, where y = P u.

Lemma 4 Let P ∈ N_C^n be stable, have strictly CCW dynamics, and satisfy Assumption 2. Then P is NI.

PROOF. For any u ∈ L_2^n with a support of nonzero measure and y = P u, we have ẏ ∈ L_2^n and ẏ is nonzero on some time interval. Since P has strictly CCW input-output dynamics, we obtain from Definition 6 that ∫_0^∞ ⟨ẏ(t), u(t)⟩ dt > 0, where the last inequality follows from the fact that ρ(·) is positive definite and γ_∞(·) is nonnegative. An application of Lemma 3 shows that P is NI. □

The above lemmas establish the relations between NI systems and those with CCW dynamics. In particular, Lemma 2 shows that NI systems (with differentiable outputs) necessarily have CCW dynamics, while Lemma 3 shows that under an additional positivity condition (in Assumption 1), a stable system with CCW dynamics is NI.
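For intuition, the finite-frequency condition of Definition 7 can be checked numerically for a simple LTI example. The sketch below is an illustration only and is not part of the paper's development: it samples the SISO transfer function G(s) = 1/(s^2 + s + 1), for which Im G(jω) < 0 for ω > 0, and approximates Re ∫ ⟨û(jω), jω ŷ(jω)⟩ dω over a finite band for a test input spectrum; in the SISO case the integrand reduces to -ω |û(jω)|² Im G(jω), which is nonnegative on the band:

```python
import numpy as np

def G(s):
    """Illustrative SISO transfer function G(s) = 1/(s^2 + s + 1); Im G(jw) < 0 for w > 0."""
    return 1.0 / (s ** 2 + s + 1.0)

# Finite frequency band [Omega, Omega_bar], excluding 0 and infinity.
omega = np.linspace(0.01, 100.0, 20000)

# Illustrative input spectrum u_hat(jw) = 1/(1 + jw); output spectrum y_hat = G(jw) u_hat.
u_hat = 1.0 / (1.0 + 1j * omega)
y_hat = G(1j * omega) * u_hat

# Kernel Re<u_hat, jw y_hat>; for SISO signals this equals -w |u_hat|^2 Im G(jw).
kernel = np.real(np.conj(u_hat) * (1j * omega) * y_hat)

# Simple left-Riemann approximation of the integral over the band.
integral = float(np.sum(kernel[:-1] * np.diff(omega)))

print("min of kernel over band:", float(kernel.min()))   # expected to be >= 0
print("integral over band     :", integral)              # expected to be >= 0
```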
Although we have seen that the proposed definition of nonlinear NI systems has a close relation to the systems with CCW dynamics, they are still essentially two different concepts. One can easily verify that a system with non-differentiable outputs cannot have CCW dynamics but it can be NI. Based on Lemma 3, a system (satisfying Assumption 2) with strictly CCW is necessarily NI as shown in Lemma 4, but not vice versa as will be demonstrated in Section 5. Reduction to The LTI Case The following proposition shows that for LTI systems, Definition 7 is equivalent to its original version in Definition 1. Proposition 2 For an LTI G ∈ N n with transfer functionĜ ∈ RH n×n ∞ , it holds the following relations. For statement (a), its sufficiency can be shown by using (5). For the necessity part, follow the similar notation, and we complete the proof by noting that forω ∈ [Ω a , Ω b ] and (Ω,Ω) ⊃ (Ω a , Ω b ), it holds Remark 1 As revealed by Lemma 3, a nonlinear NI system can be identified by verifying if it possesses CCW dynamics. In addition, by Propositions 1 and 2, a special class of nonlinear SNI systems can be identified, based on a given nonlinear NI system G, as whereĤ is the transfer matrix characterizing an LTI system H. Main Result In this section, we introduce our main results for the feedback stability of (nonlinear) NI systems. We start with the following proposition for feedback stability presented in terms of finite-frequency IQCs, which are adopted to handle the singularity frequencies around zero and infinity. It can be easily verified that Π ∈ L ∞ . Let := min{ 0 , ∞ , m } > 0. By combining (8) and (9) properly, we obtain that where the second inequality follows by Lemma 1, namely, for u ∈ L n 2 , Note that P # (τ C) is well-posed for τ ∈ [0, 1]. The stability of P # C is then established in light of (Khong, 2021, Corollary IV.3), (Rantzer & Megretski, 1997, Theorem 2). 2 It is noteworthy that Proposition 3 is an extension of (Khong et al., 2018, Theorem 4) and is the first stability result involving nonlinear NI systems defined in Definition 7. It generalizes some versions of the previously studied feedback stability results on linear NI systems in (Lanzon & Petersen, 2008;Khong et al., 2018). However, verifying the inequalities in (8) for nonlinear systems is much more difficult than their counterparts in the LTI setting. As a result, the inequalities will be further interpreted in what follows so as to obtain more concise and verifiable feedback stability results for nonlinear NI systems. A class of nonlinear systems, which satisfies a certain IQC on the time-averaged input-output signals, is defined as B(Ξ, ) := P ∈ N n for all u ∈ L 1 ∩ L 2 such that for Ξ ∈ C 2n×2n with Ξ * = Ξ and ≥ 0. In addition, we define a complementary set via that P ∈ B C (Ξ, ) if P ∈ B(Ξ, ), wherẽ In the above sets, we restrict the signals to be in L 1 so as to ensure thatū andȳ are well defined. A special class of systems that belong to the above sets is given as follows. Lemma 5 An LTI system G ∈ N n with transfer matrix PROOF. Let u ∈ L 1 ∩ L 2 such that y = Gu ∈ L 1 ∩ L 2 . It then follows thatŷ =Ĝû, wherebyŷ(j0) = G(j0)û(j0). By the definition of Fourier transform, we obtainȳ =Ĝ(j0)ū, wherē Multiplyingū T andū at both sides of (10) yields that which completes the proof. 2 The uniform instantaneous gain of P ∈ N n that is locally Lipschitz continuous is defined as (Willems, 1971, We are now ready to present our main stability result in the following theorem. Its proof is deferred to Section 4.1. 
Theorem 1 Let P , C ∈ N n be both locally Lipschitz continuous with γ(P ) < α and γ(C) < α −1 for some α > 0. Then P # C is well-posed and stable, if there exist Ξ = Ξ * ∈ C 2n×2n and > 0 such that P is SNI with P ∈ B(Ξ, ) and C is NI with τ C ∈ B C (Ξ, 0), ∀ τ ∈ [0, 1]. Remark 2 It is noteworthy that if either of γ(P ) and γ(C) is zero, the other can take an arbitrary value in the above theorem. Moreover, in combination with Proposition 2, we have that both Proposition 3 and Theorem 1 reduce to (Khong et al., 2018, Theorem 4) when P and C are taken to be LTI. The following is a typical class of nonlinear systems that are NI and satisfy the IQC constraints in Theorem 1. If we take f (x, u) = −3x 1 − x 2 + u, we can verify that P is LTI, ISS and SNI. Furthermore, if we take which is nonlinear, we can verify that P is ISS and for ∀ u ∈ L 2 it holds By Definition 6, P has CCW dynamics. It then follows from Lemma 3 that P is NI. PROOF. By the definition of the uniform instantaneous gain, we obtain that where the second last equality follows by applying the initial value theorem (Ceschi & Gautier, 2017, Chapter 2) on x and P x, respectively. 2 Based on Proposition 3 and Lemma 6, the proof of Theorem 1 is given below. PROOF. Since L n 1 ∩ L n 2 is dense in L n 2 (Lieb & Loss, 2001, Chapter 2), and P and C are stable and locally Lipschitz continuous, it suffices to examine their inputoutput pairs in L n 1 ∩ L n 2 to show the feedback stability in what follows. Note that for every x ∈ L n 1 ∩ L n 2 , it follows from the definition of Fourier transform that Since P ∈ B(Ξ, ), it holds for all u ∈ L n 1 ∩ L n 2 satisfying P u ∈ L n 1 ∩ L n 2 that lim Ω→0+ 1 Ω Re Since τ C ∈ B C (Ξ, 0), τ ∈ [0, 1], similarly to the above inequality we have for all y ∈ L n 1 ∩ L n 2 such that Let 0 = /3 > 0. Using the fact that L n 1 ∩ L n 2 is dense in L n 2 and the premise that P and C are stable and locally Lipschitz continuous, we obtain that there exists Ω * > 0 such that for all Ω ∈ (0, Ω * ] and u, y ∈ L n 2 , where By hypothesis, we have γ(P ) < α and γ(C) < α −1 . Then there exist 2 , ∞ > 0 such that γ 2 (C) < α −2 − 2 and ( ∞ + α 2 )(α −2 − 2 ) < 1. Useful Corollaries The following corollary gives a sufficient condition for feedback stability between a linear SNI system and a nonlinear system with CCW dynamics. An illustrative example for this corollary is provided in Section 5.1. PROOF. By Proposition 2, we know P is SNI. On the other hand, we obtain from Lemma 3 that C is NI and so is τ C, τ ∈ [0, 1]. Consequently, it follows from Theorem 1 that P # C is well-posed and stable. 2 The following corollary gives a sufficient condition for feedback stability between linear SNI and NI systems, which is exactly (Khong et al., 2018, Theorem 4). Simulation Results In this section, we simulate the behaviours of feedback interconnections between (nonlinear) negative imaginary systems. Recall the nonlinear system P = u → y ∈ N 1 in Example 1 given by which is ISS and NI. It can be easily verified that the uniform instantaneous gain of P is zero, i.e. γ(P ) = 0. Moreover, It follows from (11) that τ P ∈ B C (Ξ i , 0) for all τ ∈ [0, 1] and i = 1, 2, where Ξ 1 = 0 1 1 0 and Ξ 2 = 1 0 0 −1 . 
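Two small pieces of notation used above can be made explicit. The time-averaged signals ū and ȳ appearing in B(Ξ, ε), Lemma 5 and the proof of Theorem 1 have a natural zero-frequency reading that matches the step "it follows from the definition of the Fourier transform"; the normalization below is an assumption made for concreteness. The multipliers Ξ1 and Ξ2 quoted at the end of the example are also written out as matrices.

```latex
% Plausible reading of the time-averaged signals (an assumed normalization):
\bar{u} := \int_{0}^{\infty} u(t)\, dt = \hat{u}(j0),
\qquad
\bar{y} := \int_{0}^{\infty} y(t)\, dt = \hat{y}(j0),
% so that, for an LTI G, \hat{y}(j0) = \hat{G}(j0)\hat{u}(j0) gives \bar{y} = \hat{G}(j0)\bar{u},
% which is the step used in the proof of Lemma 5.

% The multipliers used in the example above, written as matrices (both satisfy \Xi^{*} = \Xi):
\Xi_{1} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},
\qquad
\Xi_{2} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.
```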
Given P in (17), consider the following candidates for C i withĈ i ∈ RH ∞ that compose the feedback systems P # C i , i = 1, 2, 3: Feedback Connection of Nonlinear NI and Linear SNI Systems Clearly, all of them are linear SNI and satisfy that Actually, one can verify that there exists no Ξ with Ξ = Ξ * and > 0 such that P ∈ B C (Ξ, 0) andĈ 3 ∈ B(Ξ, ). The stability of P # C 1 and P # C 2 can be concluded from Corollary 1 using Ξ 1 and Ξ 2 , respectively. As shown in Fig. 2, P # C 1 and P # C 2 produce energy-bounded output signals given an impulse input signal, which is necessary for the feedback systems to be stable. On the other hand, note that P # C 3 does not satisfy the premises in Theorem 1, and the simulation also reveals that such a system is unstable. Consider C 4 = u → y ∈ N 1 with Feedback Connection of Two Nonlinear Systems which can be equivalently represented as C 4 = P + 4G and C 5 = P + G, where G is LTI with transfer function G = − s + 1 s + 2 . Since P is nonlinear NI and G is SNI, it follows by Proposition 1(b) that C 4 and C 5 are SNI. Furthermore, one can verify that C 4 ∈ B(Ξ 1 , ) for each ∈ (0, 0.5] while there exists no > 0 such that C 5 ∈ B(Ξ 1 , ). As shown in Fig. 3, P # C 4 produces an energy-bounded output signal given an impulse input signal, which is necessary for the feedback system to be stable. On the other hand, note that P # C 5 does not satisfy the premises in Theorem 1, and the simulation also reveals that such a system is unstable. Conclusions and Future Directions This paper proposes an extension of negative imaginary systems theory to nonlinear systems framework. In contrast with the existing results based on state-space characterizations (Angeli, 2006;Ghallab et al., 2018) or some special input-output index called "phase" (C. Chen et al., 2020), the proposed extension relies on a pure frequency-domain characterization of the NI property and develops a feedback stability condition for such systems via the theory of integral quadratic constraints. Of future interest are generalizations to more general nonlinear systems (such as, unstable ones) and their feedback interconnections. In addition, as we have obtained that parallel interconnections preserve negative imaginariness, another possible future direction is to explore more useful properties of different interconnections between such systems.
Current Understanding of Acute Bovine Liver Disease in Australia Acute bovine liver disease (ABLD) is a hepatotoxicity principally of cattle which occurs in southern regions of Australia. Severely affected animals undergo rapid clinical progression with mortalities often occurring prior to the recognition of clinical signs. Less severely affected animals develop photosensitization and a proportion can develop liver failure. The characteristic histopathological lesion in acute fatal cases is severe, with acute necrosis of periportal hepatocytes with hemorrhage into the necrotic areas. Currently there are a small number of toxins that are known to cause periportal necrosis in cattle, although none of these have so far been linked to ABLD. Furthermore, ABLD has frequently been associated with the presence of rough dog’s tail grass (Cynosurus echinatus) and Drechslera spp. fungi in the pasture system, but it is currently unknown if these are etiological factors. Much of the knowledge about ABLD is contained within case reports, with very little experimental research investigating the specific cause(s). This review provides an overview of the current and most recently published knowledge of ABLD. It also draws on wider research and unpublished reports to suggest possible fungi and mycotoxins that may give rise to ABLD. Introduction Acute bovine liver disease (ABLD), formerly known as phytotoxic hepatitis, is a hepatotoxic disease principally affecting grazing beef and dairy cattle regardless of age, sex, or breed (Table 1). There have been two documented cases of mycotoxicoses in sheep with a similar epidemiology to ABLD, which implies ABLD may not be specific to cattle [1,2]. However, no further reports are documented, suggesting that if sheep are affected it is uncommon, presumably because of their differing grazing habits. This review will therefore focus on ABLD affecting cattle but much of the evidence presented is likely to be applicable to affected sheep. ABLD affecting cattle is observed in the southeastern states of Australia (Victoria, Tasmania and parts of South Australia) [3,4], with at least one report of a possible occurrence in Western Australia in 2002 [5]. It has been a recognized condition since the 1950s though documented evidence from the early years is non-existent [4]. Lancaster et al. [6] collated findings from 15 reports not found in general circulation. Further documented ABLD events are contained in case reports, conference reports or newsletters shared between practicing veterinarians in relevant Australian states (Table 1) [3,[6][7][8][9]. Accordingly, documented ABLD cases predominantly report clinical observations, findings of clinical and anatomical pathology, and suggestions that ABLD is caused by an unknown toxin. In addition to clinical findings, some case reports include limited epidemiological observations such as environmental and weather conditions, seasonality, and the presence or absence of plants of interest [7]. Interestingly, almost all reports have recorded the presence of senescent rough dog's tail grass (Cynosurus echinatus) in the suspected paddock [6]. Rough dog's tail grass was also present in the two reports of mycotoxicosis in sheep [1,2]. Consequently, risk factors for ABLD identified by state departments include long-standing dead or dry grass from the previous seasons, and the presence of rough dog's tail grass [10,11]. 
Cases of ABLD are commonly reported in autumn/winter (April-July) almost every year, beginning around the time of the first rainfall and cooler temperatures after summer [10,11]. Climatic risk factors for ABLD include a minimum temperature of >12 • C, >4 mm rainfall with high humidity, and calm conditions for greater than days prior [4,7,11]. Furthermore, it is likely that additional cases of unrecognized or unreported ABLD also occur. This has resulted in speculation on the etiology and pathogenesis of the disease. The current advice to graziers includes avoiding putting cattle on high-risk paddocks, grazing out high-risk paddocks with sheep to reduce the amount of dry standing grass, cultivating high-risk paddocks, and grazing a small number of cattle on high-risk paddocks to test for toxicity. Investigation and Diagnosis of ABLD Clarke and Weaver [7] reported that the onset of clinical signs can occur as early as 12 h after apparent exposure. The clinical manifestation of ABLD consistently includes an initial drop in milk production, secondary photosensitization, and altered behavior such as seeking shade even on overcast days. Depression, pyrexia, loss of appetite, and agitation may be observed in some cattle. Acute cases may result in death before clinical signs are observed [7]. Upon necropsy, liver damage is evident and periportal necrosis is often observed histologically. Hemorrhaging into the necrotic periportal area and progressive hepatocyte damage may also be observed [3,7]. Serum biochemistry often reveals a marked elevation of glutamate dehydrogenase and aspartate transaminase activities and moderate increases in gamma-glutamyl transpeptidase activity, indicating liver damage [7]. Although a biochemistry pattern such as this is consistently observed when ABLD is diagnosed, it is not specific for the diagnosis of ABLD. Furthermore, although periportal necrosis is a characteristic feature of acute and fatal ABLD, the pathology that may be present in more chronic, sub-lethal cases has not been documented. It may be more difficult to differentiate from other hepatotoxicoses, and so incorrect diagnoses may be made. Additional documentation of histological features at different stages of toxicosis is required to better define ABLD. While toxic hepatopathies are known to be caused by a range of toxic compounds, periportal necrosis is uncommon, which limits the differential diagnoses. The final diagnosis is currently based on the presence of characteristic acute periportal hepatocellular necrosis or consistent biochemistry changes and the exclusion of other differential diagnoses. The most common causes for hepatic necrosis in grazing cattle in southeastern Australia include blue/green algae poisoning, pithomycotoxicosis (facial eczema), and boobialla (Myoporum tetrandrum) poisoning [14,16,17]. While only boobialla causes periportal necrosis similar to ABLD, it has not been present when an outbreak of ABLD has been suspected. Therefore, a diagnosis of ABLD is achieved by testing available water sources for toxic forms of blue/green algae, a detailed examination of the paddock confirming the absence of other toxic plants, and verification of periportal necrosis in the liver. The presence of rough dog's tail grass is commonly used for the initial diagnosis (before pathological investigation) of ABLD. However, the absence of rough dog's tail grass does not exclude ABLD as a final diagnosis following sufficient pathological investigation. 
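As a purely illustrative aid, the climatic risk factors listed earlier in this section (minimum temperature above 12 °C, more than 4 mm of rainfall with high humidity, and calm conditions over the preceding days) can be expressed as a simple screening rule over daily weather records. The sketch below is not a validated predictive tool: the humidity and wind thresholds and the number of preceding calm days are not specified in the reports cited here, so those parameters are placeholders.

```python
# Illustrative screen of daily weather records against the reported ABLD climatic risk
# factors. Thresholds marked "placeholder" are assumptions for demonstration only.
from dataclasses import dataclass
from typing import List

@dataclass
class DayRecord:
    min_temp_c: float        # daily minimum temperature (deg C)
    rainfall_mm: float       # daily rainfall (mm)
    rel_humidity_pct: float  # relative humidity (%)
    max_wind_kmh: float      # maximum wind speed (km/h)

def flag_risk_days(days: List[DayRecord],
                   calm_days: int = 3,                # placeholder: exact duration not stated
                   humidity_threshold: float = 80.0,  # placeholder for "high humidity"
                   calm_wind_kmh: float = 15.0        # placeholder for "calm conditions"
                   ) -> List[int]:
    """Return indices of days matching the reported climatic risk pattern."""
    flagged = []
    for i, day in enumerate(days):
        if i < calm_days:
            continue
        preceding_calm = all(days[j].max_wind_kmh <= calm_wind_kmh
                             for j in range(i - calm_days, i))
        if (day.min_temp_c > 12.0 and day.rainfall_mm > 4.0
                and day.rel_humidity_pct >= humidity_threshold and preceding_calm):
            flagged.append(i)
    return flagged
```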
There are currently no methods for confirming ABLD on-farm without post-mortem examination, though ABLD can be strongly suspected based on epidemiology, clinical signs, biochemistry, and environmental examination. The occurrence of ABLD is notoriously sporadic and unpredictable. Not all cattle on a property will be visibly affected and usually only a small number die, though large losses have also been observed [3,4,8,9]. Moreover, pastures that cause ABLD appear to be only transiently toxic, and farms may not experience another occurrence in the following weeks, months, or years. Despite this, multiple occurrences in the same year on the same property have been observed on farms when the first occurrence was early in the season (Mark Hawes, personal communication [18]). Although ABLD has been recognized as a specific condition for 50 years, identification and initial diagnosis remain problematic. Initial investigation of clinical manifestations and pasture conditions is not enough to differentiate ABLD from other common illnesses, particularly in dairy cattle (unpublished data). Furthermore, testing blood and other body fluids does not specifically indicate ABLD; therefore, no specialized surveillance methods are available [7]. A thorough clinical examination, epidemiological investigation, and comprehensive necropsy will often result in a diagnosis. However, this takes time and is dependent on the findings of the initial investigation and the farmer's support for further investigation. A farmer may not support further investigation into the death of a small number of cows if they do not consider the deaths to be a significant loss. These factors make it particularly difficult to investigate possible ABLD occurrences. Suspected Causes of ABLD At locations and times that ABLD has been investigated, the pasture commonly contains senescent rough dog's tail grass [3,5,8]. However, rough dog's tail grass is found worldwide and is not reported to cause illness [19]. Lancaster et al. [6] found that rough dog's tail grass harvested from an affected property and fed to cattle on two occasions resulted in no ill effects. The first feeding trial was conducted in Tasmania with calves and remains unpublished (cited in [6]), while the second was conducted in Victoria in 2003. In the latter, rough dog's tail grass was harvested from six properties that had experienced ABLD in the previous two years. The grass was collected in August (winter), approximately six weeks after the perceived 'danger period' for ABLD. In the trial, oats and rough dog's tail were inoculated with spores of Drechslera biseptata and incubated for seven days. The inoculated grass/oats were fed to young bulls and a fungal broth of D. biseptata was additionally administered by stomach tube. No ill effects were observed, suggesting the grass without the specific conditions needed for toxicity is unlikely to cause ABLD. The significance of D. biseptata will be discussed later in this review. Fungal infections of the grass are now the suspected cause, as the epidemiology of ABLD has similar characteristics to other illnesses caused by mycotoxins [3,17]. Aside from the usual acute and chronic toxicoses, mycotoxins can alter feed intake, production (milk production or growth rate), nutrient utilization, reproduction, and product quality (including residues in milk and meat) [20][21][22]. Some Known Mycotoxins Affecting Cattle Some fungi may be present as endophytes within vegetative plant material, for example Epichloë festucae var. 
lolii found in perennial ryegrass (Lolium perenne). Reed et al. [23] reported that E. festucae var. lolii provides resources for the grass, enabling its survival under conditions it would not normally survive. However, E. festucae var. lolii also produces alkaloids that cause toxicosis of ruminants and other grazing livestock. Toxicosis commonly results in neurological signs such as tremors and staggering, as well as loss in production and occasional deaths [24]. To date there has been no published research into the presence of specific endophytic fungi associated with rough dog's tail grass, nor are endophytic fungi currently associated with liver damage in cattle. Aspergillus spp. and Pithomyces chartarum are the most noteworthy fungi when investigating hepatotoxic mycotoxin production [22,24,25]. Aspergillus flavus and Aspergillus parasiticus are the most common species in agriculture which produce aflatoxins. These fungi are commonly found in ground nut or peanut meal but can also be present in silage and high-moisture feeds [22,24]. Aflatoxin exposure causes liver damage, resulting in jaundice, photosensitization, diarrhea, anorexia, depression and eventual death in livestock [26,27]. However, Aspergillus spp. and aflatoxins have not been found to cause liver pathology similar to ABLD. Pithomyces chartarum is a saprophytic fungus found on dead or dry pasture [28]. Spores of P. chartarum contain sporidesmin, and ingestion of sufficient spores results in facial eczema in sheep and cattle. Comparable to aflatoxin poisoning and ABLD, early facial eczema in cattle results in diarrhea, anorexia, depression, and eventually jaundice, photosensitization and death [29]. Furthermore, acutely affected sheep and cattle can die suddenly without suffering photosensitization, and periportal hepatocytes may be affected [29]. The epidemiology of facial eczema is also strikingly similar to ABLD. Christensen and Tuite [28] and Riet-Correa et al. [29] both noted that P. chartarum predominates on dead grasses during periods of rainy or overcast days, with high relative humidity, and temperatures close to 20 • C. As such, facial eczema occurs during late summer and autumn. However, P. chartarum is not considered to cause ABLD since the type of periportal necrosis associated with ABLD is not consistent with hepatic pathological changes seen in facial eczema. Furthermore P. chartarum spores have not been detected, either at all or in numbers consistent with causing disease, in pastures associated with outbreaks of ABLD. Thus, ABLD is most likely caused by a currently unidentified fungal pathogen/toxin. Possible Fungi Associated with ABLD With regards to possible fungal contaminants of rough dog's tail grass and their role in ABLD, some published and unpublished data is available. In 2006, samples of rough dog's tail grass were collected within a few days of outbreaks of ABLD and investigated for the presence of possible fungal contamination (Ian Pascoe, unpublished work [15]). The most common fungal species identified were Colletotrichum graminicola and Drechslera sp. aff. siccans. Drechslera biseptata and Colletotrichum sp. aff. coccodes were also identified, but were less common. Colletotrichum graminicola was identified in a majority of the samples collected, but has not been reported to produce mycotoxins, and therefore is not thought to cause ABLD. Since the 2006 investigation, both D. sp. aff. siccans and D. 
biseptata have been identified in samples collected during outbreaks of ABLD in 2013, 2014 and 2015 [30]. D. sp. aff. siccans has been consistently more abundant than D. biseptata and is therefore the most likely candidate for toxin production.

Drechslera sp. as a Source of Mycotoxin Causing ABLD

Long-term culturing of D. sp. aff. siccans has been unsuccessful, while cultures of D. biseptata are more stable, making D. biseptata more suitable for toxicity testing. Consequently, the only relevant published data on the toxicity of Drechslera spp. to cattle are those of Lancaster et al. [6] and Aslani et al. [14]. As stated earlier, Lancaster et al. [6] fed rough dog's tail grass inoculated with D. biseptata spores to bulls, with and without additional administration of a broth containing D. biseptata via stomach tube. No ill effects were observed during five days of monitoring or in the subsequent necropsy. Consequently, D. biseptata is considered to be an unlikely source of the toxin under these conditions. Later, Aslani et al. [14] extracted the spores and mycelium of D. biseptata using various solvents and tested these in vitro with clone 9 rat hepatocytes. A methanolic extract of pasture samples, collected in 2003, was also tested. Cells were treated with: water or hexane extracts of mycelium or spores (least degeneration); methanol extract of mycelium; methanol fractions of hexane, water and ethyl acetate extracts of mycelium; methanol extract of whole fungal culture (various concentrations); and the methanol extract of rough dog's tail pasture samples. Hepatocyte degeneration was observed in all tests by examining morphological changes of the cells. Furthermore, the methanol extract of the whole fungal culture was found to have a dose-dependent effect. This suggests there are potential toxins present in these extracts. However, a potential toxin was only putatively identified, and an effect in vitro using rat hepatocytes does not indicate that a similar effect will occur in bovine hepatocytes in vivo. Further research into the involvement of Drechslera spp. and their toxins is required.

Given that Drechslera spp. are predominantly plant pathogens, many of the toxins produced have been characterized for their effect on plants, not on other organisms. For example, Drechslera siccans, a pathogen of ryegrass, has been shown by Evidente et al. [40] to produce drazepinone, which has herbicidal activity. Earlier, Sugawara et al. [33] found that Drechslera maydis and Drechslera sorghicola both produce phytotoxic sesterterpenoids belonging to the ophiobolin family of compounds. Strobel et al. [32] summarized that there are a number of different Drechslera spp. that produce ophiobolins. Recently, researchers have shown that ophiobolins are cytotoxic to various mammalian cells. Bencsik et al. [41] found that ophiobolin A (Figure 1) inhibited the mobility of porcine spermatozoa, and damaged the mitochondria in these cells by changing the mitochondrial membrane potential, even at sub-lethal doses. Similarly, Bury et al. [42] found that ophiobolin A caused cytoskeletal changes, interfered with Ca2+ and K+ channel activity, and induced paraptosis-like cell death in human glioblastoma cells. Unfortunately, there is limited published evidence of ophiobolin toxicity in vivo; therefore, the potential toxicity of ophiobolins to cattle is unknown.

The potentially causative toxin that Aslani et al. [14] isolated from toxic extracts of D. biseptata was putatively identified as being cytochalasin-like due to its mass spectral profile. Capio et al. [43] identified cytochalasin B (Figure 2) as a possible toxic compound produced by D. wirreganensis and D. campanulata. Correspondingly, Schneider et al. [39] implicated D. campanulata in the poisoning of goats, although the toxic principle was not identified. Furthermore, Collett et al. [44] observed mycotoxicosis in rats fed cultures of D. campanulata, but again the specific toxin was not identified. Earlier research by Smith et al. [45] and Ridler and Smith [46] showed that cytochalasin B induced morphological changes in in vitro cultures of human lymphocytes. Additionally, Tanenbaum [47], Kim et al. [48] and Zhang et al. [49] demonstrated that cytochalasins are produced by a variety of fungi and are biologically active in many ways, including: phytotoxicity, anti-microbial activity, cytotoxicity, capping of actin filaments, and inhibition of HIV-1 protease. This suggests cytochalasins produced by Drechslera spp. could be potential toxin candidates for ABLD.

Likelihood of Drechslera spp. to Cause ABLD

Rough dog's tail grass, present as dry, senescent grass during autumn, is not preferentially grazed by cattle when fresh, green pasture is also available. However, D. biseptata is currently only associated with rough dog's tail and no other pasture species. Commonly, early growth of fresh pasture is found around the base of dry senescent rough dog's tail grass (Mark Hawes, personal communication [18]). As such, cattle may be inadvertently ingesting some rough dog's tail grass with young green grass, thus ingesting associated toxic fungi. Since rough dog's tail is not preferentially grazed, the amount of toxin ingested is likely to be limited in this scenario. This suggests the toxin is either particularly potent, or there is a significant amount of toxic Drechslera spp. present on ingested rough dog's tail grass. Alternatively, Drechslera spp. may be producing toxic spores that can be transferred between rough dog's tail grass and new pasture growth and thus ingested by cattle. Conditions favoring Drechslera spp. sporulation include changes in relative humidity combined with decreasing temperatures [50,51]. Troutt and Levetin [52] reported Drechslera spores were common when there were warmer afternoon temperatures. Drechslera species have been found to preferentially sporulate at ~21 °C with light intensities near UV under laboratory conditions [50,51,53]. When consideration is given to the timing of toxic ABLD events (increasing rainfall with a change from warm to cold weather), autumn would be the ideal time for fungal sporulation [7]. An autumn occurrence is consistent with the findings of Burch and Levetin [54], who found spore densities in the atmosphere were highest around midday during autumn. Furthermore, other spore-related mycotoxicoses such as facial eczema often occur during autumn [55]. Currently, the experimental evidence suggests Drechslera spp. infecting rough dog's tail is the principal source of ABLD toxin. Aslani et al. [14] found spores only had limited cytotoxicity; thus, spores may also contain ABLD toxin, but at a much lower concentration. Consequently, ingestion of either or both mycelium or spores may cause ABLD, and the grazing habits of individual cattle would likely affect the concentration of ABLD toxin ingested. As with most poisonings, the concentration of toxin ingested is likely to cause the diversity in clinical signs commonly observed for ABLD.

Conclusions

Autumn climatic conditions may stimulate the production of fungal toxins or toxic spores to which the cattle are exposed when grazing infected grass. Contaminants of senescing rough dog's tail grass could be transferred to nearby palatable pasture via natural processes or by mechanical means. However, the specific toxin(s) and their source remain conjectural and their stability in the environment is unknown. Therefore, even though the concentration of the causative toxins will be greater in feed source materials, this may be a situation where the examination of tissues from animals that have died may provide some insight.
It is hypothesized that any suspicious compounds detected may provide an indication of the nature of the toxin(s). Furthermore, the presence of rough dog's tail grass and Drechslera spp. during an outbreak of ABLD remains suspicious. Consequently, it is hypothesized that Drechslera spp. associated with rough dog's tail grass may be the source of the toxin(s) of interest.
A Massively Parallel Sequence Similarity Search for Metagenomic Sequencing Data. Sequence similarity searches have been widely used in the analyses of metagenomic sequencing data. Finding homologous sequences in a reference database enables the estimation of taxonomic and functional characteristics of each query sequence. Because current metagenomic sequencing data consist of a large number of nucleotide sequences, the time required for sequence similarity searches account for a large proportion of the total time. This time-consuming step makes it difficult to perform large-scale analyses. To analyze large-scale metagenomic data, such as those found in the human oral microbiome, we developed GHOST-MP (Genome-wide HOmology Search Tool on Massively Parallel system), a parallel sequence similarity search tool for massively parallel computing systems. This tool uses a fast search algorithm based on suffix arrays of query and database sequences and a hierarchical parallel search to accelerate the large-scale sequence similarity search of metagenomic sequencing data. The parallel computing efficiency and the search speed of this tool were evaluated. GHOST-MP was shown to be scalable over 10,000 CPU (Central Processing Unit) cores, and achieved over 80-fold acceleration compared with mpiBLAST using the same computational resources. We applied this tool to human oral metagenomic data, and the results indicate that the oral cavity, the oral vestibule, and plaque have different characteristics based on the functional gene category. Introduction Most microbes are difficult to isolate and cultivate [1]. The metagenomic approach with direct sequencing of microbial genomes from environmental samples is a culture-independent way to identify uncultured microbes. Metagenomic studies have been conducted in the human body [2,3], soil [4], seawater [5], and air [6], and the identification of novel genes and species have provided us with new information about microbes in various environments. Moreover, metagenomic studies have reported relationships between genes in microbial communities and environmental conditions. For example, Tringe et al. sequenced indoor air microbes and compared overrepresented genes with those from environmental sources such as seawater, soil, and whale fall [6]. In such studies, environmental samples are characterized by the abundance of ortholog groups [2,6,7]. Studying these abundances enables us to uncover the relationships between gene functions and environmental conditions. We can reconstruct possible metabolic pathways within the microbial community in an environment, and compare each metagenomic sample based on its gene functions or functional categories [7]. The reconstruction of metabolic pathways provides information about potential metabolites, possible paths to a specific metabolite, and the structure of a metabolic network in an environment. To estimate the abundance of ortholog groups in environmental samples, sequence similarity searches have been used to identify ortholog groups of each sequence in the metagenomic data. In metagenomic studies, the query sequences often have no close homologs in database sequences. This necessitates sensitive search methods, such as BLASTX [8], which searches an amino acid sequence database for similarities within the translated nucleotide query sequence. However, searching with BLASTX requires a long calculation time, making it difficult to perform large-scale analyses (i.e., studies including hundreds of environmental samples). 
For example, each BLASTX search takes about one minute with a single query of a nucleotide sequence of approximately 100 bases and a reference sequence database such as KEGG GENES [9] or NCBI BLAST nr [10]. Metagenomic data with whole genome sequencing using a massively parallel DNA sequencing technique often consists of tens of millions of short reads. Thus, current metagenomic functional annotations using BLASTX require over 1000 days to process metagenomic sequencing data with a single CPU core. The Human Microbiome Project (HMP) has already achieved large-scale functional analyses, albeit with a markedly reduced reference database [3,7]. Some 681 sets of metagenomic shotgun sequencing data from 18 human body sites were analyzed with the HMP Unified Metabolic Analysis Network (HUMAnN) [7]. However, in the HUMAnN workflow, the subset of the KEGG GENES database used for reference consisted of amino acid sequences from only 28 species. The size of this subset of data is approximately 1% of the whole database. Although they are faster, similarity searches with reduced databases can affect the accuracy and availability of functional annotations because it is more likely that no similar sequence is found and the function of each query sequence may not be estimated. For example, the human oral microbiome constitutes more than 600 bacterial species [11,12]. For a detailed analysis of taxonomic composition and functional genes in the bacterial community, there are strong demands for using whole databases. However, an analysis using the whole database needs to perform calculation-cost. Therefore, there is a need for high-speed sequence similarity search algorithms and massively parallel computations. Two approaches have been developed to accelerate sequence similarity searches. The first uses a search algorithm with a sophisticated database index, such as a hash table [13] or a suffix array [14,15]. This method avoids linear searching for alignment candidates used in BLASTX [8], and instead uses the index of the database. This shortens one of the most time-consuming parts of the similarity search, and makes the whole search tens of times faster than the BLASTX algorithm. The second approach employs a parallel search on massively parallel computing systems. This technique is particularly useful for metagenomic data produced by massively parallel DNA sequencing because massively parallel sequencing data consist of many nucleotide sequence fragments, and this approach can search for each fragment in parallel. Ideally, the parallel search approach should reduce the execution time in inverse proportion to the number of computational units. Darling et al. developed mpiBLAST [16], which is a parallel implementation of NCBI BLAST using the Message Passing Interface (MPI). The mpiBLAST software searches in parallel using multiple processes on a distributed memory system with thousands of CPU cores to reduce the search time. Although both approaches accelerate the similarity search process, the acceleration of only one approach is insufficient for large-scale analyses. We typically require 10,000-fold acceleration compared with a single BLASTX run with one CPU core for the functional annotation of shotgun sequencing data within several hours. Search algorithms with database indexes are not fast enough. An ideal parallel search could achieve 10,000-fold acceleration using 10,000 times the computational resources, but those means are not easily available. 
Thus, a method that combines the advantages of both approaches is needed. In this study, we developed a new massively parallel sequence similarity search tool for large-scale metagenomic sequencing data, such as the human oral microbiome. The system consists of a parallel sequence similarity search on a massively parallel distributed memory system, named GHOST-MP. This enables the analysis of large-scale metagenomic data consisting of hundreds of sets of environmental sequencing data. GHOST-MP employs both a fast search algorithm and parallel computation to accelerate similarity searches for metagenomic sequencing. To demonstrate the applicability of GHOST-MP to large-scale metagenomic functional analyses, we first present the search speed and scalability of GHOST-MP on two massively parallel computing systems. We then show the results of large-scale sequence similarity searches of actual metagenomic data. GHOST-MP achieved faster sequence similarity searches than mpiBLAST, enabling large-scale functional analyses to be performed within a short period of time. Then, we performed metagenomic analysis of human oral microbiome based on a fullset of functional gene reference using GHOST-MP. The results indicated that oral cavity, oral vestibule, and plaque have different characteristics. The GHOST-MP program is implemented in C++, and is available under the BSD (Berkeley Software Distribution) License from http://www.bi.cs.titech.ac.jp/ghostmp/. Evaluation of Scalability and Search Speed Before performing the analysis of the human oral microbiome, we evaluated the search speed of GHOST-MP, which was measured on two systems: TSUBAME 2.5 at Tokyo Institute of Technology, and the K computer at RIKEN Advanced Institute for Computational Science, using human oral metagenomic shotgun sequencing data queries and the KEGG GENES amino acid sequence database. The scalabilities were evaluated in weak and strong scaling experiments. In the weak scaling setting, the number of query sequences per CPU core was fixed as the number of cores increased. This scenario evaluates how a large problem can be efficiently dealt with. In the strong scaling setting, the total number of query sequences was fixed to evaluate how fast the method could process the same amount of data. On TSUBAME 2.5, the search speed of mpiBLAST (version 1.6.0) was also measured and compared with that of GHOST-MP using human tongue dorsum metagenomic data (SRS078182). Parts of the query sequences (1,280,000 and 80,000 query sequences for GHOST-MP and mpiBLAST, respectively) were used for evaluation on TSUBAME 2.5 due to limitations in computational resources. mpiBLAST was not evaluated on the K computer because it encountered a bus error on this system. This error could have been caused by unaligned memory access, as the processor in the K computer does not allow such access. Figure 1 plots the search speeds and scalability of GHOST-MP and mpiBLAST on TSUBAME 2.5. Both GHOST-MP and mpiBLAST achieved almost linear scalability, and the search speed of GHOST-MP was 87-115 times faster than that of mpiBLAST. Scalability means that the serial sections of GHOST-MP and mpiBLAST, such as I/O (Input/Output) and scheduling, account for only a small fraction of the computation time compared with the parallelizable sequence similarity search sections at various scales, in which computational resources were efficiently used. The acceleration of GHOST-MP compared with mpiBLAST should arise from the difference between the GHOSTX and BLASTX algorithms. 
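The weak- and strong-scaling comparisons described above reduce to a small amount of arithmetic on measured run times. The sketch below shows that calculation; the timing numbers in the usage example are invented for illustration and are not measurements from TSUBAME 2.5 or the K computer.

```python
# Speedup and parallel efficiency calculations behind weak-/strong-scaling plots.
def strong_scaling(times_by_cores: dict) -> dict:
    """Fixed total workload: speedup and efficiency relative to the smallest run."""
    base_cores = min(times_by_cores)
    base_time = times_by_cores[base_cores]
    results = {}
    for cores, t in sorted(times_by_cores.items()):
        speedup = base_time / t
        efficiency = speedup / (cores / base_cores)
        results[cores] = (speedup, efficiency)
    return results

def weak_scaling(times_by_cores: dict) -> dict:
    """Workload per core fixed: efficiency is 1.0 when the run time stays constant."""
    base_time = times_by_cores[min(times_by_cores)]
    return {cores: base_time / t for cores, t in sorted(times_by_cores.items())}

# Hypothetical example: near-linear strong scaling that tails off at high core counts.
print(strong_scaling({192: 1000.0, 1536: 130.0, 12288: 18.0, 49152: 6.5}))
print(weak_scaling({192: 100.0, 1536: 102.0, 12288: 110.0, 24576: 125.0}))
```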
Furthermore, similar accelerations were observed in experiments with a compute node [15]. We further evaluated the scalability of GHOST-MP on the K computer to investigate its performance on a massively parallel computing system. To evaluate the aforementioned scalability, we used 107 samples of buccal mucosa metagenomic data because the K computer has a larger number of CPUs that can carry out a computational process for a larger dataset necessary for evaluation. Generally, it is more difficult to achieve good scalability on larger systems because the master process must communicate with more workers. However, GHOST-MP scaled well up to over 10,000 CPU cores in reference to both criteria (Figure 2). GHOST-MP took 1.73 h to process the whole dataset with 24,576 cores. However, the search speed decreased compared with the ideal speed with 24,576 cores (weak scaling) and 49,152 cores (strong scaling). This decrease in search speed under weak scaling indicates that additional data cannot be efficiently processed, whereas the search speed under strong scaling suggests that no further acceleration can be achieved by increasing computational resources. The performance drop may have been caused by the contention of point-to-point communications between the master and workers. To make the parallel search more scalable for large-scale analyses, it is necessary to reduce the contention. Introducing multiple masters or submasters at the MPI level or implementing collective communication instead of point-to-point communication may address this problem. The combination of a sophisticated search algorithm with database indexing and a massively parallel search allowed us to achieve this large-scale similarity search within a short period of time (Table S1).

Large-Scale Sequence Similarity Search for Metagenomic Sequencing Data

To demonstrate the applicability of GHOST-MP to large-scale functional analysis of metagenomic data, we applied the functional analysis workflow to healthy human oral metagenomic data consisting of 381 samples taken from eight oral sites, with approximately 18 billion sequence reads (Table S2). Through the functional gene analysis pipeline, 109,127,620 reads (0.6% of the total) and 75,363,198 reads (0.4% of the total) were filtered out by a similarity search against the NCBI nr and KEGG GENES databases, respectively. A total of 10,357,599,878 reads (56.0% of the total) were aligned to similar sequences in the KEGG GENES database. The search results are summarized in Table S2.

We used the relative abundance of orthologous gene groups in each sample and the results of the workflow to compare metagenomic samples between oral sites. Relationships among metagenomic samples were summarized using principal component analysis (PCA). Orthologous gene groups with relative abundances of less than 0.0001 in all samples were excluded in advance. In other words, the number of orthologous groups decreased from 6881 to 3181. However, the remaining orthologous groups account for most of the total relative abundances, and the sums of the relative abundances are >0.98 in all samples. These relative abundances were transformed into principal components by PCA. Figures 3 and 4 show the first three principal components of the oral samples. The three principal components describe the relationships among samples well and account for 58% of the total variance. In particular, samples from the same oral sites tend to form clusters with regard to the first and third principal components, and we can group the eight oral sites into three groups (Figure 4). These groups were composed of (a) the oral cavity; (b) the oral vestibule; and (c) plaque. This result is consistent with a phylogenetic analysis of human oral microbiomes [17]. The average relative abundances of the orthologous groups in these oral site groups were also investigated. There was a large number of orthologous groups that are more or less abundant in specific oral site groups (Figure 5). Some orthologous groups related to specific biological pathways were found to be abundant in specific oral site groups. For example, orthologous groups related to lipopolysaccharide biosynthesis (PATH: ko00540) are abundant in the oral cavity. This suggests an abundance of Gram-negative bacteria, which have lipopolysaccharide in their outer membrane, in the oral cavity. Almost all orthologous groups related to bacterial chemotaxis (PATH: ko02030) and flagellar assembly (PATH: ko02040) according to the KEGG PATHWAY are abundant in the oral cavity and plaque. Genes related to these pathways are involved in microbial motility, and it has been reported that genes related to microbial motility are over-represented in plaque microbiomes of periodontal disease compared with those of healthy periodontal tissue [18][19][20]. Through this large-scale functional analysis, we have confirmed the applicability of GHOST-MP to current metagenomic shotgun sequencing data.
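The downstream summary described above (relative abundances of KEGG Orthology groups, removal of groups below 0.0001 in every sample, and projection onto principal components) can be sketched as follows. The function and variable names are illustrative and this is not the GHOST-MP/HMP pipeline code; in particular, the upstream per-gene normalization by gene length and universal single-copy genes is assumed to have already produced the input count matrix.

```python
# Sketch of the ortholog-abundance summary and PCA described in the text.
import numpy as np
from sklearn.decomposition import PCA

def relative_abundance(ko_counts: np.ndarray) -> np.ndarray:
    """Rows = samples, columns = KO groups; normalize each row to sum to 1."""
    totals = ko_counts.sum(axis=1, keepdims=True)
    return ko_counts / np.where(totals == 0, 1, totals)

def filter_and_project(abundance: np.ndarray, min_abund: float = 1e-4, n_components: int = 3):
    """Drop KO groups below min_abund in all samples, then run PCA on the samples."""
    keep = (abundance >= min_abund).any(axis=0)   # keep groups abundant in at least one sample
    filtered = abundance[:, keep]
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(filtered)          # sample coordinates on PC1..PC3
    return scores, pca.explained_variance_ratio_, keep
```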
Sequence Data

Human oral metagenomic sequencing data were downloaded from the HMP Data Analysis and Coordination Center (HMP DACC; http://www.hmpdacc.org). Reads from the human genome, duplicate reads, and low-quality bases were removed from these data by HMP DACC in advance. Identifiers and details of the human oral sample data are listed in Table S2. The KEGG GENES database [8] (released July 2012) was used for reference sequences. This database contains 8,578,853 amino acid sequences.

Functional Gene Analysis Pipeline

The functional analysis pipeline mainly consists of four steps. The first step trims away the low-quality tails from the reads. The entire read is filtered out if the remaining sequence is shorter than 60 bases. The second step filters out the reads derived from Eukaryotes. A read is considered to be derived from Eukaryotes if the top hit of the sequence similarity search against the NCBI nr database (accessed July 2012) is a Eukaryote sequence.
The third step maps the read sequences to the annotated sequences in the KEGG GENES amino acid sequence database (released in July 2012). The second step and the third component perform sequence similarity searches using GHOST-MP with the PAM 30 substitution matrix, with the gap opening penalty of −9 and the gap extension penalty of −1. These steps account for most of the computation time in this workflow. The search results are considered as hits if the alignment score and sequence identity are above 40% and 70%, respectively. The fourth one calculates the relative abundances of orthologous groups (KEGG Orthology entries) in the data. The number of hits of each gene in an orthologous group is summed up with normalization using its gene length and the number of hits of universal single-copy genes [21]. Computing Environments The TSUBAME 2.5 supercomputer consists of 1408 thin compute nodes. Each compute node has two Intel Xeon X5670 processors (2.93 GHz, six cores) and 54 GB of main memory. The nodes are interconnected with a full bisection-bandwidth fat-tree network. Each compute node has three NVIDIA Tesla K20X GPU accelerators, but the accelerators were not used in this study. The K computer consists of 82,944 compute nodes. Each compute node has a SPARC64 VIIIfx processor (2.0 GHz, eight cores) and 16 GB of main memory, and is connected to a six-dimensional mesh/torus network. We used up to 1536 CPU cores (128 nodes) and 49,152 CPU cores (6144 nodes) to measure the scalability of GHOST-MP on TSUBAME 2.5 and the K computer, respectively. Sequence Similarity Search with Indexes Based on Suffix Arrays GHOST-MP uses the GHOSTX [15] search algorithm for sequence similarity search. The GHOSTX program achieves more than 100-fold acceleration over the BLASTX algorithm, albeit with a slight decrease in search sensitivity. Briefly, the algorithm uses suffix arrays [22] as an index to accelerate the search for alignment candidates. The suffix array data structure is a sorted array of indexes of all the suffixes of a string in lexicographical order. The suffix array can be used with a binary search to find all suffixes matching the query string, and is a data structure widely used in biological sequence searches [23]. Binary searches on the suffix array produce efficient enumeration of all intervals in the suffix array that start with each letter representing an amino acid type. We can recursively apply the same procedure for subsequent letters to narrow down the intervals. Moreover, the GHOSTX algorithm uses an additional data structure (an array of ranges of the same fixed length prefixes of the suffixes in the suffix array) to avoid several initial steps in the binary search. These data structures make it possible to obtain intervals as alignment candidates by filtering out dissimilar intervals in terms of a substitution matrix score. The main search algorithm consists of the following steps: (1) search for alignment candidates with the suffix array; (2) perform ungapped extension of the candidates; (3) filter out overlapping candidates; and (4) perform gapped extension of the candidates. Query nucleotide sequences are treated as amino acid sequences throughout the search procedure, with translations over six possible reading frames for a sensitive search with an amino acid substitution matrix. Hierarchical Parallelization of the Sequence Similarity Search with Data Parallelism GHOST-MP adopts a two-level hierarchical parallelization. 
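Before turning to the parallelization just introduced, the suffix-array index underlying the GHOSTX search described in the preceding subsection can be illustrated with a minimal sketch: build the lexicographically sorted array of suffix positions of a concatenated database string, then binary-search it for the interval of suffixes that begin with a query seed. This is a toy Python illustration, not the GHOSTX implementation (which works on translated amino-acid sequences with substitution-matrix scoring and an auxiliary fixed-length prefix table); the function names and the naive O(n² log n) construction are mine.

```python
# Toy sketch of the suffix-array seed search described above
# (not the GHOSTX implementation; construction here is naive).
from bisect import bisect_left, bisect_right

def build_suffix_array(text):
    """Positions of all suffixes of `text`, sorted lexicographically by suffix."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def seed_positions(text, sa, seed):
    """Binary-search the suffix array for the interval of suffixes starting with `seed`."""
    prefixes = [text[i:i + len(seed)] for i in sa]    # prefixes inherit the sorted order
    lo, hi = bisect_left(prefixes, seed), bisect_right(prefixes, seed)
    return sorted(sa[lo:hi])                          # database positions of candidate seeds

database = "MKVLLAAGMKVTRP"        # stand-in for the concatenated database sequence
sa = build_suffix_array(database)
print(seed_positions(database, sa, "MKV"))            # -> [0, 8]
```

In the real tool, each such interval is a set of alignment candidates that is then filtered by substitution-matrix score and extended, as outlined in the four search steps above.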
The sequence similarity search is parallelized by MPI at the inter-node level and by OpenMP at the intra-node level. The original GHOSTX algorithm only provides a parallel similarity search with OpenMP. Thus, we could not execute GHOSTX on distributed-memory systems (inter-node), which account for a large portion of current supercomputers in use, because OpenMP only provides parallelization on shared-memory systems (intra-node). Thus, GHOSTX could not take advantage of the large computational power of supercomputers. The hierarchical parallelization has two advantages compared with MPI-only parallelization, described as follows. (1) Hierarchical parallelization largely reduces the memory use of worker processes because it enables the sharing of common data in intra-processes, such as database sequences. The size of database sequences and their index often exceed the memory size in massively parallel environments. Index size is the product of the length of the concatenated database sequence and the size of the index pointing to the corresponding position in the concatenated database sequence. For example, when KEGG GENES (3.5 GB, released July 2012) is used as an amino acid sequence database, the total size of the database sequence and its suffix array with auxiliary data is approximately 20 GB. If we use MPI for both inter-and intra-node parallelization, each individual process, even those within the same computing node, has to store the same database. In current massively parallel computing systems, nodes rarely have sufficient memory to store multiple copies of the database and its index. To reduce memory use, it is possible to split the database by assigning different partitions to each intra-node process. However, searching by this approach is inefficient for two reasons. First, splitting the database requires an additional merging step to combine the most similar hits for the same query sequence. Second, searching for alignment candidates with a split database requires more CPU time than searching with an unsplit database because the search time for alignment candidates with a suffix array is proportional to the logarithm of the database size. (2) Hierarchical parallelization can also lead to scalable parallel searching. Since the communication between the master and the workers involves MPI point-to-point communication, parallel searches with a smaller MPI process reduce the number of communications sent from the workers to the master compared with searches in nonhierarchical parallelization (MPI-only parallelization). Details of this two-level hierarchical parallelization of GHOST-MP are as follows. At the inter-node level, GHOST-MP adopts a master-worker model. Communication between the master process and the worker process is implemented with MPI. Firstly, query sequences are split into the same number of chunks as the number of worker processes. The master process assigns a sequence chunk to each worker process as a task. At the intra-node level, similarity searches are parallelized with OpenMP. Query sequences in a chunk are subdivided into more specific tasks. These subdivided tasks are put into a queue, and each OpenMP thread sequentially dequeues a task from the queue using a lock. Finally, each worker process writes search results to a clustered file system and reports results to the master process ( Figure 6). 
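A rough analogue of this two-level layout can be written with the Python standard library, using worker processes in place of MPI ranks and a thread pool inside each worker in place of OpenMP threads. It is only a conceptual sketch of the master/worker task division, not GHOST-MP itself: the chunking rule, the helper names, and the trivial per-query "search" are placeholders.

```python
# Conceptual sketch of the two-level parallel layout described above, using
# processes as stand-ins for MPI worker ranks and threads for OpenMP threads.
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def search_one_query(query, database):
    # Placeholder for the real suffix-array similarity search of one query.
    return (query, database.count(query))

def worker_search_chunk(chunk, database, n_threads=4):
    # Intra-node level: threads in one worker share a single copy of the
    # database, dequeuing queries much like the OpenMP task queue.
    with ThreadPoolExecutor(max_workers=n_threads) as threads:
        return list(threads.map(lambda q: search_one_query(q, database), chunk))

def master(queries, database, n_workers=2):
    # Inter-node level: the master splits the queries into one chunk per
    # worker process and collects the results it is sent back.
    chunks = [queries[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as workers:
        per_chunk = list(workers.map(worker_search_chunk, chunks, [database] * n_workers))
    return [hit for chunk_hits in per_chunk for hit in chunk_hits]

if __name__ == "__main__":
    print(master(["MKV", "TRP", "LLA"], "MKVLLAAGMKVTRP"))
```

Even in this toy, the memory argument made above is visible: each worker process holds one copy of the database that all of its threads share, rather than one copy per thread.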
Conclusions We have developed GHOST-MP, a massively parallel sequence similarity search tool for metagenomic data to allow for large-scale analyses of metagenomic data. GHOST-MP uses a search algorithm with suffix arrays, and its parallel similarity search procedure is implemented using a two-level hierarchical model. GHOST-MP achieved over 80-fold acceleration compared with mpiBLAST, and exhibited almost a linear increase in speed with an increase in the number of CPU cores on TSUBAME 2.5. GHOST-MP also scaled well to over 10,000 CPU cores on the K computer. The fast search ability of GHOST-MP enabled us to perform a large-scale sequence similarity search of 381 human oral microbiome metagenomic sequencing data (18 billion reads) against the whole database of KEGG GENES (8.6 million amino acid sequences) on a massively parallel computing system. This massive data analysis indicated characteristics of functional gene in the microbial community of three human oral parts. Supplementary Materials: Supplementary materials can be found at www.mdpi.com/1422-0067/18/10/2124/s1. Acknowledgments: This research study used the computational resources of the K computer, provided by the RIKEN Advanced Institute for Computational Science through the HPCI System Research Project (Project ID: hp120311, hp130017, hp140230). This work was partially supported by Core Research for Evolutional Science and Technology (CREST) "Extreme Big Data" (Grant number JPMJCR1303) and the Research Complex Program "Well-being Research Campus: Creating new values through technological and social innovation" from Japan Science and Technology Agency (JST). The authors thank Masahito Ohue and Kota Goto (Tokyo Institute of Technology) for their helpful discussions on heterogeneous parallelization mechanisms and other software implementation issues on GHOST-MP. Author Contributions: Masanori Kakuta developed the hierarchical parallelization, carried out experiments, and analyzed the data.
Shuji Suzuki developed the sequence similarity search and analyzed the data. Takashi Ishida developed the hierarchical parallelization and analyzed the data. Masanori Kakuta, Takashi Ishida and Kazuki Izawa wrote the main text. Yutaka Akiyama designed the study, analyzed the data, and helped to draft the manuscript. All authors approve the publication of this version. Conflicts of Interest: The authors declare no conflict of interest.
2017-10-23T12:56:56.348Z
2017-10-01T00:00:00.000
{ "year": 2017, "sha1": "08e9b1ab4d6c07ed9f1200ab509cb8c8ec75d879", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/18/10/2124/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "08e9b1ab4d6c07ed9f1200ab509cb8c8ec75d879", "s2fieldsofstudy": [ "Computer Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
31291562
pes2o/s2orc
v3-fos-license
Switching Signal Design for Global Exponential Stability of Uncertain Switched Neutral Systems The switching signal design for global exponential stability of switched neutral systems is investigated in this paper. LMI-based delay-dependent and delay-independent criteria are proposed to guarantee the global stability via the constructed switching signal. Razumikhin-like approach is used to find the stability results. Finally, some numerical examples are illustrated to show the main results. Introduction It is well known that the existence of delay in a system may cause instability or bad system performance in control systems.Time-delay phenomenon appears in many practical systems, such as AIDS epidemic, aircraft stabilization, chemical engineering systems, inferred grinding model, manual control, neural network, nuclear reactor, population dynamic model, rolling mill, ship stabilization, and systems with lossless transmission lines.Hence stability analysis for time-delay systems has been considered in the recent years 1-3 .Neutral systems are described by functional differential equations which depend on the delays of state and state derivative.Some practical examples of neutral systems include distributed networks, heat exchanges, and processes including steam 4 . Switched system is a class of hybrid systems which is consisting of several subsystems and uses the switching signal to specify which subsystem is activated to the system trajectories at each instant of time.Some examples for switched systems are automated highway systems, constrained robotics, power systems and power electronics, transmission and stepper motors 5 .Stability analysis of switched time-delay systems has been an attractive research topic 6-13 .It is interesting to note that the stability for each subsystem cannot imply that of the overall system under arbitrary switching signal 9 .Another interesting fact is that the stability of a switched system can be achieved by choosing the switching signal even when each subsystem is unstable 6, 7, 10 .In this paper, the switching signal design will be considered for uncertain switched neutral systems with mixed delays.The switching signal will be proposed to guarantee the stability of switched system even when each subsystem is unstable.Based on Razumikhin-like approach 11 , delaydependent and delay-independent results are provided.New and flexible LMI conditions are proposed to design the switching signal which guarantees the global exponential and asymptotic stability of uncertain switched neutral systems.Some numerical examples are provided to demonstrate the use of our results. The notation used throughout this paper is as follows.For a matrix A, we denote the transpose by A T , spectral norm by A , symmetric positive negative definite by A > 0 A < 0 , maximal eigenvalue by λ max A , and minimal eigenvalue by λ min A .A ≤ B means that matrix B − A is symmetric positive semidefinite.For two sets X and Y , X − Y means that the set of all points in X that are not in Y .For a vector x, we denote the Euclidean norm by x and x t s sup −H≤θ≤0 x t θ 2 ẋ t θ 2 .I denotes the identity matrix.R n denotes n-dimensional real space. Problem Formulations and Main Results Consider the following switched neutral system with mixed time delays: where x ∈ R n , x t is state at time t defined by x t θ : x t θ , ∀θ ∈ −H, 0 , σ is a switching signal which is a piecewise constant function and may depend on t or x, σ, taking its values in the finite set {1, 2, . . 
., N}, and time-varying delay satisfies 0 ≤ h t ≤ h M , ḣ t ≤ h D , h M > 0, τ > 0, H max{h M , τ}.Matrices D, A 0i , and A 1i ∈ R n×n , i 1, 2, . . ., N, are constant, and the initial vector φ ∈ C 1 , where C 1 is the set of differentiable functions from −H, 0 to R n .Now we define some functions λ i t , i 1, 2, . . ., N, that will be used to represent our system: 2.2 The switched system in 2.1 can be rewritten as follows: where λ i t is defined in 2.2 and N i 1 λ i t 1, ∀t ≥ 0. Lemma 2.1 see 14 .Let U, V , W, and M be real matrices of appropriate dimensions with M satisfying M M T , then if and only if there exists a scalar ε > 0 such that Since F is Hurwitz, there exist positive definite matrices P and Q satisfying Define some domains From the similar proof of 7 , it is easy to show N i 1 Ω i R n .Construct some domains 2.8 We can obtain N i 1 Ω i R n and Ω i ∩ Ω j Φ, i / j, where Φ is an empty set.If Assumption 2.3 is satisfied, then the following results can be derived: Define the following switching function: 2.10 Definition 2.4 see 14 .The system 2.1 with the designed switching signal is said to be the globally exponentially stabilizable with convergence rate α > 0 by the designed switching signal, if there are two positive constants α and Ψ such that Now we present a result to design the switching signal that guarantees global exponential stability of system 2.1 .Theorem 2.5.Assume that for D < 1, 0 < α < − ln D /τ, 0 ≤ α i ≤ 1, i 1, 2, . . ., N, and N i 1 α i 1, there exist some n × n matrices P, Q, R 1 , R 2 > 0, such that the following LMI conditions hold for all i 1, 2, . . ., N: where 2.13 Then the system 2.1 is globally exponentially stabilizable with convergence rate α by the switching signal given in 2.10 . Proof.Define the Lyapunov functional 2.14 where P, R 1 , R 2 > 0. The time derivatives of V x t along the trajectories of system 2.3 under the switching function 2.10 satisfy 2.15 By the condition 2.9 and switching function 2.10 , we obtain where Ξ i , i 1, 2, . . ., N, are defined in 2.12 , X T x T t x T t − τ x T t − h t .From 2.16 with Ξ i < 0, we have 2.17 where Consider the following uncertain switched neutral system with mixed time delays: where ΔA 0i t and ΔA 1i t are some perturbed matrices and satisfy the following condition: where M i , N 0i , and N 1i , i 1, 2, . . ., N, are some given constant matrices with appropriate dimensions, and F i t , i 1, 2, . . ., N, are unknown matrices representing the parameter perturbation which satisfy The uncertain switched system in 2.22a -2.22c can be rewritten as follows: where λ i t is defined in 2.2 and N i 1 λ i t 1, ∀t ≥ 0. Now we consider the exponential stability for uncertain switched system 2.22a -2.22c . , N, and such that the following LMI conditions hold for all i 1, 2, . . ., N: 2.25 where Ξ jki , j, k 1, 2, 3, are defined in 2.12 Then the system 2.22a -2.22c is globally exponentially stabilizable with convergence rate α by the switching signal given in 2.10 . Proof.The time derivatives of V x t in 2.14 along the trajectories of system 2.22a -2.22c under the switching function 2.9 satisfy where By Lemmas 2.1 and 2.2, the condition Ξ i < 0 in 2.25 is equivalent to Ξ i < 0. By the same derivation of Theorem 2.5, this proof can be completed. If we choose the convergence rate α 0, we can obtain the following delayindependent condition for the global asymptotic stability of system 2.22a -2.22c ., N, and N i 1 α i 1, there exist constants ε i > 0, i 1, 2, . . 
., N, some n × n matrices P, Q, R 1 , R 2 > 0, such that the following LMI conditions hold for all i 1, 2, . . ., N: 2.30 Then the system 2.22a -2.22c is globally asymptotically stabilizable by the switching signal given in 2.10 . If D 0, Corollary 2.7 can be reduced to the following corollary. 2.32 Then the system 2.22a -2.22c is globally asymptotically stabilizable by the switching signal given in 2.10 . Assumption 2.9.Assume that there exists a convex combination F N i 1 α i A 0i , some positive definite matrices P and Q, some matrices S i , i 1, 2, . . ., N, such that where 0 ≤ α i ≤ 1 and N i 1 α i 1. Define some domains 2.34 From the similar proof of 7 , it is easy to show N i 1 Ω i R n .Construct some domains 2.35 We can obtain N i 1 Ω i R n and Ω i ∩ Ω j Φ, i / j, where Φ is an empty set.If Assumption 2.9 is satisfied, then the following results can be derived: 2.36 Define the following switching function: 2.40 Then the system 2.22a -2.22c is globally exponentially stabilizable with convergence rate α by the switching signal given in 2.37 . Proof.Define the Lyapunov functional 2.41 where The time derivatives of V x t along the trajectories of system 2.24 satisfy where S i PS i .By the inequality in 1, page 322 , we have 2.43 By system 2.24 and Leibniz-Newton formula, we have 2.44 By the conditions 2.42 -2.44 , we obtain the following result: where 2.46 By Lemmas 2.1 and 2.2, the condition Σ i < 0 in 2.39 is equivalent to Σ i < 0 in 2.45 .From Σ i < 0 and by the similar derivation of Theorem 2.5, the proof can be completed. If D 0, Theorem 2.11 can be reduced to the following corollary. 2.48 Then the system 2.22a -2.22c with D 0 is globally exponentially stabilizable with convergence rate α by the switching signal given in 2.37 . 3.3 Select the switching signal by 3.5 The switching regions Ω 1 and Ω 2 are sketched in Figure 1.The system 2.22a -2.22c with h D 0.2 and 3.1 is globally asymptotically stabilizable by the switching signal 3.4 .Some comparisons are made in Table 1.The result of this paper provides a major improvement to guarantee the global asymptotic stability of system 2.22a -2.22c with 3.1 .For the given feedback control 3.9 , system 3.7 can be rewritten as ẋ t A 0σ x t A 1σ x t − h t , t ≥ 0, 3.10 where A 1σ B σ K σ .As shown in Table 3, the results obtained in this paper provide larger allowable time delay bounds guaranteeing the global stability of system 3.7 with 3.9 by switching signal 2.37 .In 7, 10 , the convex combination parameters are chosen by α 1 1/3 and α 2 2/3.The convex combination parameters of our results are chosen by α 1 0.1 and α 2 0.9. Conclusions In this paper, the switching signal design for global exponential stability of uncertain switched neutral systems with mixed time delays has been considered.LMI and Razumikhinlike approaches are used to derive delay-dependent and delay-independent stability criteria.The results obtained in this paper are less conservative than the previous ones for the numerical examples investigated in this paper. Example 3 . 1 . Consider the system 2.22a -2.22c and the following parameters: Table 1 : Comparison with other previous results. Table 2 : Comparison with other previous results.By Corollary 2.13, some comparisons with the obtained results for switched system 2.22a -2.22c with 3.6 are made in Table2.The results of this paper provide a larger allowable upper bound for time delay to guarantee the global asymptotic stability of system 2.22a -2.22c with 3.6 by the switching signal 2.37 . 
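The convex-combination condition invoked throughout this section (Assumption 2.9 and the related Hurwitz requirement on F) can be restated in standard notation as follows. This display is an assumed reconstruction, not the paper's original numbered equations: the Lyapunov equation shown is the standard form implied by "Since F is Hurwitz, there exist positive definite matrices P and Q satisfying ...".

```latex
% Assumed reconstruction of the convex-combination condition described in the text.
\[
  F \;=\; \sum_{i=1}^{N} \alpha_i A_{0i},
  \qquad 0 \le \alpha_i \le 1,\quad \sum_{i=1}^{N} \alpha_i = 1,
\]
\[
  F \ \text{Hurwitz} \;\Longrightarrow\; \exists\, P = P^{\mathsf T} \succ 0,\ Q = Q^{\mathsf T} \succ 0:
  \quad F^{\mathsf T} P + P F = -Q .
\]
```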
Table 3: Comparison with other previous results. Allowable time-varying delay bounds retaining global asymptotic and exponential stability of the system 3.7 with 3.9.
2017-08-17T09:52:35.253Z
2009-08-27T00:00:00.000
{ "year": 2009, "sha1": "40f4d4b6d062895792df94c92dd5909b26c4d41f", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/mpe/2009/191760.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9eabc3d519b5cf8afb1653e536ad1ed1bd15232c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
8143500
pes2o/s2orc
v3-fos-license
A Novel Anti-Inflammatory Role for Ginkgolide B in Asthma via Inhibition of the ERK/MAPK Signaling Pathway Ginkgolide B is an anti-inflammatory extract of Ginkgo biloba and has been used therapeutically. It is a known inhibitor of platelet activating factor (PAF), which is important in the pathogenesis of asthma. Here, a non-infectious mouse model of asthma is used to evaluate the anti-inflammatory capacity of ginkgolide B (GKB) and characterize the interaction of GKB with the mitogen activated protein kinase (MAPK) pathway. BALB/c mice that were sensitized and challenged to ovalbumin (OVA) were treated with GKB (40 mg/kg) one hour before they were challenged with OVA. Our study demonstrated that GKB may effectively inhibit the increase of T-helper 2 cytokines, such as interleukin (IL)-5 and IL-13 in bronchoalveolar lavage fluid (BALF). Furthermore, the eosinophil count in BALF significantly decreased after treatment of GKB when compared with the OVA-challenged group. Histological studies demonstrated that GKB substantially inhibited OVA-induced eosinophilia in lung tissue and mucus hyper-secretion by goblet cells in the airway. These results suggest that ginkgolide B may be useful for the treatment of asthma and its efficacy is related to suppression of extracellular regulating kinase/MAPK pathway. Introduction Asthma is a complex disease characterized by acute and chronic airway inflammation, airway hyper-responsiveness (AHR), eosinophilia and mucus hypersecretion by goblet cells. Many cytokines contribute to this inflammation mediated by T-helper 2 (Th2) cells, which play central roles in the pathogenesis of allergic asthma [1,2]. Interleukin (IL)-5 expressed by Th2 cells, is responsible for eosinophil growth, differentiation, mobilization, recruitment, activation, and survival [3][4][5]. Interleukin (IL)-13 play an important role in T-cell differentiation toward a Th2 phenotype and isotype switching of B cells to immunoglobulin IgE production [6,7]. Interleukin (IL)-13 promotes acute inflammatory processes and underlying structural changes to the airways [8]. Thus, antagonizing the action of Th2-type cytokines represents one of the major new therapeutic strategies in the treatment of bronchial asthma. The morbidity and mortality of asthma appear to be increasing, and it has been suggested that medications used to treat asthma, that is Chinese Materia Medica, are contributing to this trend. Ginkgo biloba has been used as an herb in traditional Chinese medicine for thousands of years. Ginkgolide B (GKB), the major active component of G. biloba extracts, is a known inhibitor of platelet activating factor (PAF), which is important in the pathogenesis of asthma [9]. GKB primarily induces activation of intracellular signaling events and has the potential to prime cellular functions such as PMN defense activities [10], and induces apoptosis via activation of c-Jun N-terminal kinase (JNK) and p21-activated protein kinase 2 in mouse embryonic stem cells [11]. Ginkgolides offer a desirable approach for this due to their low toxicity [11]. Moreover, Tosaki A et al. showed that G. biloba extract can improve contractile function after global ischemia in the isolated working rat heart by reducing the formation of oxygen free radicals [12]. The mitogen activated protein kinases (MAPKs) are evolutionary conserved enzymes which play a key role in signal transduction mediated by cytokines, growth factors, neurotransmitters and various types of environmental stresses. 
The MAPK family includes three distinct stress-activated protein kinase pathways: p38, JNK, and extracellular regulating kinase (ERK) [13]. It has been reported that inhibition of the MAPK signalling pathway in lung inflammatory cells (e.g., mast cells) may have therapeutic potential in the treatment of allergic diseases such as asthma [14]. Based on studies investigating the effect of GKB, however, no available study has been done in a mouse model of allergic airway inflammation, so we focused on investigating whether GKB possesses a distinct anti-inflammatory activity on a non-infectious mouse model of asthma, and elucidated the involvement with MAPK pathway for the first time. GKB Reduces Ovalbumin-induced Bronchoalveolar Lavage Fluid T Helper Type 2 Cytokine Levels Th2 cytokines levels in the bronchoalveolar lavage were measured by a sandwich ELISA. The concentrations of IL-5 and IL-13 were increased in OVA-immunized samples compared to control mice ( Figure 1). Treatment with GKB caused a reduction in the levels of IL-5 and IL-13 compared to ovalbumin-immunized mice ( Figure 1). GKB Reduces OVA-Induced Serum Levels of OVA-specific IgE OVA-induced serum levels of OVA-specific IgE were analyzed by a sandwich enzyme-linked immunosorbent assay. OVA-immunized mice treated with a vehicle had high levels of serum anti-OVA IgE antibodies compared to control mice ( Figure 2). A significant reduction in OVA-specific IgE antibodies was observed in mice treated with GKB ( Figure 2). GKB Reduces OVA-Induced Bronchoalveolar Lavage Fluid (BALF) Inflammatory Cell Recruitment The total cell counts and differential cell counts in the BALF were evaluated 24 h after the last OVA challenge. As shown in Figure 3, OVA-immunized mice treated with a vehicle had higher levels of eosinophils, neutrophils, and macrophages compared to the control group. However, GKB significantly decreased the number of eosinophils, neutrophils, and macrophages ( Figure 3). Figure 3. Effects of ginkgolide-B on the recruitment of inflammatory cell in BALF. The lavage fluid was centrifuged, and the cell pellets were resuspended and applied to a slide by cytospinning to obtain differential cell counts by staining with a modified Giemsa method. The values represent the means ± SEM of three independent experiments. GKB = ginkgolide-B ( ## p < 0.01 vs. control group mice, * p < 0.05, ** p < 0.01 vs. OVA-challenged mice). Effects of GKB on OVA-Induced Airway Hyper-Responsiveness To investigate the effect of GKB on AHR in response to increasing concentrations of methacholine, we measured both RI and Cdyn in mechanically ventilated mice. OVA-challenged mice developed AHR, as was typically reflected by a high RI and low Cdyn ( Figure 4). GKB treatment significantly reduced RI and restored Cdyn in OVA-challenged mice in response to methacholine ( Figure 4). Effects of GKB on OVA-Induced Airway Goblet Cell Hyperplasia and Mucus Production To evaluate the effect of GKB on airway inflammation, airway goblet cell hyperplasia and mucus production. We stained lung tissues with haematoxylin-eosin ( Figure 5) and alcian blue-periodic acid-Schiff ( Figure 6) staining solutions to examine the inhibitory effect of GKB on the histological change in the OVA-induced asthma model. Total cellular proteins from lung were analyzed by Western blot with specific antibodies. Experiments were repeated three times and similar results were obtained, n = 10 mice per treatment group ( # p < 0.05, ## p < 0.01 vs. control group mice, ** p < 0.01 vs. 
OVA-challenged mice). Effects of GKB on Activation of p38, ERK and JNK Our data showed that p38, ERK, and JNK were activated after the last OVA challenge. Treatment with GKB significantly inhibited the activation of ERK/MAPK compared to OVA-challenged mice. However, there was no significant change in p-JNK and p-p38 between OVA-challenged mice and the group treated with GKB ( Figure 7). These results showed that GKB exert its anti-inflammatory actions via inhibition of ERK/MAPK signaling pathway. Discussion Allergic asthma is a chronic airway inflammation disease. In most asthma phenotypes, increases in eosinophil levels are observed in the tissues, blood, and bronchoalveolar lavage fluid. Furthermore, Th2 cells and their secreted products aggregate into airway and lung tissues. In addition, high serum levels of immunoglobulin E (IgE), persistent airway AHR, and goblet cell hyperplasia are observed. Ginkgolide B is a component of traditional Chinese herbal medicines. It improves cardiac function after ischaemia in both non-preconditioned and preconditioned non-diabetic and diabetic rats [15]. The combinations of Ginkgo biloba leaf extract (EGb761) plus the carotenoid antioxidant astaxanthin (ASX) and vitamin C are evaluated for summative dose effect in inhibition of asthma associated inflammation in asthmatic guinea pigs [16]. Tosaki et al. demonstrated that the combination of superoxide dismutase (SOD), catalase and EGB 761 may synergistically reduce the formation of free radicals and the incidence of reperfusion-induced VF and VT [17]. However, this is the first time the anti-inflammation and AHR-inhibiting effect of GKB is demonstrated in a mouse model of bronchial asthma. We also investigated the association with the MAPK pathway. Our results suggest that GKB may be used as a therapeutic reagent for patients with allergic airway inflammation. Th2 cells are essential for the pathogenesis of asthma. Numerous studies have established a critical function for the Th2 cytokines IL-5 and IL-13 in the asthmatic response. The growth, activation, and survival of eosinophils are associated with IL-5 [18,19], and with the help of IL-13, it regulates eosinophil trafficking into sites of inflammation [20,21]. IL-13 control eosinophil trafficking directly by up-regulating adhesion molecules on endothelial cells [22] or by inducing chemokine expression in the airway. In the present study, the expression of IL-5 and IL-13 in lung, which was measured by a sandwich ELISA in the OVA group, was increased compared to the control group. In contrast, pretreatment with GKB resulted in a significant reduction of IL-5 and IL-13 in lung tissues. Recent studies have demonstrated that airway inflammation is a major contributing factor to the pathogenesis and pathobiology of allergic asthma. The levels of airway inflammation often correlate with the severity of clinical symptoms, the degree of airway obstruction, and AHR. Anti-inflammatory therapies are central to long-term asthma management. Treatment strategies aimed at normalizing surrogates of airway inflammation (e.g., sputum eosinophils and AHR) have better outcomes than solely treating the symptoms or improving lung function [23,24]. Our data demonstrated that GKB inhibited OVA-induced AHR resulting in inhaled methacholine. Meanwhile, eosinophilia aggregation into tissue was also inhibited by GKB. IL-13 has been shown to induce AHR in mouse models of asthma [25]. IL-5-mediated eosinophilia contributes to AHR by generating cytotoxic products [26]. 
Therefore, the inhibition of AHR by GKB may be associated with the reduction of Interleukin IL-5 and IL-13 production and the eosinophilia aggregation into the lungs. In animal models, OVA challenges induced a significant increase in the total serum IgE and BALF IgE [27,28]. Our data showed that the serum concentration of IgE was significantly reduced in allergic mice after GKB administration. This result suggests that GKB has an effect on the allergic asthma that developed in an IgE-dependent manner. MAPKs are highly conserved, eukaryotic signal transducing enzymes that respond to environmental stresses, as well as to plasma membrane receptor stimulation, by regulating key molecular targets, up to the transcriptional machinery in the nucleus. This enzyme family includes several subgroups such as JNK, ERK and p38. ERK signaling pathway is activated upon ligation of T cell receptor in T cells, B cell receptor in B cells, and FcεRI in mast cells, leading to proliferation, differentiation, cytokine production, and degranulation [29][30][31][32]. ERK activity in the lungs of asthmatic mice was significantly higher as compared with normal mice [33]. Duan et al. reports that regulation of ERK signaling pathway could modulate allergic airway inflammation [34]. In trying to understand the mechanisms by which GKB elicits its salutary effects, we investigated the effect of GKB on the MAPK. Western blot analysis showed that GKB markedly attenuated OVA-induced tyrosine phosphorylation of ERK1/2. However, there was no significant change in phosphorylation of JNK and p38 between OVA-challenged mice and the group treated with GKB. Our results showed that ERK signaling pathway plays an important role in the anti-inflammatory property of GKB in asthma model. Animals Female BALB/c mice, weighing approximately 16 to 18 g, were purchased from the Center of Experimental Animals of Baiqiuen Medical College of Jilin University (Jilin, China). Mice were housed for 2-3 days to adapt them to the environment before experimentation. The mice were housed in micro-isolator cages and received food and water ad libitum. The laboratory temperature was 24 ± 1 °C, and relative humidity was 40-80%. All animal experiments were performed in accordance with the guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health. Reagent The IL-5 and IL-13 ELISA kits were purchased from Biolegend (California, USA). Ovalbumins (Grade δ) were purchased from Sigma-Aldrich (St. Louis, MO, USA). GKB (purity >98%, Figure 8) was purchased from National Institute for the Control of Pharmaceutical and Biological Products (Beijing, China). Phospho-specific antibodies for ERK1/2, p38 and JNK as well as antibodies against ERK1/2, p38, JNK and β-actin proteins were obtained from Cell Signaling Technologies (Beverly, MA, USA). Peroxidase-conjugated Affinipure Goat Anti-Mouse IgG (H+L) and Peroxidase-conjugated Affinipure Goat Anti-Rabbit IgG (H+L) were purchased from PTG (Chicago, IL, USA). The ELISA kits for IgE were purchased from R&D (Anniston, AL, USA). The purity of all chemical reagents was at least analytical grade. Antigen Sensitization, Challenge and GKB Treatment Groups of mice (n = 10), receiving the following treatments were studied: (1) sham-sensitization plus challenge with PBS; (2) sensitization plus challenge with OVA; and (3) sensitization plus challenge with OVA and treated with GKB. Groups of 10 animals were used for each experimental condition. 
Mice were sensitized with OVA (20 μg) adsorbed in Imject Alum (100 μg/mL, Pierce, Rockford, IL, USA) by intraperitoneal application on days 0 and 14. On days 25-27, mice were again anesthetized, intranasally challenged with OVA (100 μg) in PBS (50 μL). The negative controls were sham-sensitized and challenged with PBS following the same protocol. GKB (40 mg/kg dissolved in PBS) was administered by intraperitoneal application at 1 h before the OVA challenge on days 25-27. BALF and Serum Collection Mice were anesthetized 24 h after the last OVA challenge and were bled via the brachial plexus to collect the blood samples that were used to estimate IgE production. Ice-cold PBS (0.5 mL) was instilled twice into the lungs, and BAL fluid was collected. Total cell counts were performed using a hemocytometer. The fluid recovered from each sample was centrifuged (4 °C, 3,000 rpm, 10 min) to pellet the cells, and the supernatant was kept at −70 °C until it was used for cytokine measurements. The cell pellets were resuspended in PBS to stain and count the total number of cells using the Wright-Giemsa staining method. At least 200 cells were counted per slide. Cytokine Levels in Lung Tissues The concentrations of cytokine IL-5 and IL-13 in the supernatants of the BALF were measured by sandwich enzyme-linked immunosorbent assay using commercially available reagents according to the manufacturer's instructions. Mouse Anti-OVA IgE ELISA To define serum levels of OVA-specific IgE, an ELISA analysis was carried out using a mouse-specific anti-IgE-antibody. Briefly, microplate wells were coated with 1% OVA in coating buffer (0.05 M sodium carbonate-bicarbonate, pH 9.6) overnight at 4 °C. The wells were then incubated with blocking buffer (1% BSA in PBS, pH 7.2) at room temperature for 1 h and washed. Then, the diluted (1/10) serum samples were introduced to the microplate, which was then incubated at room temperature for 2 h, washed, and incubated with Biotin anti-mouse IgE. The samples were followed by the addition of extravidin-peroxidase at room temperature for 30 min and TMB substrate for 15 min. The enzymatic reaction was stopped with 2 M H 2 SO 4 , and the absorbance was read at 450 nm. Units are reported as the optical density (OD) at 450 nm. Determination of Airway Hyper-Responsiveness Mice were anesthetized, and tracheotomy was performed as described [35]. The internal jugular vein was cannulated and connected to a microsyringe for intravenous methacholine administration. Airway resistance (RI) and lung compliance (Cdyn) in response to increasing concentrations of methacholine were recorded using a whole-body plethysmograph chamber (Buxco, Sharon, CT, USA) as described [23]. RI is defined as the pressure driving respiration divided by flow. Cdyn refers to the distensibility of the lung and is defined as the change in volume of the lung produced by a change in pressure across the lung. Results are expressed as the percentage of the respective basal values. Histological Examination Histopathologic evaluation was performed on mice that were not subjected to BALF. Left lungs were removed by dissection and fixed in 4% paraformaldehyde. Lung tissues were sectioned, embedded in paraffin, and cut at 3 μm. Tissue sections were then stained with hematoxylin and eosin (H&E) for general morphology and AB-PAS (alcian blue-periodic acid-Schiff) for the identification of goblet cells in the epithelium. Western Blot Analysis Tissues were harvested and frozen in liquid nitrogen immediately until homogenization. 
Samples were homogenized in RIPA buffer and lysed for 30 min on ice. Total protein fractionation was performed using a cell lysis buffer for western blot and IP (Beyotime Institute of Biotechnology, China) according to the manufacturer's protocol. Protein concentration was assayed using the Bio-Rad protein kit, and equal amounts of protein were loaded into wells on a 10% sodium dodecyl sulphate (SDS)-polyacrylamide gel. Subsequently, proteins were transferred onto polyvinylidene difluoride (PVDF) membranes, blocked overnight with 5% (wt/vol) nonfat dry milk, and probed according to the method described by Towbin et al. [36]. with specific antibodies against JNK, ERK1/2, p38, β-actin antibodies, phospho-specific antibodies to JNK, ERK1/2, p38 in 5% (wt/vol) BSA dissolved in TTBS. With the use of a peroxidase-conjugated secondary anti-mouse or anti-rabbit antibody, bound antibodies were detected by ECL plus (GE Healthcare Buckinghamshire, UK). Statistical Analysis Data are presented as means ± SEM. One-way ANOVA followed by Dennett test was used to determine significant differences between treatment groups. The critical level for significance was set at p < 0.05. Conclusions Our study demonstrated that GKB may effectively inhibit the increase of Th2 cytokines, such as IL-5 and IL-13 in BALF. In addition, the eosinophil count in BALF was significantly decreased after treatment of GKB compared to the OVA-challenged group. Histological studies demonstrated that GKB substantially inhibited OVA-induced eosinophilia in lung tissue and mucus hyper-secretion by goblet cells in the airway. Taken together, our findings suggest that GKB may effectively inhibit the ERK signaling pathway and may serve as a therapeutic reagent for patients with allergic airway inflammation.
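The group comparison described in the statistical analysis section above (one-way ANOVA followed by a post-hoc comparison of each treated group against the control, presumably Dunnett's test despite the "Dennett" spelling) can be sketched with SciPy as below. The group values are made-up placeholders and scipy.stats.dunnett requires SciPy 1.11 or later; this is an illustration of the analysis, not the authors' script.

```python
# Illustrative sketch of the statistical comparison described above
# (one-way ANOVA, then each treated group vs. control); data are placeholders.
import numpy as np
from scipy import stats

control = np.array([12.1, 10.8, 11.5, 12.6, 11.0])   # e.g. a BALF cytokine readout, control mice
ova = np.array([25.3, 27.9, 24.1, 26.5, 28.2])        # OVA-challenged
ova_gkb = np.array([16.4, 18.0, 15.2, 17.1, 16.9])    # OVA-challenged + GKB

f_stat, p_anova = stats.f_oneway(control, ova, ova_gkb)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Post-hoc comparison of each treated group against the control (Dunnett's test).
res = stats.dunnett(ova, ova_gkb, control=control)
print("Dunnett p-values (OVA, OVA+GKB vs. control):", res.pvalue)
```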
2014-10-01T00:00:00.000Z
2011-09-01T00:00:00.000
{ "year": 2011, "sha1": "0a74861d0be264bd8ded4691d6a0046fe7045cd8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/16/9/7634/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0a74861d0be264bd8ded4691d6a0046fe7045cd8", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
244883686
pes2o/s2orc
v3-fos-license
The impact of radiant heat on various types of plasterboard walls The future of the whole world focuses on reducing waste produced by people. As the construction sector is one of the biggest producers of waste, a great deal of effort has been made to introduce this trend in modern construction. The "green" building sector, therefore, draws attention to natural and recyclable building materials. These include natural thermal insulation such as cork, fiberboard, hemp insulation, and even sheep wool insulation. Almost all types of such insulation are made from waste materials which, were they not reused in the form of thermal insulation, would end up as municipal or biodegradable waste. At the same time, we should point out that almost all "green" construction materials are flammable. This feature is not very advantageous from the point of view of fire protection and it can significantly affect the fire safety of a construction. It is for this reason that the main objective of this research was to determine the impact of a radiant heat source on various types of thermal insulation used as plasterboard filling and to evaluate the possibilities of their use in sandwich constructions for fire protection purposes. Introduction Buildings are an integral part of human existence. We spend almost all our lives in the interior: the average person spends about 80% of his life inside buildings. It is therefore important to protect them against fire and such measures are stipulated in the basic requirements with which each building has to comply. Building structures are subjected to various forces during their service life, such as fire or explosion [1,2,3]. The construction of buildings has undergone a long development in human history, from the oldest cave dwellings to modern skyscrapers hundreds of meters high. Major changes have occurred, not only in terms of the nature and geometry of the buildings, but also in terms of the type and characteristics of the materials used for the construction. Traditional "heavy" building materials were used in the past; nowadays there is a trend for brightening and lightening the structures [4,5]. As an example, we can mention the increasing use of plasterboard walls to partition the interior of buildings. The 'green' building industry currently uses many natural or recycled building materials with high fire-heating values, not only for the furnishings but also within the building structures. It follows that the choice of building materials significantly influences the overall behavior of the building in the event of a fire, and it is therefore important to examine the materials and if possible, improve their properties. It is these facts that led us to establish a method and carry out an experimental examination of the impact of a radiant heat source on selected natural thermal insulation built in plasterboard walls [6,7,8,9]. Description of the samples and method of the experiment The first part of the experiment consisted of preparing the samples, 500 x 500 x 120 mm construction. The structure of the samples simulated the composition of a plasterboard dividing wall consisting of one layer of plasterboard on each side. There was thermal insulation filling between the plasterboards. The skeleton structure was made up of CW dry wall steel profiles, 100 mm wide, joined together with screws. Standard plasterboards were used for the test. Mineral wool, wood fibre insulation and sheep wool were used as filling elements. 
Three sets of samples were created and differed from one another in the type of thermal insulation used. Five samples were tested in each set of samples, and an arithmetic mean was obtained from the results. These types of insulation were chosen because each of them represents a different kind of thermal insulation. Mineral wool is a standard type of building material which is manufactured from natural materials of mineral origin, but is basically impossible to recycle. Wood fiber is an environmentally friendly material of plant origin made from wood waste and can be ecologically disposed of after use. Sheep wool is a natural ecological material of animal origin which is also, in principle, waste material; its disposal is eco-friendly [10,11,12,13]. In addition to their origins, these insulation materials also differ in their properties, such as the thermal conductivity coefficient or reaction to fire class (Table 1). The next part of the experiment was the construction of the measuring apparatus. For this purpose, a reverse T-shaped sample holder was created and a thermal radiator was mounted onto it as shown in Fig. 1. As our task was to determine the influence of thermal stress on building materials enclosed in a building structure, an infrared heater (described below) was chosen as the heat source. Holes were made on the supporting structure in order to mount the heater at different distances from the sample. The heater could also be mounted in different positions using screws. The infrared heater consisted of five heating tubes with a combined output of 1,500 W. The temperature recording on the surface and inside the structure was carried out by means of two thermocouples connected to the AHLBORN ALMEMO 2690 measuring apparatus. The first thermocouple was located on the surface of the sample, and the second one inside the sample at a depth of 60 mm from the surface of the sample. Both thermocouples were placed in the center of the heated area of the sample. The experimental method consisted of the following steps: 1. Specimens were mounted into the sample holder. 2. Thermocouples were positioned on the surface of the sample and inside the sample, 60 mm from the surface of the sample. Infrared heater was mounted onto the stand. 3. Infrared heater was switched on and heated to the steady-state heat flow for 20 minutes. After stabilization of the heat flow, the heater was placed 50 mm from the surface of the sample. 4. Sample was heated for 60 minutes. 5. Infrared heater was moved away from the sample. 6. Sample was removed from the sample holder and the upper layer was uncovered. 7. After removing the gypsum board, the reaction between oxygen and the thermal insulation was monitored. Results of Experimental Measurements When comparing the temperatures of the experimentally studied thermal insulation, we found that in the first three minutes the temperature in each thermal insulation was almost identical. After exposing the sample to a heat load for a longer period of time, a higher increase in temperature was observed between the 3rd and 15th minute of the experiment for sheep wool and mineral wool. From the 15th to approximately the 28th minute, the temperature increases for all types of thermal insulation were almost the same. Between the 28th and 60th minute, a significant increase in temperature was observed in the plasterboard samples insulated with sheep wool. This may have happened due to the fact that the thermal insulation burned away and an air pocket was formed (Fig. 
2), which caused the thermocouple recording temperatures inside the thermal insulation to no longer be situated inside the thermal insulation, but in the air pocket. We discuss this in the following paragraph of this contribution. In the experiment, the top layer of the gypsum board changed. The visual changes included: the top layer of plasterboard (paper) burned away and the gypsum started to release bound water, the evaporation of which caused cracks. However, these did not affect the heat transfer inside the structure. After finishing the experiment and uncovering the plasterboard layer on the heated side of the sample, damage to the thermal insulation was detected, which is documented in Fig. 3. Fig. 3. Different types of thermal insulation after exposure to a radiant heat source. As part of the visual monitoring of the impact of radiant heat on different types of thermal insulation, the following conclusions were made. Mineral wool sample was the least degraded, on the other hand, the wood fiberboard sample was the most degraded ( Table 2). In the case of mineral wool, the insulation did not smoulder or burn away. The material was thermally degraded only on the surface. As for the sheep wool sample, the isolation grouting and foul-smelling smoke occurred during the experiment. An air pocket was formed behind the sheathing, which prevented further carbonisation of the insulation. At the end of the experiment, grouting and carbonization did not continue. In the wood fiber insulation experiment, the insulation burned flamelessly and this continued even after the heat source was put away. The burning stopped only after the insulation was immersed in the water. This is particularly significant when considering the possibility of hidden fires, because a fire can spread uncontrollably inside the construction and therefore anywhere throughout a building. Significant smoke was also observed from the wood fiber insulation; in the case of a fire, this could make it difficult for one to navigate out of a building insulated with wood fiber, which would create significant smoke. Conclusion When comparing the thermal properties of synthetic and natural insulating materials, we can see that their properties are at a comparable level and therefore, the choice of insulation material is insignificant from a construction point of view [14,15]. However, this is not true in the context of the fire safety of buildings, as the fire performance of natural and synthetic thermal insulation materials varies significantly. After testing various types of insulation, we conclude that it is possible to use sheep wool as an eco-friendly thermal insulation if it is not a fire separating structure. We do not recommend wood fiber insulation due to the possible uncontrolled hidden spread of a fire. However, if the contractors require wood fiber insulation, in constructions without a fire separation function, it is recommended that separation (insulation) strips be used between the individual wood fiber boards in both the longitudinal and transverse directions so that in the event of a fire, smoldering does not spread among the boards. This work was supported by project 033ŽU-4/2019, "The integration of practical training in the rescue services study."
2021-12-05T16:04:50.976Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "ff1f854e5819873c12e6d32abd78d5cc1e11fd60", "oa_license": "CCBY", "oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2021/21/matecconf_edes2021_00001.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9d02244a4b64fb84c53f0122fed44e09dccfcedd", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [] }
255810694
pes2o/s2orc
v3-fos-license
Alpha-1-antitrypsin interacts with gp41 to block HIV-1 entry into CD4+ T lymphocytes Study of a clinic case reveals that alpha-1-antitrypsin (AAT) deficiency is related to CD4+ T cell count decline and AIDS progression, suggesting that AAT might be an endogenous inhibitor of HIV/AIDS. Previous study shows that AAT inhibits HIV-1 replication in infected host cells and the C-terminus fragment of AAT, VIRIP, interferes with HIV-1 infection. However, it is still unclear whether and how intact AAT inhibits HIV-1 infection. It is also unknown what the mechanism of AAT is and which critical step(s) are involved. In the present study, the C-terminus of AAT (C) was synthesized. C terminus-truncated AAT (ΔAAT) was also prepared by digesting AAT with metalloproteinase. Primary CD4+ T cells were then co-cultured with HIV-1 with the presence or absence of AAT/C/ΔAAT to detect cis-infection of HIV-1. The interaction between AAT/C/ΔAAT and gp120/gp41 was also measured. Meanwhile, HIV-1 reverse transcriptase activity and viral DNA integration were also detected in these lymphocytes. The results demonstrated that AAT and C, not ΔAAT, inhibited HIV-1 entry by directly interacting with gp41. Meanwhile, AAT, C and ΔAAT could not directly interfere with the steps of viral RNA reverse transcription and viral DNA integration. AAT inhibits HIV-1 entry by directly interacting with gp41 through its C-terminus and thereby inhibits HIV-1 infection. Background Human immunodeficiency virus type 1 (HIV-1) is hard to propagate in infected and unaltered whole blood, unless the blood is diluted and active lymphocytes are present [1,2]. Investigations of HIV-1 replication in patients reveal that HIV-1 proliferation is confined in the lymph nodes where the concentration of specific serum constituents is lower [3,4]. These facts suggest that human body produces endogenous inhibitors for HIV and acquired immune deficiency syndrome (AIDS) progression. Further studies show that various components in human blood and tissues are possible candidates [5][6][7]. Among these components, serine protease inhibitors are promising. Studies report that salivary secretary leukocyte protease inhibitor suppresses HIV-1 replication [8,9]. Researchers also focus on alpha-1-antitrypsin (AAT), a 394 amino acid, 52 kDa glycoprotein synthesized in the liver and secreted into the circulation [10][11][12][13][14][15]. AAT is consistently present in the serum of healthy individuals (1.5~3.5 mg/mL), although its concentration can increase several times upon inflammation [16,17]. Study of a clinic case reveals that pre-existing AAT deficiency is associated with accelerated HIV/AIDS progression, which suggests that AAT might be an endogenous HIV/AIDS suppressor [13,18]. The lifespan of HIV can be divided into two major processes, infection and replication [19,20]. HIV-1 infection begins with the interaction between viral gp120/ gp41 and CD4/co-receptors on host cells and ends with the integration of viral DNA into host genome. This process contains several middle steps, including the entry of viral core, reverse transcription of viral RNA and nuclear translocation of viral DNA [21,22]. HIV-1 replication begins with the transcription and translation of viral genes, which lead to the package, budding and release of new virus, which is then processed to become infection-competent virus [23]. 
Our previous study reveals that AAT enters the cytosol of infected CD4+ T cells through LRP1-mediated endocytosis process, where it directly interacts with nuclear factor kB (NF-kB) inhibitor (IkBα) and thereby alters its ubiquitinylation pattern to block NF-kB activation and HIV-1 replication [14,24,25]. Munch's study suggests that the C-proximal fragment of AAT (A.A. 353-372), VIRIP, inhibits HIV-1 infection [12]. However, it is unclear whether and how intact 55KD AAT interacts with gp120-covered gp41on the viral membrane to inhibit HIV-1 infection, due to its big size that might shield its VIRIP domain. It is also unknown whether AAT inhibits HIV-1 infection by targeting on single or multiple steps: viral core entry, viral RNA reverse transcription or viral DNA integration. Moreover, it is still unclear whether C-terminus of AAT is the only functional domain. In the present study, we sought to investigate the accurate mechanism of intact AAT inhibiting HIV infection. The results demonstrate that AAT and synthesized C-terminus fragment of AAT (C), not C-terminustruncated AAT (ΔAAT), inhibits HIV-1 entry into CD4+ T cells. The inhibitory effect is mediated through their direct interactions with gp41. Additionally, AAT, C and ΔAAT did not directly affect the steps of viral RNA reverse transcription and viral DNA integration. Thus, our results clarify how AAT inhibits HIV-1 infection in primary CD4+ T cells. Combined with the previous finding that AAT inhibits HIV-1 replication in infected cells [15,24,25], it is clear that AAT can suppress HIV/AIDS pathogenesis through inhibiting both HIV infection and replication in vitro, which might provide some useful information for HIV/AIDS study and drug development. Reagents Human plasma AAT was obtained from Sigma [purity and quality were analyzed by electrophoresis and mass spectrometry following the methods described below (Additional file 1: Figure S1)]. ΔAAT were prepared by digesting AAT. Briefly, AAT and metalloproteinase from S. aureus (Sigma) were co-cultured at 40:1 (molar ratio) for 3 h at 37°C in 50 mM NH 4 HCO 3 . Next, protease was removed by using HiTrap benzamidine column (Amersham) and then passed through 10KD MWCO spin filter (Millipore) to separate truncated AAT. The truncation of AAT was verified by electrophoresis and mass spectrometry (Additional file 1: Figure S1) and also justified by the loss of ability to bind trypsin and elastase (Sigma) [26,27]. C-terminus of AAT (C) (A.A. 345-384: KGTEAAGAMFLEAIPMSIPPEVKFNKPFVFLMIDQNT KSP) was synthesized in Genscript and the purity and accuracy of peptide was also analyzed by electrophoresis and mass spectrometry (Additional file 1: Figure S1). HIV-1 integrase inhibitor raltegravir (RAL), HIV-1 reverse transcriptase inhibitor emtricitabine (FTC) and HIV-1 entry inhibitor enfuvirtide (ENF) were from Santa Cruz Biotechnology. CD4+ T cell isolation and infection PBMCs were extracted from the whole blood of healthy, HIV-negative donors using Ficoll-Paque-Plus (GE Healthcare) as directed. CD4+ T cells were then isolated from these PBMCs using a CD4+ T cell isolation Kit II (Miltenyi Biotech) as directed. Next, isolated CD4+ T cells were activated and maintained as before [15]. 
To ensure that CD4- and co-receptor-dependent infection (cis-infection) was not confounded by endocytosis of viral particles, an alternative route of HIV infection in some cases (trans-infection) [28], activated CD4+ T cells were cultured for 30 min in conditioned complete medium [complete medium with the endocytosis inhibitor cocktail (5 μg/mL methyl-beta-cyclodextrin, filipin and chlorpromazine)] before and during infection; the cocktail was washed off after 2 h of infection and cells were maintained in normal complete medium when prolonged incubation was needed. ELISA assay for HIV-1 p24 detection To detect cytosolic p24, CD4+ T cells were cultured with HIV-1. These cells were then suspended in Lysis Buffer [50 mM Tris-HCl (pH 7.4), 1 % CHAPS, 250 mM NaCl, 0.5 % Triton X-100, 1 % Igepal CA-630, 1 mM DTT, 1 mM Na3VO4, 1 mM NaF, 1 mM PMSF, 4 mM EDTA, protease inhibitor cocktail (Roche)] and vortexed for 60 s. The mixture was then incubated on ice for 15 min and homogenized with a small-gauge needle by drawing 3 times. After homogenizing, the mixture was centrifuged at 14,000 × g for 10 min at 4°C to collect the supernatant (whole cell proteins containing viral proteins). HIV-1 p24 was detected using the HIV-1 p24 antigen ELISA kit (ZeptoMetrix Corporation) according to the manufacturer's instructions. For HIV-1 replication, the supernatant fluid of the culture system was collected and HIV-1 p24 was detected using the same HIV-1 p24 antigen ELISA kit (ZeptoMetrix Corporation) according to the manufacturer's instructions. HIV-1 RNA detection Supernatant containing HIV-1 viral particles was collected for viral RNA quantitation. Viral RNA was isolated using the QIAamp viral RNA mini Kit (Qiagen). Isolated viral RNA was reverse-transcribed into cDNA using random primers and Superscript III reverse transcriptase (Invitrogen). Quantitative RT-PCR with TaqMan Universal PCR Master Mix (Applied Biosystems) used the following primers and probe: forward primer: 5′-TGGGTACCAGCACACAAAGG-3′ (nt 3696 in HXB2); reverse primer: 5′-ATCACTAGCCATTGCTCTCCAAT-3′ (nt 3850 in HXB2); and probe: ATTGGAGGAAATGAAC-MGB (FAM labeled), at 900 nM (primers) and 250 nM (probe). Quantitative PCR conditions were as follows: 50°C for 2 min (1 cycle), 95°C for 10 min (1 cycle), followed by 60 cycles of 95°C for 15 s and 60°C for 1 min in an ABI 7500 thermocycler (Applied Biosystems). A standard curve was prepared using known concentrations (i.e., copy numbers) of ACH-2 DNA to determine the number of copies of viral RNA present in the cultures. Detection of HIV-1 reverse transcriptase activity Infected CD4+ T cells were suspended in lysis buffer [50 mM Tris (pH 7.4), 500 mM NaCl, 1 % Triton X-100, 2 mM PMSF, 20 % glycerol, 1 mM DTT and protease inhibitor cocktail (Roche)] and vortexed for 60 s. The mixture was incubated on ice for 45 min and homogenized with a small-gauge needle by drawing 3 times. After homogenizing, the mixture was centrifuged at 14,000 × g for 10 min at 4°C to collect the supernatant containing HIV-1 reverse transcriptase. Next, the supernatant was treated with DNase I (Ambion, Inc.) to remove contaminating DNA. The activity of HIV-1 reverse transcriptase was detected with the EnzChek® Reverse Transcriptase Assay Kit (Invitrogen) following the manufacturer's protocol. Protein extraction For whole cell extraction, cells were collected and washed. For viral protein extraction, HIV-1 particles were collected, washed, and concentrated by centrifuging at 100,000 × g for 2 h at 4°C. RIPA Lysis Buffer was then added to the cell or viral pellet and vortexed for 60 s.
The mixture was incubated on ice for 45 min and homogenized with a small gauge needle by drawing 3 times. After homogenizing, the mixture was centrifuged at 14,000 × g for 10 min at 4°C to collect the supernatant (whole cell proteins or viral proteins). For membrane protein extraction, cells were washed with cold PBS and membrane proteins were extracted with membrane protein extraction kit (BioVision) following the protocol manufacture provided. Immunoprecipitation assay Extracted proteins were used to carry out immunoprecipitation assay following our previous protocol with a pre-clearing step [24,25]. Flow cytometry assay Antibodies used were: CD4 allophycocyanin (APC), CXCR4 pacific blue, CCR5 fluorescein isothiocyanate (FITC) and antibody isotope controls were from BD Biosciences. Upon analysis, cells were washed and incubated for 20 min at room temperature in PBS containing 2 % BSA and antibodies or antibody isotope controls. Cells were then collected and washed twice in ice-cold PBS. The samples were analyzed on flow cytometer. For CD4+ T cell infection with HIV-1 NL4.3 (HIV-1 Gag-iGFP), activated CD4+ T cells were infected with GFP-labeled HIV-1 NL4.3 and unbound viruses were then removed by washing three times. GFP in CD4+ T cells was analyzed on flow cytometer to determine HIV-1 entry. Membrane receptor interaction identification For HIV-1 membrane receptor identification, HIV-1 was washed and concentrated by centrifuging at 100,000 × g for 2 h at 4°C. Next, viral particles were incubated with AAT and dithiobis (succinimidylpropionate) (DSP, Thermo Scientific) was added to stabilize the interaction between AAT and HIV-1 membrane protein. Next, the reaction was stopped by adding Tris buffer and viral proteins were then extracted following the whole viral protein extraction protocol described above. For cell membrane receptor identification, cells were incubated with AAT. Excess AAT was washed off and DSP was added to stabilize the interaction between AAT and cellular membrane protein. The reaction was also stopped by adding Tris buffer and the membrane proteins were extracted with membrane protein extraction kit (BioVision) following the provided protocol described above. Isolated cell membrane proteins or viral membrane proteins were incubated with AAT-specific antibody to precipitate proteins that interacted with AAT. Precipitated proteins were separated by SDS-PAGE and specific protein bands were cut and digested with a sequencing grade modified trypsin (Promega) to identify the proteins by peptide mass fingerprinting assay using a high-performance liquid chromatography-mass spectrometry system (HPLC-MS) following the protocol described below. In-gel digestion for peptide mass fingerprinting assay The proteins were separated by SDS-PAGE and the appropriate bands were cut for peptide mass fingerprinting assay following our previous protocol [24]. Digested peptides were analyzed on the HPLC-MS system (Applied Biosystem API QSTAR pulsar I LC/ MS system or Thermo scientific LTQ XL Orbitrap LC/MS system). The proteins were identified by searching the specific mass spectrum in the database (Mascot). In-gel filter-aided sample preparation (FASP) for peptide mass fingerprinting assay Proteins were separated by SDS-PAGE and the appropriate bands were collected for FASP assay following the previous protocol [24]. The peptides were also analyzed on Applied Biosystem API QSTAR pulsar I LC/MS system. The proteins were identified by searching the specific mass spectrum in the Mascot database. 
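Relating to the viral RNA quantitation described above, copy numbers are obtained by reading sample Ct values against the ACH-2 standard curve. The short Python sketch below illustrates that conversion; the standard-curve and sample Ct values are made-up placeholders rather than data from this study, and the linear log10(copies)-versus-Ct model is the usual assumption for absolute qPCR quantitation.

import numpy as np

# Standard curve: known ACH-2 copy numbers vs. measured Ct (placeholder values).
std_copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
std_ct     = np.array([33.1, 29.8, 26.4, 23.0, 19.7])

# Linear model Ct = slope*log10(copies) + intercept, fitted by least squares.
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0   # PCR amplification efficiency

def copies_from_ct(ct):
    """Back-calculate the copy number for a sample Ct using the standard curve."""
    return 10.0 ** ((ct - intercept) / slope)

if __name__ == "__main__":
    print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
    for sample_ct in (21.5, 27.9):               # hypothetical sample Cts
        print(f"Ct {sample_ct} -> {copies_from_ct(sample_ct):.2e} copies/reaction")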
Surface plasmon resonance (SPR) assay Binding of AAT, ΔAAT or the C-terminus fragment of AAT to gp41 and gp120 was detected on a BIAcore 3000 biosensor system (Pharmacia Biosensor AB) using an SPR assay. Briefly, a carboxymethylated CM5 sensor chip was activated with a 1:1 mixture of 0.4 M N-ethyl-N′-(3-dimethylaminopropyl)carbodiimide and 0.1 M N-hydroxysuccinimide. AAT (0.5 g/L in 10 mM NaOAc, pH 5), ΔAAT (0.47 g/L in 10 mM NaOAc, pH 5) or the synthesized C-terminus fragment of AAT (0.05 g/L in 10 mM NaOAc, pH 5) was then immobilized on the sensor chip by amine coupling according to the manufacturer's instructions. Unreacted sites were blocked with 1 M ethanolamine/HCl (pH 8). Control flow cells (blank) were activated and blocked in the absence of AAT, ΔAAT or the synthesized C-terminus fragment of AAT. Flow cells were routinely equilibrated with running buffer (PBS, 0.005 % surfactant P20). Analyte gp41 and gp120 were diluted in the running buffer and allowed to interact with the sensor surface by a 250-s injection. Different concentrations of gp120 and gp41 were injected, each at a flow rate of 10 μL/min at 25°C. Data from duplicate assays were modeled for binding equilibrium. Statistical analysis Every experiment was repeated at least three times with different donors. Analysis of data was performed using Student's t test. P values of ≤ 0.05 were considered significant. Unless specifically stated, the error bars indicate the standard errors of the means (SEM). Studies using human cells/tissues Studies using blood or tissue-derived cells obtained from humans were reviewed and approved by an appropriate institutional review committee. Results A previous study shows that the C-proximal fragment of AAT (A.A. 353-372), VIRIP, inhibits HIV-1 infection [12]. However, it is still unclear whether and how intact AAT inhibits HIV-1 infection and whether VIRIP is the only functional domain. It is also unknown whether AAT inhibits HIV-1 infection by targeting virus entry, viral RNA reverse transcription or viral DNA integration. To address these issues, activated primary CD4+ T cells were pretreated with AAT at different concentrations and then infected with HIV-1 NL4.3 without removing AAT, in the conditioned complete medium containing an endocytosis inhibitor cocktail to exclude possible trans-infection of HIV-1 [28]. After infection, cells were washed and incubated with AAT again (same condition as before infection) to detect HIV-1 replication. The results demonstrated that CD4+ T cells pretreated with AAT produced much less virus than cells that received AAT only after infection, and both of these groups produced less virus than lymphocytes never exposed to AAT (Fig. 1). These results suggest that, besides inhibiting HIV-1 replication in infected host cells, intact AAT might also interfere with HIV-1 infection of uninfected cells. However, it is still unclear which step(s) AAT targets to exert its inhibitory effect on HIV-1 infection. To determine the mechanism of AAT's inhibition, we first investigated whether AAT inhibited HIV-1 infection by blocking viral DNA integration, which is the step closest to HIV-1 replication. CD4+ T cells were therefore incubated in the presence or absence of AAT/ΔAAT/C and then infected with HIV-1 NL4.3 without removing the reagents. Because 0.5-5 g/L AAT inhibits HIV-1 infection without affecting the viability of CD4+ T cells and this concentration is within the range found in the human body (Fig.
1 and Additional file 2: Figure S2) [15,24], 0.5 g/L AAT was used in the remainder of the study. Moreover, 0.5 mg/mL AAT also had no obvious effect on inducing the precipitation of HIV-1 NL4.3 particles (Additional file 3: Figure S3). Meanwhile, relative equivalent amounts of ΔAAT (0.47 g/L) and C (0.05 g/L) were also included to detect their effect on viral DNA integration (Fig. 2). The results demonstrated that CD4+ T cells with AAT pre-treatment had less integrated viral DNA Fig. 1 AAT inhibited HIV-1 replication. Activated primary CD4+ T cells were divided into two aliquots. One aliquot was treated with AAT (0.5, 1, or 5 g/L) for 1 h and then infected with HIV-1 NL4.3 for 2 more hours without removing AAT. After infection, unbound viruses were removed by washing the T lymphocytes for three times and cells were then cultured with AAT as before infection. The other aliquot was directly infected with HIV-1 NL4.3 for 2 h and unbound viruses were then removed by washing three times. Infected T cells were then treated with the presence or absence of AAT. HIV-1 production was detected by measuring HIV-1 p24 (in the supernatant) at different time point in each group Fig. 2 AAT, C and ΔAAT did not directly target on viral DNA integration. a CD4+ T cells were pretreated in the presence or absence of AAT/C/ ΔAAT and then infected by HIV-1 NL4.3 without removing the reagents. Next, cells were washed to remove unbound viruses and reagents and then incubated for 1, 6, 12, 18, 24, or 30 h in the presence or absence of AAT/C/ΔAAT (the same condition as before infection) to isolate genomic DNA. Viral DNA integration was detected by Alu-PCR. b CD4+ T cells were also infected with HIV-1 NL4.3 without AAT/C/ΔAAT pretreatment. Next, cells were incubated in the presence or absence of AAT, C, ΔAAT or HIV-1 integrase inhibitor raltegravir (10 −4 g/L). After 0, 5, 11, 17, 23, or 29 h incubation, DNA was extracted from these CD4+ T cells to detect viral DNA integration. Genomic beta-globin was also detected as the endogenous control. Ct: control group without reagent treatment; RAL: raltegravir treatment (>60 % decrease; p < 0.01). Meanwhile, C-pretreated CD4+ T cells also had less integrated viral DNA than AAT-pretreated CD4+ T cells (>70 % decrease; p < 0.001) (Fig. 2a). If CD4+ T cells were infected with HIV-1 NL4.3 at first and then co-cultured with AAT, C, ΔAAT or HIV-1 integrase inhibitor raltegravir, the results showed that AAT, C and ΔAAT had no obvious effect on viral DNA integration while positive control raltegravir almost completely blocked viral DNA integration without altering the cell viability (>98 % decrease; p < 0.001) (Fig. 2b and Additional file 2: Figure S2). Additionally, the effect of AAT, C and ΔAAT on viral DNA integration was also tested on different tropism of primary HIV-1 isolates, HIV-1 92US714 and HIV-1 91US054 . The results demonstrated that AAT, C and ΔAAT also had similar effect on these HIV-1 isolates (Additional file 4: Figure S4). Thus, these results suggested that AAT and C, not ΔAAT, could inhibit HIV-1 infection of CD4+ T cells. However, the inhibition was not mediated though directly blocking viral DNA integration. Next, we sought to clarify whether AAT blocked HIV-1 infection by directly interfering with viral RNA reverse transcription. CD4+ T cells were therefore treated with AAT/C/ΔAAT and then infected by HIV-1 NL4.3 without removing the reagents. The activity of HIV reverse transcriptase was then measured in CD4+ T cell lysates (Fig. 3). 
The results showed that the activity of HIV-1 reverse transcriptase in AAT-pretreated CD4+ T cells was lower than that in untreated CD4+ T cells (>50 % decrease; p < 0.05), which could be caused by the lower amount of HIV-1 reverse transcriptase entry into the host cells (Fig. 3a). Meanwhile, the activity of HIV reverse transcriptase from C-pretreated CD4+ T cells was also lower than that from AAT-pretreated CD4+ T cells (>50 % decrease; p < 0.005) (Fig. 3a). When CD4+ T cells were infected first and then cultured with AAT, C, ΔAAT or HIV-1 reverse transcriptase inhibitor emtricitabine, the results demonstrated that AAT, C and ΔAAT had no obvious effect on the activity of reverse transcriptase, which confirmed that the lower activity of HIV-1 reverse transcriptase in AAT-pretreated CD4+ T cells was due to the lower amount of HIV-1 reverse transcriptase entry into these cells (Fig. 3b). As the positive control, emtricitabine almost completely blocked the activity of HIV-1 reverse transcriptase without altering the viability of CD4+ T cells (>98 % decrease; p < 0.001) (Fig. 3b and Additional file 2: Figure S2). When activated CD4+ T cells were infected with HIV-1 92US714 or HIV-1 91US054, similar results were also obtained (Additional file 5: Figure S5). Thus, these results suggested that AAT inhibited HIV-1 infection by lowering the amount of virus entry into CD4+ T cells. To confirm that AAT did inhibit HIV-1 entry, HIV-1 NL4.3 with a GFP insertion within the gag region (HIV-1 Gag-iGFP) was used to carry out the assay [30]. As the control for gp120/gp41 and CD4/co-receptor interaction, HIV-1 NL4.3 with a GFP insertion within the gag region and VSV-G replacing of Env (VSV-G-pseudotyped HIV-1 Gag-iGFP) was also included. Since HIV-1 fusion with host cell leads to viral core release into the host cell [18], host cells would obtain GFP signal upon HIV-1 Gag-iGFP infection. Meanwhile, host cells could not obtain any GFP signal from VSV-G-pseudotyped HIV-1 Gag-iGFP because VSV-G-pseudotyped HIV-1 Gag-iGFP did not Fig. 3 AAT, C and ΔAAT did not directly target viral RNA reverse transcription. a CD4+ T cells were pretreated with or without AAT/C/ ΔAAT and then infected by HIV-1 NL4.3 without removing the reagents. Infected CD4+ T cells were then washed to remove unbound viruses and reagents and then incubated for 1, 6, 12, 18, 24, or 30 h in the presence or absence of AAT/C/ΔAAT (the same condition as before infection) to isolate whole cell proteins with viral proteins. The activity of HIV-1 reverse transcriptase in normalized whole extracts was detected following the protocol described in the Materials and Method. b CD4+ T cells were also infected with HIV-1 NL4.3 without pretreatment and then incubated with the presence or absence of AAT, C, ΔAAT or HIV-1 reverse transcriptase inhibitor emtricitabine (10 −3 g/L). After 0, 5, 11, 17, 23, or 29 h' incubation, the whole cell proteins with viral proteins were extracted from these CD4+ T cells and normalized to detect the activity of HIV-1 reverse transcriptase. Ct: control group without reagent treatment; FTC: emtricitabine treatment have gp120/gp41 to interact with host cells and the transinfection of HIV was blocked with the presence of endocytosis inhibitor cocktails in the conditioned culture medium. CD4+ T cells were therefore incubated in the presence or absence of AAT, C, ΔAAT or HIV-1 fusion inhibitor enfuvirtide and then cultured with HIV-1 NL4. 3 Gag-iGFP or VSV-G-pseudotyped HIV-1 Gag-iGFP. 
After infection, resultant cells were collected to detect HIV-1 entry on flow cytometer (the gate information was showed in Additional file 6: Figure S6). As expected, the results revealed that GFP signal was detected only in CD4+ T cells with HIV-1 NL4.3 Gag-iGFP treatment, not VSV-Gpseudotyped HIV-1 Gag-iGFP. AAT significantly inhibited HIV-1 entry (>60 % decrease; p < 0.005) ( Fig. 4a and b). To inhibit HIV-1 entry, AAT could down-regulate CD4, CXCR4 or CCR5 expression on host T lymphocytes. However, after CD4+ T cells were treated with AAT, C or ΔAAT, the expression of these receptors and co-receptors had no obvious change in the whole cell extract (Fig. 5a). Meanwhile, AAT, C and ΔAAT also had no obvious effect on the expression of these receptors on the plasma membrane ( Fig. 5b-d) (the gate information for CD4+ T cells was shown in Additional file 6: Figure S6). As an alternative mechanism, AAT might directly interact with CD4, CCR5 or CXCR4, or other known HIV-1 infection-related proteins on CD4+ T cells to interfere with the interaction between HIV-1 particles and host cells. To test this postulate, CD4+ T cells were incubated with AAT or ΔAAT to precipitate the cytoplasm membrane proteins interacting with AAT or ΔAAT for protein identification by peptide mass fingerprinting assay. The results revealed that precipitated complexes contained low density lipoprotein receptor-related protein 1, ATP-binding cassette protein and solute carrier protein, which are not related to HIV-1 entry in CD4+ T cells [24]. Meanwhile, CD4, CCR5, CXCR4 and other known HIV-1 infection-related proteins were not detected in precipitated complexes (data not shown). As another way to block HIV-1 entry, AAT might directly interact with HIV-1 to block its entry into CD4+ T cells. To test this hypothesis, HIV-1 NL4.3 was incubated with AAT or ΔAAT. Viral membrane proteins interacting with AAT or ΔAAT were then precipitated and identified by a peptide mass fingerprinting assay. The results revealed that precipitated complexes from the AAT/ HIV-1 NL4.3 co-culture system contained gp120 and gp41. However, no viral protein was detected in the precipitated Next, infected CD4+ T cells were collected to detect GFP on flow cytometer. As the negative control, one group of non-infected CD4+ T cells was also analyzed (Non-infec Ct) (a). The mean fluorescence intensity of each group was also plotted from three in-dependent experiments on three different donors (b). Meanwhile, these CD4+ T cells were also collected to extract whole cell proteins. HIV-1 entry into CD4+ T cells were determined by measuring cytosolic HIV-1 p24 (c). Ct: HIV-1 Gag-iGFP-infected CD4+ T cells without reagent treatment. Pseudo: VSV-G-pseudotyped HIV-1 Gag-iGFP-treated CD4+ T cells without reagent treatment; ENF: enfuvirtide treatment; Non-infec Ct: non-infected CD4+ T cells complexes from the ΔAAT/HIV-1 NL4.3 co-culture system (Fig. 6a). Moreover, gp120 and gp41 were also detected in precipitated viral proteins from AAT/HIV-1 NL4.3 by Western blot. However, gp120 and gp41 could not be detected in precipitated viral membrane proteins from ΔAAT/HIV-1 NL4.3 (Fig. 6b). Furthermore, when HIV-1 NL4.3 was co-cultured with AAT or ΔAAT to precipitate virus with gp120 antibody, only AAT was found to directly interact with HIV-1 (Fig. 6c). When AAT, C or ΔAAT was co-cultured with gp120 or gp41 to detect the interaction between AAT/C/ΔAAT and gp120/gp41, the results demonstrated that C and AAT could directly interact with gp41 (Fig. 6d). 
When AAT, C or ΔAAT was immobilized on a carboxymethylated CM5 sensor chip and gp120 or gp41 was then applied to detect the direct interaction by SPR assay, the results revealed that only gp41 interacted with AAT and C (Fig. 7). Therefore, these results together suggested that the Cterminus of AAT, but not other domains, directly interacted with gp41, which might mediate the inhibition of HIV-1 entry. Discussion The study of a clinic case reveals that pre-existing AAT deficiency is associated with accelerated HIV/AIDS progression [13,18]. Studies reveal that AAT inhibits HIV-1 replication [10,11,14,15]. Constitutive expression of AAT inhibits HIV-1 replication by blocking gp160 and p55 processing in cell lines or primary human lymphocytes [31]. Meanwhile, AAT also inhibits HIV-1 replication by blocking the activation of NF-kB [14,15,24,25,32]. In the present study, we found that viral replication in AAT-pretreated CD4+ T cells was much lower than that in CD4+ T cells without AAT pretreatment, suggesting that AAT might exert its inhibitory effect on both HIV-1 infection and replication. However, several critical issues still need to be addressed. In the present study, our results show that AAT and C, but not ΔAAT, directly interact with gp41, which might then inhibit HIV-1 entry into CD4 + T cells. Moreover, AAT/ΔAAT/C did not directly interfere with the steps of viral RNA reverse transcription and viral DNA integration. To our surprise, the activity of HIV-1 reverse transcriptase decreased with the elongation of incubation time of the cells. This might be caused by the degradation of HIV-1 reverse transcriptase in the cytosol of CD4+ T cells, which is a general bioprocess in the cell and also leaves us a good topic to follow in the future. Therefore, these observations eliminate the concerns that 52KD AAT might be too big to directly interact with gp120-covered gp41 on the membrane of HIV-1 viral Fig. 5 AAT, C and ΔAAT had no obvious effect on CD4, CCR5 and CXCR4 expression on CD4+ T cells. CD4+ T cells were incubated in the presence or absence of AAT/C/ΔAAT and then collected to extract whole cell proteins. The expression of CD4, CCR5 and CXCR4 was detected by Western blot. β-actin was detected as the loading control (a). Meanwhile, the membrane level of CD4 (b), CCR5 (c) and CXCR4 (d) was also analyzed on these CD4+ T cells using flow cytometry. Ct: control group without reagent treatment particle. The results also indicate that AAT Cterminus is the only essential functional domain to inhibit HIV-1 infection. Normally, HIV-1 infection begins with the interaction between gp120/gp41 and CD4/co-receptors, which is then followed by viral core entry, RNA reverse transcription and DNA integration into the host genome [19,22]. In the present study, AAT, C and ΔAAT did not directly target the steps of viral RNA transcription and viral DNA integration, which means that AAT inhibits HIV-1 infection through blocking virus entry. Although studies demonstrate that AAT inhibits HIV-1 replication in infected lymphocytes [10][11][12][13][14][15]31], relatively fewer reports investigate whether AAT inhibits HIV-1 infection of Fig. 6 AAT and C directly interacted with gp41. HIV-1 NL4.3 was incubated in the presence or absence of AAT/ΔAAT and then concentrated by ultracentrifugation (100,000 × g for 2 h at 4°C) to extract viral proteins. AAT antibody was then added to viral proteins to precipitate proteins interacting with AAT/ΔAAT. 
Next, each specific protein was identified by peptide mass fingerprinting assay (a). Meanwhile, after concentrating, the virus was also divided into two aliquots. One aliquot was lysed to extract whole viral proteins and AAT antibody was then added to precipitate the proteins interacting with AAT/ΔAAT. AAT, gp120 or gp41 in the precipitated proteins were detected by immunoblotting (b). The other aliquot was incubated with gp120 antibody to precipitate proteins interacting with viral particles. Subsequently, precipitated viruses were lysed to detect AAT, gp120 and gp41 by Immunoblotting (c). Moreover, AAT/C/ΔAAT was also cultured with gp120 or gp41. Next, gp120 or gp41 antibody was added to precipitate. The direct interaction between AAT/C/ΔAAT and gp120/gp41 was determined by separating precipitated proteins with SDS-PAGE and proteins were visualized by Coomassie blue staining (d). MW: molecular weight marker; IP: immunoprecipitation; WB: western blotting (immunoblotting) uninfected cells and the mechanism is not fully elucidated yet. Usually, HIV-1 entry into host cells involves the interaction between viral gp120/gp41 and host CD4/ CXCR4 or CCR5 [33]. Some researchers, however, suggest that AAT inhibition of HV-1 entry is related to the interaction between AAT and host cell membrane proteins [34]. In the present study, our results did not provide any indication to suggest that AAT interacts with host membrane proteins, thereby inhibiting HIV-1 entry. AAT also did not alter the expression of CD4, CXCR4 and CCR5. Moreover, AAT also did not directly interact with CD4, CCR5, CXCR4 and other HIV-1 infectionrelated proteins. In contrast, AAT directly interacted with viral gp41. These observations together suggest that intact AAT could interact with viral gp41 to interfere with the interaction between gp120/gp41 and CD4/ CCR5 or/CXCR4 and thereby inhibiting HIV-1 entry, which is consistent with previous studies [35]. Moreover, AAT without the C-terminus could not interact with gp41, which indicates that the C-terminus of AAT is an essential functional domain. Munch et al.'s study also reveals that VIRIP at the C-terminus of AAT inhibits HIV-1 infection [12]. When ΔAAT, AAT or synthesized Cterminus fragment of AAT was cultured with gp41 or gp120, we detected direct interaction between AAT/synthesized C-terminus fragment of AAT, but not ΔAAT, and gp41. Thus, collectively with the findings of Munch's [13], it is clear that the inhibitory effect of AAT on HIV-1 infection is mediated through the direct interaction between the C-terminus of AAT and gp41, which then interferes with the entry of HIV-1 into the host cells. Conclusion Studies have showed that AAT inhibits HIV-1 replication [15,36]. Our previous study reveals that AAT enters the cytosol of infected CD4+ T cells and directly interacts with cytosolic IkBα to alter its ubiquitinylation pattern. The change of IkBα ubiquitinylation pattern Fig. 7 SPR assay of AAT/C and gp41 interaction. gp120 (6.25-200 μg/mL) and gp41 (18.75-600 μg/mL) were analyzed for binding to immobilized AAT, C or ΔAAT. Representative Sensorgrams were displayed and the amount of bound gp120 or gp41 was shown over time. Binding curves revealing saturable kinetics for the interaction were also plotted results in the inhibition of NF-kB activation [15,24]. In infected cells, NF-kB activation is critical for HIV-1 replication [37]. 
In the present study, our results reveal that AAT inhibits HIV-1 infection, but not by directly targeting viral RNA reverse transcription or viral DNA integration. Instead, the inhibitory effect is mediated through a direct interaction between the C-terminus of AAT and gp41, which blocks HIV-1 entry into the host cells. Taken together with the earlier replication data, these results indicate that AAT acts through multiple mechanisms to suppress HIV/AIDS pathogenesis, which may provide useful information for drug development in HIV/AIDS treatment.
2023-01-15T14:45:44.759Z
2016-07-29T00:00:00.000
{ "year": 2016, "sha1": "04a087a63fd73788fe5796e5af4b39f855592577", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12866-016-0751-2", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "04a087a63fd73788fe5796e5af4b39f855592577", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
117851752
pes2o/s2orc
v3-fos-license
Complete Two-Loop Corrections to H ->gamma gamma In this paper the complete two-loop corrections to the Higgs-boson decay, H ->gamma gamma, are presented. The evaluations of both QCD and electroweak corrections are based on a numerical approach. The results cover all kinematical regions, including the WW normal-threshold, by introducing complex masses in the relevant (gauge-invariant) parts of the LO and NLO amplitudes. Introduction In the intermediate mass range of the Higgs-boson, its decay into photons is of great phenomenological interest. At hadron colliders the decay H → γγ provides precious informations for the discovery in the gluongluon production channel [1]. An upgrade option at the ILC will allow for a high precision measurement of the partial width into two photons [2] with a quantitative test for the existence of new charged particles. The QCD corrections to H → γγ have been computed in the past and analytic results at next-to-leading order are available in Ref. [3] and in Ref. [4] (see also Ref. [5]). Electroweak two-loop corrections have been computed by suitable expansions of the two-loop Feynman diagrams [6]. Master integrals for the two-loop light fermion contributions have been analyzed in Ref. [7] and two-loop light fermion contributions to Higgs production and decays in Ref. [8]. In our approach we have generated the full amplitude (up to two-loops and including QCD) in a completely independent way and we have used the techniques of Ref. [9] to produce a numerical evaluation of the partial width Γ(H → γγ). Since we are not bound to rely on expansion techniques, not even in the bosonic sector and in the top-bottom one, we can produce results with very high accuracy for any value of the Higgs-boson mass, taking into account the complete mass dependence of the W -boson, Z -boson, Higgs-boson and topquark. A consistent and gauge-invariant treatment of unstable particles made it also possible to produce very accurate results around the W W -threshold. Method of calculation and technical issues Our calculation builds upon the numerical approach of Ref. [9] where two-loop, two and three point functions have been investigated in the most general case. In this project we have developed a set of routines which go from standard A 0 , . . . , D 0 functions up to diagrams needed for a two-loop 1 → 2 process. This new ensemble of programs will succeed to the corresponding Library of TOPAZ0 [10]. The whole collection of codes also uses the NAG-library [11]. The generation as well as the manipulation of Feynman diagrams has been performed with the use of the FORM [12] code GraphShot [13]. Diagrams are generated, simplified and a FORTRAN interface is created. Furthermore, the code checks for the validity of the relevant Ward identities. Renormalization is performed according to the scheme developed in Ref. [14]. In this paper, we shall follow the same notations and conventions for two-loop diagrams as defined in Ref. [15]. In the following we will give a short outline of the techniques used for the calculation. Before evaluating the two-loop integrals arising after generating the Feynman diagrams, two main simplifications are done recursively. At first, reducible scalar products are removed and secondly, the symmetries of the diagrams are taken into account. The integrals are then assigned to scalar-, vector-and tensor-type integrals, according to the number of irreducible scalar products in the numerator and form-factors are introduced. 
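As a schematic illustration of the first reduction step (the removal of reducible scalar products, whose mechanism is spelled out at the start of the next paragraph), a scalar product between the loop momentum q and an external momentum p can be rewritten in terms of inverse propagators; the propagator conventions below are generic and intended only as a sketch:

$$2\,q\cdot p \;=\; \bigl[(q+p)^{2}+m_{2}^{2}\bigr]-\bigl[q^{2}+m_{1}^{2}\bigr]-p^{2}+m_{1}^{2}-m_{2}^{2}\,,$$

so each term in the numerator either cancels one of the denominators, removing a line and producing a daughter diagram with one line less, or simply multiplies the original integral by a constant.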
The cancellation of scalar products is performed by expressing the scalar products in the numerator in terms of their associated propagators. This procedure can lead to removing lines in a diagram, so that each diagram produces a set of daughter-families with at least one line less. Apart from the reduction of scalar products, the consideration of the symmetries of a given diagram is important in order to reduce the number of integrals, which will be evaluated numerically at the end of the calculation. A simple example, showing the exploitation of the symmetries, is given in Fig. 1 for a scalar diagram. We now discuss briefly the extraction of collinear logarithms from Feynman diagrams. It is worth noting that the amplitude for H → γγ is collinear-free and one could adopt the approach where all light fermions are massless, then collinear behavior of single components is controlled in dimensional regularization and collinear poles cancel in the total. We prefer another approach where collinear singularities are controlled by light fermion masses. Although the total amplitude is collinear-free, our procedure of reduction ⊗ symmetrization introduces a sum of several terms, of which some are divergent. Of course, we check that all logarithms of collinear origin cancel and, as a matter of fact, they cancel family by family of diagrams. To be precise, we need some universal representation for the coefficient of the collinear logarithms, which allows us to show their analytical cancellation, and a method to compute the remaining collinear-free parts. The first task is achieved by introducing integrals of one-loop functions and using their well-known properties Figure 1: Symmetries of the V E -family: The first diagram represents the V E -family (a). Its integral remains unchanged by exchanging m1 ↔ m2 (b) as well as if one interchanges m3 ↔ m4 and p2 ↔ −P simultaneously (c). The last diagram (d) is a combination of the first (b) and the second (c) symmetry. One can also perform a total reflection of all external momenta, which is not shown in the figure and leaves the integral also unchanged. to make the cancellation explicit. Using the techniques of Ref. [9] the collinear finite contribution is first written in terms of smooth integrands and then evaluated numerically; an example is shown in Fig. 2. Conceptual issues We will now apply our formalism to the computation of the amplitude for H(−P ) + γ(p 1 ) + γ(p 2 ) → 0, (P = p 1 + p 2 ) which will be written as The form factor F ǫ is absent at O g 3 and only arises at O g 5 but for a decay width with accuracy O g 8 (which includes one-loop ⊗ two-loop) its contribution is again zero. Bose symmetry and Ward identities (doubly-contracted, simply-contracted but with physical sources, simply-contracted with off-shell photons and unphysical sources) allow us to write the amplitude as where the form factors are expanded up to two-loops, To proceed we need to include the relations between renormalized masses (small letters) and experimental, on-shell, ones (capital letters). Finite renormalization is then completed by introducing external wavefunction factors (Z −1/2 H Z −1 A ) and the renormalization of the coupling constants. All needed relations are collected in Eq.(4). where Σ (1) HH and Σ (1) t are respectively the Higgs, W and top quark one-loop self-energies as defined in section 5.3 of the second paper of Ref. 
[14]; furthermore, The symbols M t , M W , M Z , G F and α denote the mass of the top-quark, the W -boson, the Z -boson as well as the Fermi-coupling constant and the fine structure constant. Collecting all the ingredients we get the corresponding S-matrix completely written in terms of experimental data The subscript "ex" indicates that all masses are the experimental ones and the mass-shell limit (s → M 2 H ) is taken only after the inclusion of finite renormalization. QCD corrections will appear in Eq.(4) and Eq.(6) multiplied by πα . In our calculation we prove the cancellation of the collinear logarithms and then set the light fermion masses to zero; therefore, due to Yukawa couplings, an imaginary part in A (1) arises only if M H > 2 M W . For two-loop terms imaginary parts are always present even for massless fermions. From Eq.(6) the total amplitude for H → γγ can be written symbolically as A µν phys = A µν 1 ⊗ (1 + FR)+A µν 2 . Finite renormalization (FR) amounts to expressing renormalized parameters in the one-loop amplitude in terms of data and in the insertion of the Higgs wave-function factor Z Há la LSZ; both requires the notion of on-shell mass. There are two sources of inconsistency in this approach: the Higgs-boson is an unstable particle and this fact has a consequence which shows up at two-loops. When we compute the doubly-contracted Ward identity for the full two-loop amplitude we obtain The analytical form of W is known and the non-zero result comes from the fact that the pure two-loop contribution to the Ward identity gives W while finite renormalization gives the real part Re W . Therefore, the Ward identity is violated above the W W -threshold. On top of this problem we find a second unphysical feature: let us analyze how the amplitude for H → γγ behaves around a normal-threshold, i.e. for M H = 2 M W , 2 M Z , 2 M t . In particular we are interested in the question of possible square-root or logarithmic singularities. Even if present they are unphysical, although integrable. Both problems can be solved by using complex masses as discussed in subsection 3.3. Square-root singularities It is very simple to prove that derivatives (represented by a dot in Eq. (8)) of one-loop, two-point functions with equal masses (m) develop a square root singularity: The same argument can be repeated for all the one-and two-loop diagrams with any number of external legs where we can cut two and only two m-lines; normal-threshold will be a sub-(sub-. . . )leading singularity, but a 1/β-behavior shows up only if the reduced sub-graph responsible for the singularity can be reduced to ȧ B 0 -function. Therefore the only two-loop vertex giving rise to a 1/β-divergent behavior is the one depicted in Fig. 3. For this diagram it is possible to find a representation where the singular part is completely written in terms of one-loop diagrams, as shown in the figure. The remainder can be cast in a form suited for numerical integration. In the decay H → γγ, the 1/β-singularity (β 2 = 1−4 M 2 W /M 2 H ) arises from the two-loop diagram of Fig. 3, from Higgs-boson wave-function factor (derivative of a B-function) and from finite W -mass renormalization (derivative of a C-function). Our conclusion is that the unphysical 1/β-behavior around some normalthreshold is induced by self-energy like insertion, a fact that is not surprising at all; those insertions, signaling the presence of an unstable particle, should not be there and complex poles should be used instead. 
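The display equation behind the square-root singularity of the derivative of the two-point function discussed above did not survive the extraction; in a standard (assumed) normalization of the finite part of B_0, the 1/β behavior can be made explicit as follows, with β² = 1 − 4m²/s:

$$B_{0}(s;m,m)\;=\;\Delta_{\varepsilon}-\ln\frac{m^{2}}{\mu^{2}}+2+\beta\,\ln\frac{\beta-1}{\beta+1}\,,$$
$$\dot B_{0}\;\equiv\;\frac{\partial B_{0}}{\partial s}\;=\;\frac{2m^{2}}{s^{2}\beta}\left[\ln\frac{\beta-1}{\beta+1}+\frac{2\beta}{\beta^{2}-1}\right]\;\xrightarrow{\;\beta\to 0\;}\;\frac{i\pi}{8m^{2}}\,\frac{1}{\beta}+\mathcal{O}(1)\,,$$

where the branch is fixed by s → s + i0 and 2m²/s² = 1/(8m²) at s = 4m². The paper's own Eq. (8) may differ by convention-dependent constants, but the leading 1/β threshold behavior is the same.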
Logarithmic singularities Let us consider the two-loop diagram of Fig. 4 with P 2 = −s (s > 0). Writing the corresponding integral in parametric space we introduce the quadratic forms and obtain Since we are interested in the behavior around β → 0, we split V K into a singular and regular part and find Figure 4: The irreducible, scalar, two-loop vertex diagram V K with logarithmic divergency. Solid lines represent a massive particle with mass m, whereas wavy lines correspond to massless particles. The singular part V K sing will be written as [16] V where we have introduced the shorthands a = τ y, with τ = (1 − t) y + t and T = t/(1 − t) > 0. I(t) can be split into two parts, with X 1 = −X, X 2 = 1 − X and B(x,y) is the Euler beta-function. While the second term of Eq. (14) is regular for β = 0, the singularity of the first term follows from the fact that λ ∼ (1 − y) 2 for β → 0; however, we have a singular behavior only if 0 ≤ X ≤ 1 which requires y ≥ y min = max{0 , (t − 1/2)/(t − 1)}. Since we are interested in the leading behavior for β → 0, we can extend the integration domain in the first term to [0, 1], without modifying the divergent behavior of the diagram. The singular part is then given by 0 < Res < 1/2. Here F 1 denotes the first Appell-function. To obtain the expansion corresponding to β → 0 we close the integration contour over the right-hand complex half-plane at infinity. The leading (double) pole is at s = 1/2. Therefore, we obtain Inserting it into Eq.(12) and using 1 If the massive loop in Fig. 4 is made of top quarks the contribution of V K to the amplitude behaves like β 2 V K and, therefore, the logarithmic singularity is β 2 -protected at threshold; the same is not true for a W -loop. Our result, Eq. (17), is confirmed by the evaluation of V K of Ref. [17] in terms of generalized log -sine functions. Starting from Eq.(6.34) of Ref. [17] and using the results of Ref. [18] we expand around θ = π, where x = e i θ = (β − 1) (β + 1), with 0 < θ < π. This gives for the leading behavior of V K below threshold (π 2 /2) ln(θ − π), where ln(−β 2 ) = ln(θ − π) 2 − ln 2. The same behavior can also be extracted from the results of Ref. [4]. Complex masses Our pragmatical solution to the problems induced by unstable particles has been to remove the Re label in those terms that, coming from finite renormalization, give Re W in the Ward identity of Eq.(7). Furthermore, we decompose Eq.(6) according to: reg . (18) and prove that, as expected, A L and A reg satisfy (separately) the Ward identity. The latter fact allows us to -minimally -modify A (2) R,L by working in the complex-mass scheme of Ref. [19], i.e. we include complex masses in the, gauge-invariant, leading part of the two-loop amplitude as well as in the one-loop part. The decomposition of Eq.(18) deserves a further comment. There are three sources of 1/β -terms: a) pure two-loop diagrams of the V M -family, i.e. bubble insertions on the internal lines of the one-loop triangle; b) W -mass renormalization, i.e. on-shell W -self-energy × the mass squared derivative of the one-loop Wtriangle (the latter giving rise to 1/β); c) Higgs wave-function renormalization × lowest order (the former giving rise to 1/β). One can easily prove that only c) survives and a,b) that are separately singular add up to a finite contribution (β → 0); their divergency is an artifact of expanding Dyson resummed propagators. 
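For background on the complex-mass replacement invoked here (the scheme of Ref. [19]), one common convention trades the on-shell W mass and width for a gauge-invariant complex pole; this is quoted as an assumption about the convention, not as a verbatim reproduction of the definition used in the paper:

$$s_{W}\;=\;\mu_{W}^{2}-i\,\mu_{W}\,\gamma_{W},\qquad \mu_{W}=\frac{M_{W}}{\sqrt{1+\Gamma_{W}^{2}/M_{W}^{2}}},\qquad \gamma_{W}=\frac{\Gamma_{W}}{\sqrt{1+\Gamma_{W}^{2}/M_{W}^{2}}}\,,$$

with M_W and Γ_W the on-shell mass and width. The substitution M_W² → s_W is then performed only in the gauge-invariant parts selected by the decomposition of Eq. (18) described above.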
The ln β term originates from pure two-loop diagrams (the V_K family) and is a remnant of the Coulomb singularity of the one-loop sub-diagrams. Numerical results The partial width of the Higgs-boson decay into two photons can be written in terms of the amplitude introduced above. The relative correction δ induced by two-loop (NLO) effects is given by Γ = Γ_0 (1 + δ), where Γ_0 is the lowest-order result. It can be split into electroweak and QCD contributions, δ = δ_EW + δ_QCD. For the numerical evaluation we use the input parameters introduced above (M_t, M_W, M_Z, G_F and α); all light fermion masses are set to zero and the W-boson complex pole is defined as in Ref. [20]. The one-loop H → γγ amplitude, with a complex W-mass, is shown in Fig. 5 around the WW-threshold, including a comparison with the real W-mass amplitude. A comparison of the percentage electroweak corrections, with and without complex W-masses, is shown in Fig. 6 for a Higgs-mass range below the WW-threshold, showing the unphysical growth of the real case and also a sizable difference in a region of about two GeV below the threshold. We have also analyzed the effect of (artificially) varying the imaginary part of the W-boson complex mass; results are given in Fig. 7, showing that our complex result reproduces the real one in the limit Γ_W → 0. Fig. 7 clearly demonstrates the large but artificial effects arising at normal-thresholds of unstable particles when their masses are kept real. Finally, in Fig. 8 we show both the QCD and the electroweak percentage corrections to the decay width Γ(H → γγ), including the region around the WW-threshold. A running α_s has been used for the computation of the QCD corrections. The remaining cusp of δ_EW at the WW-threshold, whose details are shown in the blow-up of Fig. 8, is due to our minimal scheme, in which the W-mass is kept real in A^(2)_reg, the regular part of the amplitude (see Eq. (18)). The relatively small error bars in a region so close to threshold serve as evidence for the efficiency of our numerical algorithms. Our result for δ_EW in the region 100 GeV < M_H < 150 GeV is in substantial agreement with those of Ref. [6]. In conclusion, we observe a cancellation of the two corrections below the threshold whereas, above it, both δ_QCD and δ_EW are positive, leading to a sizable (up to 4.5%) total correction to the decay width. [Figure caption fragment, beginning lost: ... containing the 1/β terms of the two-loop amplitude (left) and in the real part of V_K of Fig. 4 containing the ln β terms (right) is shown; the effect of using a real W-boson mass while removing the Re labels in finite renormalization is also shown.] The perturbative expansion for the decay rate, supplemented with the complex-mass scheme, gives reliable and accurate predictions in a wide range of values for the Higgs-boson mass, typically −1% < δ_tot < 4% in the range 100 GeV < M_H < 170 GeV. Conclusions In this paper we provide a stand-alone numerical calculation of the full two-loop corrections to the decay width Γ(H → γγ). Since no expansion is involved in the calculation, we can produce results for all values of the Higgs-boson mass, as shown in Fig. 8, including the WW-threshold. The techniques introduced in this context are general enough to be used for all kinematical configurations of 1 → 2 processes at the two-loop level.
To deal with normal-threshold singularities, specifically the WW-threshold, we have introduced complex W-masses in a gauge-invariant manner; our minimal scheme selects gauge-invariant components, typically the LO (one-loop) amplitude and the divergent parts of the NLO (two-loop) amplitude, and performs the replacement of Eq. (20). Details of our approach will be described in a forthcoming publication. The main result of this paper can be summarized as follows: the NLO percentage corrections to the decay width Γ(H → γγ), δ_QCD and δ_EW, compensate each other below threshold, leading to a small total correction; above the WW-threshold, however, they are both positive, leading to a sizable overall effect of ≈ 4%.
2007-07-10T10:32:22.000Z
2007-07-10T00:00:00.000
{ "year": 2007, "sha1": "60ac2e6d7b10b9c8818f843c513413d5862f78a0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0707.1401", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "60ac2e6d7b10b9c8818f843c513413d5862f78a0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
54085153
pes2o/s2orc
v3-fos-license
CARBOXYLATION OF SILVER NANOPARTICLES FOR THE IMMOBILIZATION OF β-GALACTOSIDASE AND ITS EFFICACY IN GALACTO-OLIGOSACCHARIDES PRODUCTION The present study investigated the carboxylation of silver nanoparticles (AgNPs) with a 1:3 nitric acid-sulfuric acid mixture for immobilizing Aspergillus oryzae β-galactosidase. Carboxylated AgNPs retained 93% of the enzyme upon immobilization, and the enzyme did not leach out appreciably from the modified nanosupport in the presence of 100 mmol L-1 NaCl. Atomic force micrographs revealed the binding of β-galactosidase on the modified AgNPs. The optimal pH for both soluble and carboxylated-AgNP-adsorbed β-galactosidase (IβG) was observed at pH 4.5, while the optimal operating temperature was broadened from 50 °C to 60 °C for IβG. The Michaelis constant, Km, increased two and a half fold for IβG, while Vmax decreased slightly as compared to the soluble enzyme. β-galactosidase immobilized on the surface-functionalized AgNPs retained 70% of its biocatalytic activity even at 4% galactose concentration, as compared to the enzyme in solution. Our study showed that IβG produces a greater amount of galacto-oligosaccharides at higher temperatures (50 °C and 60 °C) from 0.1 mol L-1 lactose solution at pH 4.5 as compared to previous reports. INTRODUCTION β-galactosidase (E.C. 3.2.1.23) is an important enzyme that has attracted the attention of enzymologists due to its dual nature: it produces lactose-free dairy products by catalyzing the hydrolysis of lactose into glucose and galactose, and it produces galacto-oligosaccharides by favoring the transgalactosylation reaction when lactose acts as an acceptor. 1,2 An excellent review has appeared lately in which β-galactosidases from various sources, including psychrophilic, mesophilic and thermophilic organisms, were utilized by previous researchers for obtaining galacto-oligosaccharides and lactose-free dairy products. 3 Galacto-oligosaccharides (GOS) are non-digestible food ingredients that are obtained from lactose as a result of the transgalactosylation reaction catalyzed by β-galactosidase. 4-6 Moreover, with the emergence and increase of microbial contamination in immobilized systems and the continuous emphasis on health care costs, many researchers have tried to develop new and effective nanoparticle-based β-galactosidase immobilization systems that resist microbial contamination and show reduced product inhibition, which could facilitate the continuous and long-term processing of the biocatalyst and ultimately reduce its cost in the biotechnology industries. 7,8 Numerous carriers and technologies have been implemented by researchers for improving the immobilization of enzymes in order to enhance their activity and stability and to decrease the enzyme biocatalyst cost in industrial biotechnology. 9,10 These include crosslinked enzyme aggregates, microwave-assisted immobilization, click chemistry technology, recombinant enzymes and nanoparticle-based immobilization of enzymes. 11,12 The last decade witnessed the importance of AgNPs in the fields of chemistry, physics and biology due to their unique optical, electrical and photothermal properties. 13 They are used widely for imparting stability to various bioactive substances including peptides, enzymes, antibodies and DNA due to their greater porosity and interconnectivity for enzyme immobilization. 14 Fabrication technology has further solved the problems related to the toxicity of nanoparticles, and provides them with a shield against harsh environmental conditions like pH variation, temperature alteration and shaking conditions. 15,16
Previously, AgNPs have been modified with formamide, 17 phosphoryl disulfides, 18 titanium implant surfaces, 19 glutaraldehyde, 20 carbon nanotube/polyaniline films, 21 carbonate, 22 cysteamine, 23 and polyaniline, 24 to name a few, to increase the catalytic efficiency of enzymes for various biomedical and biotechnological applications. GOS production has been obtained in the recent past by immobilizing β-galactosidase from various sources including Bacillus circulans, 2,25 Bifidobacterium longum 26 and Kluyveromyces lactis. 27 However, these systems suffered from drawbacks either in terms of enzyme stability/reusability or in exhibiting relatively less sensitivity toward glucose and galactose. In some cases, the enzymes were unable to convert free galactose for GOS formation. Furthermore, the rate of the reaction was reduced when galactose was added to the lactose solution, and hence the maximum GOS concentration obtained as a result of partial hydrolysis of newly formed oligosaccharides was not observed with the immobilized enzyme. These studies encouraged us to exploit modified AgNPs as an immobilization matrix which can allow multiple and continuous use of the enzyme along with minimum reaction time, high stability, improved process control and easy product separation, apart from being less labor intensive and more cost effective for the biotechnology industries. Therefore, Aspergillus oryzae β-galactosidase was immobilized on the modified AgNPs. The effects of various physical and chemical denaturants, product inhibition by galactose, kinetic parameters and reusability were investigated for carboxylated-AgNP-adsorbed β-galactosidase (IβG). The potential biotechnological application of IβG has been shown by galacto-oligosaccharides production at 50 °C and 60 °C. EXPERIMENTAL Materials Coomassie Brilliant Blue G-250, buffers of different pH values and commercial grade Aspergillus oryzae β-galactosidase (activity: 1200 U/g) were obtained from Sigma Chem. Co. (St. Louis, MO, USA). Nitric acid, sulfuric acid and o-nitrophenyl β-D-galactopyranoside (ONPG) were obtained from Merck. All reagents were prepared in double distilled water with chemicals of analytical grade. Carboxylation of silver nanoparticles Silver nanoparticles (AgNPs) were prepared as described in our previous study. 20 Briefly, a 1.0 mmol L-1 silver nitrate solution was magnetically stirred in an ice bath for 15 min before adding sodium borohydride (2.0 mmol L-1) to it. The transformation of color from transparent to golden yellow indicated the formation of AgNPs. The obtained powder was analyzed by XRD and TEM as discussed in our earlier studies, and the particles were found to be about 26 nm in size. The washed AgNPs (1.0 g) were carboxylated by incubating them in 10 mL of a 1:3 HNO3/H2SO4 (v/v) mixture in a shaker at 30 °C and 150 rpm for 8 h. The resulting carboxylated AgNPs (cAgNPs) were washed continuously with distilled water and then dried overnight in an oven at 100 °C. β-galactosidase immobilization and leaching of enzyme β-galactosidase (2400 U, equivalent to 2 mg) prepared in assay buffer (0.1 mol L-1 sodium acetate buffer, pH 4.5) was suspended with cAgNPs (1 g) overnight at 32 °C with slow stirring. The unbound enzyme was removed by washing thrice with assay buffer. In another experiment, the immobilized enzyme preparation was suspended in 100 mmol L-1 NaCl in a shaker at 50 °C and collected by centrifugation at 2000 rpm at intervals of 30 minutes. The activity of the enzyme and of the supernatant was checked according to the procedure discussed below.
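The enzyme retention quoted in the abstract and in Table 1 (93%) is the kind of figure obtained from a simple activity balance between the units offered to the support and the units recovered in the washings. The Python sketch below shows that bookkeeping; the unbound-activity value is a placeholder chosen only so that the example reproduces a 93% yield, not a measured value from this work.

def immobilization_yield(loaded_units, unbound_units):
    """Percent of the offered activity retained on the carboxylated AgNPs."""
    return 100.0 * (loaded_units - unbound_units) / loaded_units

if __name__ == "__main__":
    loaded = 2400.0    # U of beta-galactosidase offered to 1 g cAgNPs (protocol above)
    unbound = 168.0    # hypothetical U recovered in the washings (placeholder)
    print(f"immobilization yield = {immobilization_yield(loaded, unbound):.1f} %")  # 93.0 %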
Enzyme assay The hydrolytic activity of β-galactosidase was determined by continuously shaking an assay volume of 2.0 mL containing 1.79 mL of 100 mmol L-1 sodium acetate buffer (pH 4.5), 100 µL of suitably diluted β-galactosidase and 0.2 mL of 2.0 mmol L-1 ONPG for 15 min at 40 °C. The reaction was stopped by adding 2.0 mL of 1.0 mol L-1 sodium carbonate solution and the product formed was measured spectrophotometrically at 405 nm. 20 Atomic force microscopy and determination of kinetic parameters Tapping-mode AFM experiments on cAgNPs-adsorbed β-galactosidase were performed using commercial etched silicon tips as AFM probes, with a typical resonance frequency of ca. 300 Hz (RTESP, Veeco, Japan), by exposing the nanomatrix to the same protein-free buffer as the enzyme-contacted surfaces. In another experiment, the kinetic parameters of the soluble and immobilized enzyme were determined from Lineweaver-Burk plots by measuring their initial rates at varying concentrations of ONPG in 100 mmol L-1 sodium acetate buffer at pH 4.5, 40 °C. Physico-chemical characterization and reusability study of cAgNPs attached β-galactosidase The enzyme activity of soluble and immobilized β-galactosidase (20 µL) was assayed in buffers of different pH (pH 3.0-8.0). The buffers used were glycine-HCl (pH 3.0), sodium acetate (pH 4.0-6.0) and Tris-HCl (pH 7.0, 8.0). The molarity of each buffer was 0.1 mol L-1. The activity expressed at pH 4.5 was considered as control (100%) for the calculation of remaining percent activity. In another experiment, the effect of temperature on soluble and immobilized β-galactosidase (20 µL) was studied by measuring their activity at various temperatures (30-70 °C). The enzyme was incubated at various temperatures in 0.1 mol L-1 sodium acetate buffer, pH 4.5, for 15 min and the reaction was stopped by adding 2.0 mL of 2.0 mol L-1 sodium carbonate solution. The activity obtained at 50 °C was considered as control (100%) for the calculation of remaining percent activity. IβG (20 µL) was taken in triplicate for assaying the activity of the enzyme. After each assay, the immobilized enzyme was taken out of the assay tubes, washed and stored in 0.1 mol L-1 sodium acetate buffer, pH 4.5, overnight at 4 °C for 6 successive days. The activity determined on the first day was considered as control (100%) for the calculation of remaining percent activity. Effect of product inhibition and production of galactooligosaccharides The activity of free and immobilized β-galactosidase (20 µL) was determined in the presence of increasing concentrations of galactose (1.0-5.0%, w/v) in 0.1 mol L-1 sodium acetate buffer, pH 4.5, at 40 °C for 1 h. The activity of the enzyme without added galactose was considered as control (100%) for the calculation of remaining percent activity. The formation of oligosaccharides was analyzed by high performance liquid chromatography (Shimadzu, Japan) consisting of an LC-10AT pump, an SPD-10AVP PDA detector, a Phenomenex C18 (250 mm × 4.6 mm, 5 µm) column, a Phenomenex HPLC-grade cartridge system and Class Nuchrom software. The pH of the mobile phase was checked on a microprocessor-based waterproof pH tester, while the overall illumination at the point of sample placement was tested with a calibrated lux meter. EDTA calcium disodium (60 mg/L) was dissolved in Milli-Q water and used as the mobile phase at a flow rate of 0.6 mL min-1. The temperatures of the column oven and the detector were maintained at 75 °C and 35 °C, respectively.
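Relating to the kinetic analysis described above, Km and Vmax are read off a Lineweaver-Burk (double-reciprocal) plot by linear regression of 1/v against 1/[S]: the intercept gives 1/Vmax and the slope gives Km/Vmax. The Python sketch below illustrates the calculation with made-up initial-rate data rather than measurements from this study; in practice a direct nonlinear Michaelis-Menten fit is often preferred, since the reciprocal transform inflates the error of the low-substrate points.

import numpy as np

# Placeholder data: ONPG concentrations (mmol/L) and initial rates (arbitrary units).
s = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
v = np.array([0.9, 1.5, 2.3, 3.1, 3.7])

# Lineweaver-Burk: 1/v = (Km/Vmax)*(1/s) + 1/Vmax
slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
vmax = 1.0 / intercept
km = slope * vmax
print(f"Vmax = {vmax:.2f} (a.u.), Km = {km:.2f} mmol/L")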
Estimation of protein Protein concentration was determined using bovine serum albumin as a standard. 28 Statistical analysis Each value represents the mean of three independent experiments performed in triplicate, with average standard deviations <5%. The data from the various studies were plotted using SigmaPlot 9. Data were analyzed by one-way ANOVA, and p-values <0.05 were considered statistically significant. RESULTS AND DISCUSSION The present study demonstrates the successful immobilization of Aspergillus oryzae β-galactosidase on a highly efficient and selectively modified nanosupport, carboxylated AgNPs. The resulting carboxylated AgNPs (cAgNPs) retained 93% of the enzyme on the modified nanosupport (Table 1). This excellent yield may be attributed to the large number of functional groups introduced on the support surface by the carboxylation step, which ultimately provides a large surface area for enzyme immobilization. Figure 1 shows a schematic representation of the stepwise functionalization of AgNPs by the acid mixture and the subsequent enzyme immobilization. Moreover, the integrity of the enzyme on the modified AgNPs was illustrated by atomic force microscopy (Figure 2). The AFM image indicated covalent attachment of β-galactosidase on the carboxylated AgNPs and showed their efficiency as an immobilization matrix. The modified AgNPs make excellent supports for enzyme immobilization owing to their small size and large surface area. These particles influence mechanical properties such as stiffness and elasticity and reduce diffusion limitations, maximizing the functional surface area available for uniform enzyme immobilization. Finally, enzymatic activity measurements for the immobilized enzyme confirmed the suitability of the optimized protocol, as demonstrated by the retention of a large amount of enzyme on the designed nanomatrix (Table 1). Table 2 shows that, as a result of immobilization, the Km increased to 6.24 mmol L-1 compared with 2.46 mmol L-1 for soluble β-galactosidase, whereas Vmax did not change significantly. This means that the affinity of the immobilized enzyme for its substrate, and hence the velocity of the enzymatic reaction, decreased, owing to the lower accessibility of the substrate to the active site and the slower transport of substrate and products into and out of the modified nanomatrix (Figure 3). These observations are in agreement with Aspergillus oryzae β-galactosidase immobilized on magnetic polysiloxane-polyvinyl alcohol 7 and Kluyveromyces lactis β-galactosidase attached to glutaraldehyde-modified multiwalled carbon nanotubes. 29 In order to investigate the possibility of enzyme leaching from the developed nanosystem, we evaluated the activity of IβG in the presence of 100 mmol L-1 NaCl at 50 °C (Figure 4). Only 8% of the enzyme leached out, even after 2.5 hours of incubation with the leachant. This might be due to the presence of small quantities of physically adsorbed enzyme on the developed nanosystem. Needless to say, covalent attachment resulted in strong bond formation between β-galactosidase and cAgNPs, and hence leaching occurred to a negligible extent even at high temperature. The stability of an enzyme might increase or decrease as a result of immobilization, depending on the properties of the matrix used. Moreover, the catalytic activity of an enzyme depends on the conformational structure of the protein; even minor alterations in its tertiary structure can result in loss of catalytic activity. 9
Hence, we studied the effect of various denaturants on the activity of the immobilized enzyme. Figure 5 depicts the pH-activity profiles for soluble and cAgNPs-bound β-galactosidase. The optimal pH for both soluble and immobilized β-galactosidase was observed at pH 4.5; however, greater fractions of catalytic activity were retained by IβG at both lower and higher pH values. It should be noted that the optimal operating temperature was broadened from 50 °C to 60 °C for the immobilized enzyme (Figure 6). The probable reason for this may be that covalent binding and crosslinking provided a more rigid external backbone for β-galactosidase. Similar results have been achieved earlier for Aspergillus oryzae β-galactosidase immobilized on concanavalin A-cellulose. 30 The greater GOS production achieved by IβG at the higher temperature might be due to its increased stability, reflected in the broadening of its temperature optimum. In previous studies, the maximum GOS production achieved was 30%, using 241 U of immobilized β-galactosidase at pH 6.0 and 40 °C. 2 In another study, the maximum GOS formation achieved by Aspergillus oryzae β-galactosidase immobilized on magnetic polysiloxane-polyvinyl alcohol was 26% w/v of total sugars at 55% lactose conversion, at pH 4.5 and 40 °C. 7 However, a decrease in GOS production was observed beyond a certain maximum value as a result of galactose-mediated product inhibition. The immobilized enzyme was affected less by this inhibition than soluble β-galactosidase because immobilization prevented the dissociation of the enzyme from the cAgNPs in the presence of physical and chemical denaturants such as galactose and high temperature. The feasibility of regenerating cAgNPs-attached β-galactosidase and consequently reusing the support is shown in Figure 8. IβG retained 85% activity even after its sixth repeated use and can therefore provide economic benefits for industrial application. Galactose acts as a strong competitive inhibitor of Aspergillus oryzae β-galactosidase-catalyzed reactions and can thus degrade the process in terms of the function and quality of the products obtained. 7 Hence, it is difficult to achieve a complete reaction in the presence of such product inhibitors, which slow or even stop the reaction completely. Our results suggest that cAgNPs-bound β-galactosidase showed promising resistance to galactose-mediated inhibition compared with its soluble counterpart, even at higher galactose concentrations. Specifically, the soluble enzyme exhibited 34% activity compared with the 70% activity retained by IβG at 4% galactose concentration (Figure 9). Moreover, the Ki app value of immobilized β-galactosidase was 326 × 10-6 mol L-1, whereas the soluble enzyme exhibited a lower Ki app value of 163 × 10-6 mol L-1 at 3% galactose concentration (Table 3). Thus, it can be concluded that immobilization of β-galactosidase on cAgNPs is a versatile approach that improved the catalytic properties and stability of the enzyme for the production of GOS.
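For readers who want to relate the reported Ki app values to the underlying kinetics, the classical competitive-inhibition relationship Km,app = Km(1 + [I]/Ki) can be rearranged to estimate an apparent inhibition constant from the shift in Km measured at a known galactose concentration. The sketch below only illustrates that rearrangement with placeholder numbers; it is not a reproduction of the values in Table 3, which were obtained from the study's own inhibition data.

```python
# Hedged sketch of the competitive-inhibition relationship
#   Km,app = Km * (1 + [I]/Ki)   =>   Ki = [I] / (Km_app/Km - 1)
# All numeric values below are placeholders for illustration only.

def ki_apparent(km: float, km_app: float, inhibitor_conc: float) -> float:
    """Apparent Ki from the Km shift, assuming purely competitive inhibition."""
    return inhibitor_conc / (km_app / km - 1.0)

km, km_app = 2.0e-3, 5.0e-3      # mol/L, hypothetical values
galactose = 5.0e-2               # mol/L, hypothetical inhibitor concentration
print(f"Ki,app ≈ {ki_apparent(km, km_app, galactose):.2e} mol/L")  # ~3.33e-02
```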
CONCLUSION The present study describes a simple, inexpensive and novel procedure for modifying silver nanoparticles with an HNO3/H2SO4 mixture and exploiting them as a nanomatrix for immobilizing Aspergillus oryzae β-galactosidase. The covalently linked enzyme exhibited high immobilization efficiency and markedly improved stability against various physical and chemical denaturants. Moreover, the immobilized enzyme system was not restricted by diffusional limitations and can hence be exploited in biotechnological processes for producing galacto-oligosaccharides from the transgalactosylation of lactose in a convenient and inexpensive way. SUPPLEMENTARY MATERIAL The experiment related to the immobilization of soluble β-galactosidase on unmodified AgNPs was described in our previous manuscript. 20 Since the immobilization yield of the enzyme was lower on unmodified AgNPs (80%) than on the functionalized AgNPs (93%), we continued our further studies with the carboxylated AgNPs. Table 3. Ki app values for soluble β-galactosidase (SβG) and enzyme immobilized on surface-functionalized AgNPs (IβG) in the presence of galactose. Each value represents the mean of three independent experiments performed in triplicate, with average standard deviations <5%.
Reading Together: Engaging Undergraduate Writers Through an Online Book Club This teaching reflection examines how "reading together" was fostered in synchronous and asynchronous online environments in two undergraduate creative writing courses through participation in a virtual book club. In the first course, prior to the pandemic, students had the option of meeting in person or via Zoom while we read Daisy Johnson's Oedipus Rex retelling, Everything Under, for the book club. In the second course, during the pandemic, students had virtual synchronous and written participation choices while we read together Jessica Anthony's political satire, Enter the Aardvark, with the author visiting in two sessions. In both cases, the goals were consistent: to get students reading as writers; to foster intrinsic motivation through personal choice; and to satisfy students' desire for community connection while still accommodating personal schedules and geographical location. A virtual book club lets students read on their own schedule and in their own space, but still share their experience and observations with peers over greater distances (and time zones) than would otherwise be possible. In both courses, I sought to build a sense of virtual community to facilitate vibrant discussion of current literary works and how they related to students' perceptions of literature and, of course, their own writing. Underlying all of this was a desire to help strengthen my students' reading muscles and help them see reading as part and parcel of their own artistic process. Reading as a creative writer, or "creative reading," to channel Emerson (1837), encompasses a different focus on the text than that of leisure reading: one of noticing what effect is being created by the writer and why they chose to do it that way. It is like looking under the hood of a car to figure out how the engine works or slowing down a video of an elite runner to study foot strikes. Creative reading is certainly akin to close reading but with a different, perhaps more self-serving, goal: to expand one's repertoire of writing strategies, to "steal like an artist," as Austin Kleon (2012) says. The book club model has been used by educators in various disciplines to help students connect course concepts to non-academic or "real world" scenarios, as well as to strengthen reading skills via a "collective literary experience" (Sylvan 2018, 226). Scourfield and Taylor's (2014) undergraduate social work program employs the book club as a co-curricular activity where students discuss program content through the lens of popular fiction, promoted through the university's learning management system (LMS) as well as via a dedicated Twitter feed. In fact, in one semester, club tweets were seen by the book club novel's author, who later arranged to attend a meeting. Wyant and Bowen (2018) describe book clubs for sociology students held either during class time (face-to-face) or via online discussion forums, which let students engage with sociological concepts within a fictional "shared reality" (262). Cohen's (2006) book club is organized to help business students develop ethical awareness by reading popular non-fiction. Verran (2019) analyzes how a book club for first-year biomedical students lends insight into how infectious disease is perceived by non-scientists, leading to productive discussions on such diverse topics as the laboratory environment and representations of female scientists.
Similarly, Griffard, Mosleh, and Kubba (2013) track how participation in a book club helps pre-med students learn about biology through a historical/cultural lens. Ruzich and Canan (2010) developed a book club for high school seniors to bridge the summer between graduation and the first year of college. In this case, it was an entirely online book club or, as they describe it, "a summer reading assignment that would mimic book club participation" (62, emphasis added). Such a distinction is worth noting, particularly when academic book clubs are compared with non-academic or "real" ones, a.k.a., the "productive adult book club" (Beach and Yussen 2011). The biggest difference, of course, is that the latter's members choose to be there while the former's do not. Some programs avoid this element of compulsion by making the club optional or extra-curricular but then the students who might benefit the most from its "collaborative exchange of ideas" may not sign up due to lack of extra time (see Scourfield and Taylor 2014;Ruzich and Canan 2010, 65). Integrating the club into course curriculum has other advantages-the "shared reality" of reading a common creative work contributes to classroom community as well as to collaborative learning (Wyant and Bowen 2018, 262). Moreover, student learning is enriched when instructors draw on book club discussions in lectures or otherwise connect the book to course concepts in various contexts. The key to successful "mimicking" of book club participation, then, lies in fostering intrinsic motivation ("I'm doing this because it's interesting and worthwhile") rather than extrinsic ("I'm doing this because I want a good grade"). Although it might be difficult to avoid extrinsic motivators entirely, their potentially deleterious impact can be minimized by letting students have as much control over their learning as is feasible (Bain 2014). To that end, I sought to maximize choice in how students participated in the book clubs, as I will describe further. Given the variety of disciplines already using the book club model, its application in a creative writing course seemed like a natural fit. We already did plenty of reading in my classes because, to paraphrase novelist Stephen King (2000), writing well requires reading well (see also Sellers 2017, 35;LaPlante 2007, 40;Kardos 2017, 3). At this point, it is useful to clarify two key differences between the book club model and conventional literature-based discussions, as used within my own online course. It is important to note that the book club does not replace regularly scheduled literature discussions (done asynchronously via learning management software forums in online classes) but, rather, augments them. The book club model offered: one, the ability to introduce a genre not otherwise covered in the curriculum; and two, a flexible structure by which students could have synchronous, face-to-face interaction with me, each other and, in fall 2020, a visiting author. With respect to the former, the book club allowed me to assign a work that was significantly different than anything else students would read that semester and from which they could glean new insights in a way that was more holistic. Most of the readings we do support specific craft elements and short writing assignments focused on helping students practice characterization, dialogue, imagery, and so on. The book club let students tackle a novel, a form we were not otherwise reading that semester. 
The novel's length, multiple plot lines, cast of characters, and complexity (perfect for sparking discussion), as well as its absence from my curriculum otherwise, made it the perfect candidate for a book club. With respect to the second point, the book club allowed multiple opportunities for face-to-face interaction, which can otherwise be difficult, if not impossible, in a fully online, asynchronous course. I knew, too, that I wanted to share new work with my students, ideally by a young(er), lesser-known or emerging writer. Good writing is ongoing, part of the current cultural milieu, and I wanted students to learn new names, to feel like part of that "shared community" referenced by Pietsch earlier and to interact with each other outside of the ordinary boundary of classroom time. Finally, the onslaught of COVID-19 affected the pedagogical backdrop for all of this planning, as it did so many things, raising the need for social distancing and a fully online modality. Could a book club still work without the face-to-face interaction of a "shared community"? In this reflection, I will describe two different approaches to the book club over two semesters. Although the book club model was generally successful in enriching students' reading skills and in offering productive choice in participation, careful planning was needed to sustain a virtual "shared community" of readers and writers. These courses had been online pre-COVID, and so the challenge for building connection within the virtual classroom was already there. That such connection is critical to learning, not just in the area of creative writing but in general, has been demonstrated repeatedly. Darby and Lang (2019) draw on numerous works-including Vygotsky's zone of proximal development; Garrison, Anderson, and Archer's 1999 research on communities of inquiry; and Joshua Eyler's 2018 How Humans Learn-to argue that while social interaction needs more purposeful planning in an online setting than in traditional classrooms, it is critical to successful cognitive engagement. Social presence-by instructor and students alike-is key to students being intellectually present. Spring Book Club (Pre-COVID) For my first undergraduate book club in spring 2019, I chose Everything Under by Daisy Johnson, shortlisted for the 2018 Man Booker (see Appendix 1). This was an easy choice for a number of reasons. One, as a contemporary adaptation of Oedipus Rex, it offered a storyline with which some upper-level students might already be somewhat familiar. Two, the quality of the writing was just superb, with haunting, unexpected imagery, a gender-fluid protagonist, and a challenging non-linear narrative, all of which made the work appropriate for a 300-level course. And three, since one of my goals was to connect students with living writers, especially those at least somewhat closer to their own age, the age of the author was a positive factor. Johnson is a very young writer, just in her mid-twenties at the time she published the novel, which I hoped would resonate with students. The class was a small (sixteen students), fully online 300-level fiction writing course with a focus on developing three short stories over the course of the semester. Because it was asynchronous, students could not be required to meet at a particular time. However, many lived in the area and so my goal was to offer opportunity for face-to-face meet-ups for those who desired such interaction as well as choice in when to attend. 
Students had to attend at least two of the five scheduled sessions, either in person or via Zoom. Sessions were held at different times throughout the semester, including Saturday afternoons (if a student was unable to attend any of the scheduled times due to work or other obligations, an alternative graded assignment was provided). Each book club week included a short written assignment posted on the course LMS, which everyone would complete prior to the meeting (whether attending or not) and the prompt for which would be the starting point of the discussion. My hope was that having students prepare written responses in advance would encourage verbal participation since they had something to say. Moreover, the assignment prompts were crafted to guide students in their reading, a strategy identified as a best practice by Wyant and Bowen (2018). The reading assignments were fifty- to sixty-page chunks. The reading schedule, along with the Zoom link and the location of the physical meeting (either at a downtown art gallery operated by the university or in my office on campus), was posted at the beginning of the semester. I hoped that the meeting locations would help foster a convivial atmosphere. The art gallery was an airy and welcoming space off campus, without an overly institutional atmosphere. But there was a second advantage to holding sessions there: English faculty led free drop-in creative writing workshops for the community, which were scheduled right after book club meetings. My intention was to offer additional motivation for students to attend: two events, back-to-back, might make a drive even more worthwhile and heighten the sense of being part of a writing community. In this way, the online course could encompass both asynchronous (completing the reading and questions on one's own) and synchronous (real-time discussions), virtual (via Zoom) and face-to-face (for those who could attend in person) choices that supported and amplified each other. Students could finish a book club session with Daisy Johnson's wonderfully jarring imagery reverberating through their minds, then have the opportunity to practice their own writing with a small group of local writers in an informal, non-academic setting. The choice of holding book club meetings at my office was similarly made with atmosphere, as well as pragmatics, in mind. It was quiet, easily accessible for students who were already on campus, and more personal than other public spaces, with family pictures, hot tea, plants, lots of books, and non-fluorescent lighting. The vibe was more of a home space than an institutional one, which I hoped would be more conducive to relaxed discussions. Fall Book Club (with COVID Restrictions) The second book club was held under somewhat different circumstances, not the least of which was the challenge of the pandemic. The class was an online introduction to creative writing that draws students from the creative writing concentration as well as non-majors seeking a general education credit. As an introductory course, the emphasis was on foundational elements of creative writing, exploring poetry, fiction, and creative nonfiction as expressive forms and developing creative reading skills. Restrictions due to COVID meant that all book club meetings would have to be entirely virtual, with no face-to-face option.
My goals, however, remained the same-to support reading skills through "reading together," to foster intrinsic motivation through personal choice, and to offer opportunity for community connection while still accommodating student schedules. One of my goals was to spur intrinsic motivation by letting students control as much of their learning as was feasible (Bain 2014). Pre-COVID, students could choose which book club sessions to attend and whether via Zoom or in person. During COVID, since in-person attendance was off the table, students could choose how many sessions to attend and what kind of written or creative response they would correspondingly complete. With respect to this last option, student work ended up including such creative responses as alternative endings to Anthony's novel, a new scene from the point of view of a minor character, and an infographic mapping the journey of the eponymous aardvark. However, one area of preference was missing: book choice. Ideally, book club participants collaborate in choosing club readings rather than the decision being made solely by the instructor (e.g., Sylvan 2018; Beach and Yussen 2011). This approach can promote student buy-in and motivation but can present its own challenges within an academic setting, particularly if required reading lists must be published well in advance of the semester. One alternative is to create a menu of four or five works from which students can choose, as discussed in Sylvan (2018), an idea that I may implement the next time I use this model. Regardless, in both courses discussed here, my choice was driven by fairly simple parameters: effective literary prose that could be used to demonstrate craft strategies; a contemporary writer who was relatively youthful (so that students could feel at least some generational connection); and vibrant and socially relevant plot lines that would provoke discussion. In the case of my fall book club, there was another factor influencing choice: the opportunity for the author to actually participate in book club sessions! Each year, we bring a writer to campus to give a reading and teach a master class. Because of the pandemic, our visiting writer, novelist Jessica Anthony, was unable to attend and had to give her reading virtually. In discussing how we might facilitate the second part of her visit-the master class-I proposed that we try something different. Instead of a ninety-minute online class, might she agree to three thirty-minute chats with my first-year students' book club? Anthony agreed. The book club would read her newest novel, Enter the Aardvark (2020), a political satire alternating between contemporary and Edwardian plot lines, each with a queer protagonist. The book was both funny and timely and I knew students would love it. They would also have the opportunity to get another perspective on craft and process; the prospect of meeting the author would add enrichment and motivation to their reading. Just as importantly, I hoped that engaging with Anthony would help them understand how literary fiction can be socially relevant and culturally engaged. The fully online, synchronous modality, via Zoom, meant that students from different geographical locations could interact in real time with our virtually visiting writer, who would remain at her home in Maine. 
As with spring, I wanted students to choose to attend book club rather than feel forced; at the same time, I did not want to unnecessarily add to the already heavy academic and personal load carried by many of our students. To that end, I lowered the attendance requirement for the fall book club: students only had to attend one session. I also restructured the written component of the book club. Instead of having to submit a written response prior to each meeting, students were provided with discussion questions to guide their reading and to serve as the starting point of each session's discussion. There was nothing to submit prior to a meeting. Instead, there would be a single assignment due at the end of the semester that encompassed their overall experience reading the novel. Students were able to exercise choice in how they participated: the more book club sessions they attended, the less they had to produce by way of written response. They also had some control over the form of their response. So, for example, if a student only attended one session, the book club assignment at semester's end was either a conventional literary essay or a hybrid response to the novel, using a mix of images, text, video, and/or other modalities (see assignment description in Appendix 2). Of course, attending only a single session meant that a student was not fully benefiting from the social interaction or sense of community, but they were still able to demonstrate their engagement with the novel, their "reading as writers." The essay assignment asked for analysis of Anthony's use of images, energy, tension, and insight (based on our class text, Heather Sellers' The Practice of Creative Writing), and so in that way they could integrate course concepts with their book club reading. If students attended two sessions, their response would be a two-page personal reflection. If they attended three, the response could be brief and creative: a poem, a short scene, a hand-drawn comic strip or, really, any imaginative form that they chose, as long as it demonstrated knowledge of the novel in some way. And if students attended all four sessions? No written response was required. See Appendix 2 for book club assignments. My intention in structuring the written component of the book club in this manner was to incentivize participation via discussion while minimizing written homework, a departure from the spring book club model. "Attendance" meant being present for the whole session and having at least one substantive comment to contribute, a fairly low threshold. My fear that omitting the advance written response would dampen verbal participation did not, in fact, hold true when comparing the spring with the fall club discussions. There was no real difference: in both semesters, conversation lagged at first but, after a minute or so and some gentle prompting on my part, soon started and flowed well. And in both classes, it was clear that students, once warmed up, enjoyed talking about their responses to the book and the ideas it spawned for them. A second fear in the fall was that Anthony's presence, as the author, would cause shyness among students; I was wrong on that point as well. In fact, during her first visit, a conversation sprang up among students that was so lively and prolonged that I finally had to respectfully intervene in order to invite her into the conversation.
Reflection on Experiences Both Wyant and Bowen (2018) and Verran (2019) find that in-person discussion time is critical to success, a finding with which I agree. Yet, moving the book club online does not necessarily prevent in-person discussion since video conferencing platforms like Zoom allow for organic exchanges that contribute to a dynamic experience and have the potential to create that sense of "reading together." The discussion prompts for each session were developed to foster substantive, self-directed conversation among students and, in the case of the fall book club, with Jessica Anthony, our visiting writer. The challenge, of course, was to provide enough scaffolding so that students understood expectations for participation while, at the same time, offering enough leeway to encourage self-directed and spontaneous expressions. To do this, I employed two different kinds of questions: questions which drew attention to specific moments in the text and exemplified the specificity that students needed to bring to the discussion (e.g., What do you notice about the Namibians' belief about wearing "skins of the enemy," the practice of taxidermy and Alexander Wilson's own obsession with wearing Ronald Reagan's clothes?) and open-ended questions that might extend the discussion in some new way, based on what students found personally interesting (e.g., What effect did the opening have on you as a reader? How does Anthony create energy in this section?). In both courses, discussion prompts were such that students needed to read carefully, drawing on both content knowledge and their own aesthetic sensibility as creative writing students. Prompts were also meant to guide students' reading, to help them better notice both content and craft elements, which is a key part of creative reading (Sellers 2017, 35; LaPlante 2007, 40; Kardos 2017, 3). As another example, one of the prompts for Everything Under was: Consider Sarah as a character. Would you describe her as a memorable character? Does she (so far) come across with some complexity? Ideally, we want a character to be both unexpected and yet convincing. Do you think Johnson has accomplished this? Was a sense of "reading together" created? Yes, particularly with those students who attended the most frequently. The first few minutes of each session in both semesters were usually given over to a group rant about pet peeves with the book or with school, generating a certain camaraderie. But more substantive, self-directed discussion usually followed, especially when students directed questions at each other or, in the case of fall semester, at our visiting writer, instead of at me as instructor. While the prompts were used as starting points, the discussion was not constrained by the prompt and would generally move in a new direction, which is exactly what I wanted to see as an instructor. I repeatedly reminded students that it was their discussion they were holding, not mine. Students' sense of agency became more apparent when author Jessica Anthony was present for book club sessions, and they felt free to ask her about how the novel was initiated and developed. In fact, there was more discussion on process than on the novel's actual content in those sessions. Given that my emphasis throughout the semester was on helping students think of themselves as writers, to take their burgeoning craft seriously, this was a highly positive outcome and one that was mentioned by students in their course comments.
There was little, if any, difference in the overall quality of discussion that occurred with spring's hybrid modality compared to the fully online setting in fall. If anything, discussions went more easily when everyone was on Zoom, as if leveling the interpersonal playing field. Students who attended more frequently got to know each other by name, something that rarely happens in an asynchronous online class. A few discovered that they were in other classes together, which introduced a level of connection that did not exist before. In spring semester (pre-COVID), I was pleasantly surprised at how many students chose to attend face-to-face, but disappointed that more did not stick around for the community writing workshop that followed. As an aside, one of the advantages of using a newly published or lesser-known novel is that there is relatively little material available online from which students might draw to avoid actually reading (see Broz 2011 for interesting comparisons). Written responses from both semesters were mostly of good quality, with students not shying away from articulating both struggles and insights, particularly in the case of the Johnson novel, which many found challenging. Wyant and Bowen (2018) identify four best practices for undergraduate book clubs: one, keep groups small; two, offer guiding questions to help students prepare; three, have groups meet multiple times over the semester; and four, reference relevant ideas from the club book in instructor lectures or other aspects of the course. They found that, while having students post responses to an online forum was useful, in-person discussions "facilitated stronger connections between students and allowed them to teach each other" (Wyant and Bowen 2018, 269). This aligns with my own experience, even when using video conferencing platforms to facilitate synchronous, real-time discussions. Although there was inevitably an initial awkwardness and accompanying silence, conversation surged once students warmed up; the more sessions that students attended, the more comfortable they became with the modality. Moreover, keeping cameras on also contributes to conviviality. Guiding questions based on the assigned reading help students to prepare productively, while attendance at more than two sessions allows students to get to know each other more than otherwise occurs in solely online, asynchronous settings. The last aspect to address in this reflection is the question of instructor presence. In the reviewed literature, many book club sessions are students-only but, as in the case of book choice, not all. For example, Cohen (2006) is not only present during his whole-class book club discussions but also has community guest speakers and even provides themed meals, while Columbia's Center for the Professional Education of Teachers recommends that the instructor "lead by example" during book club sessions by demonstrating good listening skills (2017,4). Similarly, social work book club sessions have staff facilitators lead discussion (Scourfield and Taylor 2014). Both pre-COVID and during COVID, I wanted to be present since this was one more way to provide learner-instructor interaction, which is critical to meaningful online learning. Students value instructor participation in online discussion forums; my experience has been that they feel similarly about instructor presence in its virtual, synchronous equivalent (see Darby 2019, 40). 
As well, I was concerned that sessions might end up overly short or otherwise devolve into unproductive or off-topic time. To address this, a student might be assigned as discussion leader for each session, responsible for keeping conversation on track or leading via discussion questions that they have prepared themselves (see Ruzich and Canan 2010; Scourfield and Taylor 2014). Another option is to assign groups to work on a collective, collaborative project in connection with the book: a group presentation, for example, as Wyant and Bowen (2018) do. Students are able to use club time to work on the presentation, for which they are given a collective grade worth 60 percent of the total book club grade. This is an interesting possibility that I may consider further, though I tend to view the group presentation assignment as posing its own pedagogical challenges, which might distract from the goals of the book club as it is used in my classes. Conclusion The pandemic upended how we teach, but the pervasiveness of new technologies, like Zoom and other video platforms, opened up new pedagogical possibilities. The potential of the undergraduate book club to foster reading skills, introduce students to new writers, and help students engage with new ideas and create connections lies not in whether students are literally present, but in whether they are intellectually and emotionally present. And that kind of presence can be fostered through flexible and thoughtful book club design, regardless of modality.
Comparison of the slow-pull and aspiration methods of endobronchial ultrasound-guided transbronchial needle aspiration for next-generation sequencing-compatible tissue collection in non-small cell lung cancer Abstract Background Personalized treatment for non-small cell lung cancer (NSCLC) has advanced rapidly, and elucidating the genetic changes that trigger this disease is crucial for appropriate treatment selection. Both slow-pull and aspiration methods of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) are accepted methods for collecting samples suitable for next-generation sequencing (NGS) to examine driver gene mutations and translocations in NSCLC. Here, we aimed to determine which of these two methods is superior for obtaining higher-quality samples from patients with NSCLC. Methods Seventy-one patients diagnosed with NSCLC via EBUS-TBNA using the slow-pull or aspiration (20-mL negative pressure) methods between July 2019 and September 2022 were included. A total of 203 tissue samples from the 71 patients were fixed in formalin, embedded in paraffin, and mounted on slides. The presence of tissue cores, degree of blood contamination, and number of tumor cells were compared between the groups. The success rate of NGS, using the Oncomine Dx Target Test Multi-CDx, was also compared between the groups. Results The slow-pull method was associated with a higher yield of tissue cores, a lower degree of blood contamination, and a higher number of tumor cells than the aspiration method. The success rate of NGS was also significantly higher for the slow-pull group (95%) than for the aspiration group (68%). Conclusion Overall, these findings suggest that the slow-pull method is a superior technique for EBUS-TBNA to obtain high-quality tissue samples for NGS. The slow-pull method may contribute to the identification of driver gene mutations and translocations and facilitate personalized treatment of NSCLC. KEYWORDS clinical cancer research, next-generation sequencing, non-small cell lung cancer, oncogenes, targeted therapy | INTRODUCTION In recent years, personalized treatment for non-small cell lung cancer (NSCLC) has advanced rapidly. Targeted drugs for advanced or recurrent NSCLC with driver gene mutations/translocations have improved prognoses compared with cytotoxic anticancer drugs. 1 In Japan, molecularly targeted drugs for epidermal growth factor receptor (EGFR) mutations, 2 the anaplastic lymphoma kinase (ALK) fusion gene, 3 the c-ROS oncogene 1 (ROS1) fusion gene, 4 the v-raf murine sarcoma viral oncogene homolog B1 (BRAF) V600E mutation, 5 the mesenchymal-epithelial transition factor gene exon 14 skipping mutation, 6 the rearranged during transfection (RET) fusion gene, 7 the Kirsten rat sarcoma viral oncogene homolog G12C mutation, 8 and the neurotrophic tyrosine receptor kinase fusion gene 9 are covered by insurance. Previously, driver gene mutations/translocations were evaluated using individual companion diagnostic systems. Recently, next-generation sequencing (NGS) has made it possible to search for them simultaneously. Furthermore, NGS is a powerful tool for diagnosing and treating NSCLC, as it can be used to monitor the response to treatment and detect the development of resistance. 10 The use of NGS and the treatment of NSCLC are still evolving; however, NGS can potentially revolutionize the way that this cancer is managed. 11 [13][14][15][16] However, the quality and quantity of nucleic acids in the specimen must meet specific standards, and a large number of tumor cells must be collected to ensure sufficient nucleic acid yield for analysis. 12
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a safe and minimally invasive procedure that can be used to obtain tissue samples from bronchoscopically accessible hilar and mediastinal lesions. It is a standard method for obtaining tissue samples for lung cancer and has high diagnostic sensitivity and specificity. 17 However, NGS analysis is not always successful with EBUS-TBNA specimens. 16,18 This is because EBUS-TBNA specimens are often contaminated with blood, which can lead to poor nucleic acid quality and adversely affect the success of NGS analysis. 13,19 Two sample collection techniques have been compared in previous studies: the slow-pull and aspiration methods. 21,22 The slow-pull method applies minimal aspiration force by slowly and continuously pulling a stylet without attaching a syringe; in contrast, the aspiration method applies negative pressure by attaching a syringe to the needle. Wang et al. conducted a meta-analysis of previous studies and reported that the slow-pull method allowed the collection of specimens with less blood contamination and had a better tissue core collection rate than the aspiration method. 23 However, few studies have compared the two techniques in EBUS-TBNA, and it is unclear whether the slow-pull method can be used to collect lung samples suitable for NGS, with low blood contamination and high tumor cell counts. Therefore, in this study, we retrospectively compared the quality of specimens collected using the two techniques from patients diagnosed with NSCLC using EBUS-TBNA. | Study design and patients This was a single-center, retrospective study conducted at the Yokohama City University Medical Center, Japan. The study was conducted in accordance with the tenets of the Declaration of Helsinki and approved by the Ethics Committee of Yokohama City University (approval number: F221000009). Patients diagnosed with NSCLC using EBUS-TBNA via the slow-pull or aspiration method between July 2019 and September 2022 were included in this study. This study was retrospective in nature and thus did not require written consent from patients. Nevertheless, we published the study information on the hospital's website and offered patients the option to refuse participation. The following patients were considered eligible for the study: (a) patients who underwent EBUS-TBNA using the slow-pull or aspiration methods between July 2019 and September 2022, (b) patients whose tissue samples were obtained using EBUS-TBNA, and (c) patients diagnosed with NSCLC via tissue diagnosis. For patients who underwent both slow-pull and aspiration procedures, specimens collected using both methods were combined for NGS analysis. Therefore, the success rates of NGS analysis for the two methods could not be compared in these patients, and they were excluded from this study. We collected patient background data including age, sex, smoking history, Eastern Cooperative Oncology Group performance status (ECOG PS), duration of the procedure, number of specimens collected, stage of NSCLC, pathological diagnosis, and programmed death ligand 1 (PD-L1) tumor proportion score (TPS). Additionally, we collected data on the puncture site and the long diameter of the punctured lesion. Puncture sites were classified based on the lymph node map created by the International Association for the Study of Lung Cancer. 24
| EBUS-TBNA procedure The patients were placed under moderate-to-deep sedation with intravenous anesthesia (propofol and dexmedetomidine hydrochloride). A local anesthetic (2% lidocaine) was administered intratracheally during the examination. Blood pressure, pulse rate, and percutaneous oxygen saturation were monitored during the examination, and oxygen was administered to maintain a percutaneous oxygen saturation ≥ 90%. EBUS-TBNA was performed using an ultrasound bronchial fiber videoscope (BF-UC260FW or BF-UC290F; Olympus, Tokyo, Japan) and a 21-gauge needle (NA-U401SX-4021; Olympus). After the delineation of lesions by EBUS, color Doppler ultrasound was used to evaluate the blood flow within and around the lesion to determine the optimal puncture site. A puncture needle was used to perforate the target mass, and a stylet was immediately pressed against the mass. The slow-pull method was performed as follows: approximately 20-40 strokes were performed while the stylet was slowly and continuously removed. The aspiration method was performed as follows: the stylet was removed entirely, a 20-mL negative-pressure syringe was attached, and 20-40 strokes were performed. These steps were repeated several times until a sufficient number of specimens had been obtained. The specimen in the puncture needle was pushed out with an air-filled syringe and immediately placed in 10% neutral buffered formalin solution (Muto Pure Chemicals Co., Ltd.). All procedures were performed under the supervision of a respiratory physician with at least 8 years of experience in bronchoscopy. Owing to staffing issues in the pathology department, rapid on-site evaluation was not performed in all cases. | Evaluation of specimen pathology Specimens obtained via EBUS-TBNA were fixed in 10% neutral buffered formalin solution for 20-28 h and embedded in paraffin (Thermo Fisher Scientific Inc.). Thin slides were prepared from the formalin-fixed paraffin-embedded (FFPE) specimens, cut to 3-4-μm thickness with a microtome (Yamato Kohki Industrial Co., Ltd.), and histopathological examination was performed using hematoxylin-eosin staining. We collected data on the histopathological analysis, including the presence of tissue cores, the degree of blood contamination, and the number of tumor cells on the slides. We defined a tissue core as a contiguous string of lung cancer tissue on a slide 25 (Figure 1A,B). Blood contamination was classified into three levels: low (no or few blood cells affecting diagnosis), moderate (blood cells obscure part of the specimen, but pathological diagnosis is possible), and high (many blood cells make pathological diagnosis difficult) 19,26 (Figure 1C-E). The tumor cell count was defined as the number of tumor cells on the slide. 12 Cells that were crushed and difficult to identify as tumor cells were not counted. Experienced pathologists at Yokohama City University Medical Center performed all pathological examinations. The pathologists were blinded to the EBUS-TBNA method used to collect the specimens. | NGS analysis Patients diagnosed with NSCLC using EBUS-TBNA with the slow-pull or aspiration methods and who subsequently underwent NGS were selected. NGS was conducted using the Oncomine Dx Target Test Multi-CDx (ODxTT; Thermo Fisher Scientific Inc., Waltham, MA, USA) or in the clinical trial Lung Cancer Genomic Screening Project for Individualized Medicine in Asia (LC-SCRUM-Asia). 27
ODxTT is used to analyze hotspot mutations, using DNA derived from tumor samples, and fusion genes, using RNA derived from tumor samples, across 46 genes (Table 1). ODxTT is approved as a companion diagnostic system in Japan that identifies changes in five driver genes: EGFR, ALK, ROS1, BRAF V600E, and RET. 28 The ODxTT analysis was performed based on Ion AmpliSeq technology after submitting 5-μm-thick slides prepared from FFPE specimens to SRL Laboratories. We collected data on the success or failure of the DNA and RNA analyses. We defined "success" as a case in which all genetic tests for both DNA and RNA were successfully analyzed. NGS in LC-SCRUM-Asia was conducted on fresh-frozen specimens collected separately. Fresh-frozen specimens were submitted directly to NGS after collection and freezing; therefore, the presence of tissue cores, the degree of blood contamination, and the tumor cell count could not be assessed under a microscope. Hence, we did not collect data on the success or failure of NGS in LC-SCRUM-Asia. | Statistical analysis We used the Mann-Whitney U-test to compare numerical data between the groups. We also used the chi-square test or Fisher's exact test to compare the proportions of categorical data between the groups. Statistical significance was set at p < 0.05 (two-tailed). Statistical analyses were performed using GraphPad Prism 9 software (GraphPad Software). | Patient characteristics Figure 2 presents the patient selection flowchart. A total of 173 patients underwent EBUS-TBNA between July 2019 and September 2022, and 86 were diagnosed with NSCLC. From these, 71 patients were included in this study after excluding 15 patients who underwent both slow-pull and aspiration procedures. Of the included patients, 32 underwent the slow-pull procedure alone; of these, 19 underwent ODxTT for genetic testing, and 5 submitted fresh-frozen specimens to LC-SCRUM-Asia. The total number of specimens (FFPE slides) collected from these 32 patients was 92. Of the 71 patients, 39 underwent aspiration procedures; of these, 22 patients underwent ODxTT-based genetic testing and 6 submitted fresh-frozen specimens to LC-SCRUM-Asia. A total of 111 specimens (FFPE slides) were collected from these 39 patients. We observed no significant differences in patient background between the slow-pull and aspiration method groups (Table 2). Twenty-eight patients (87.5%) in the slow-pull method group and 37 (94.9%) in the aspiration group underwent positron emission tomography (PET)-computed tomography (CT) prior to EBUS-TBNA. The patients who did not undergo PET-CT were examined using contrast-enhanced CT of the chest. The most common puncture sites in both groups are summarized in Table 2. TABLE 1 Target genes that can be detected using the Oncomine Dx Target Test Multi-CDx. | Pathological findings of FFPE slides of specimens collected via EBUS-TBNA The total number of specimens (FFPE slides) collected using the slow-pull and aspiration methods was 92 and 111, respectively. The slow-pull method group had a higher tissue core collection rate than the aspiration method group (60.9% vs. 46.9%, p = 0.046; Table 3). Specimens from the slow-pull method group had less blood contamination than those from the aspiration method group (low: 53.3% vs. 25.2%; moderate: 42.4% vs. 67.6%; and high: 4.4% vs. 7.2%; p = 0.0002; Figure 3A). Specimens from the slow-pull method group also had a higher number of tumor cells than those from the aspiration method group (328 [IQR: 149-625] vs.
90 [IQR: 20-210], p < 0.0001; Figure 3B).Even after excluding FFPE slides for which tissue cores were not collected, specimens from the slow-pull method group had a higher number of tumor cells than those from the aspiration method group (455 [IQR: 305-800] vs. 230 [IQR: 106-543], p = 0.0002; Figure 3C). | Success rate of ODxTT with FFPE slides of specimens collected via EBUS-TBNA FFPE slides of 41 patients (19 out of 32 in the slow-pull method group and 22 out of 39 in the aspiration method groups) were subjected to ODxTT.DNA and RNA were successfully analyzed in 33 patients (80.5%).The slowpull method was superior to the aspiration method, with successful analysis of both DNA and RNA from 18 patients (94.7%) in the slow-pull method group and 15 patients (68.2%) in the aspiration method group (p = 0.049; Table 4).Note: Data are presented as n (%).EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration.The chi-square test was used to compare the proportions of categorical data between the groups. F I G U R E 3 Comparison of the slow-pull and aspiration methods with respect to the (A) extent of blood contamination, (B) number of tumor cells (all formalin-fixed paraffin-embedded slides), and (C) number of tumor cells (excluding formalin-fixed paraffin-embedded slides for which tissue cores were not collected).We used Fisher's exact test to compare the proportions of categorical data between the groups (A) and Mann-Whitney U-test to compare numerical data between the groups (B, C).Only DNA failed to be evaluated 4 ( Only RNA failed to be evaluated 2 (4.9) 0 Both DNA and RNA failed to be evaluated 2 (4.9) 0 2 (9.1) Note: Data are presented as n (%).EBUS-TBNA, endobronchial ultrasound-guided transbronchial needle aspiration.The chi-square test was used to compare between the groups for the percentage of both DNA and RNA that were successfully evaluated.(2.6%) had bloody phlegm, requiring the administration of hemostatic agents the day after EBUS-TBNA, but it disappeared after 2 days. | DISCUSSION In this study, we demonstrated that the slow-pull method was more effective than the aspiration method in EBUS-TBNA for collecting specimens with a tissue core, less blood contamination, and a higher number of tumor cells. To the best of our knowledge, this is the first study to demonstrate that the slow-pull method increases the success rate of NGS analysis in EBUS-TBNA compared to the aspiration method. Although the syringe aspiration method is commonly used in EBUS-TBNA, the slow-pull method is used as frequently as the aspiration method in EUS-FNA of pancreatic masses. 29Regardless of the needle size or type, the aspiration force of the slow-pull method is remarkably lower than that of the aspiration method (slow-pull method, 0.6-2.9kPa; 10-mL negative pressure, 29.2-30.4kPa; 20-mL negative pressure, 45.6-46.7 kPa). 30his is important because blood contamination in EBUS-TBNA specimens can hamper NGS analysis. 13,19The weak aspiration force of the slow-pull method is expected to suppress blood contamination in the specimens, resulting in a higher success rate of NGS. 
In the present study, the slow-pull method was found to be superior to the aspiration method in terms of tissue core collection rate, with less blood contamination.This finding is consistent with the results of a meta-analysis of 11 studies, involving 1055 patients who underwent EUS-FNA of pancreatin masses, 23 indicating that the slow-pull method is associated with a higher tissue core collection rate and less blood contamination in EBUS-TBNA.To the best of our knowledge, only one study has compared the quality of specimens collected using the slow-pull and aspiration methods in EBUS-TBNA.A retrospective study involving 86 patients found that the slow-pull method was superior to the aspiration method in terms of tissue core collection rate but not in terms of blood contamination. 19However, the study did not examine the number of tumor cells and the success rate of NGS analysis. In this study, we found the slow-pull method to be superior to the aspiration method for collecting specimens with more tumor cells.The meta-analysis of EUS-FNA for pancreatic masses mentioned above reported no significant difference in tumor cell counts when comparing specimens collected using the slow-pull and aspiration methods. 23However, these studies focused on FFPE-cell blocks, not FFPE-embedded tissue, as in this study.In EBUS-TBNA, no studies have compared tumor cell counts for specimens collected using both methods.In this study, cells that were crushed and difficult to identify as tumor cells were not counted.In specimens collected using the aspiration method, it is possible that tumor cells are crushed by the strong aspiration force and are not counted.Further extensive studies are required to verify the differences in tumor cell counts between the techniques used in this study. The overall success rate of the ODxTT analysis was 80.5%.This is comparable to the 86.5% reported in a meta-analysis of 21 studies, involving 1175 patients. 31n patients with advanced recurrent NSCLC undergoing EBUS-TBNA, it is recommended that additional specimens be collected for genetic mutation/translocation analysis beyond the number of specimens required to confirm the diagnosis; specifically, at least three specimens should be collected. 17In this study, the median number of specimens collected was three for both slow-pull and aspiration method groups.Recently, the number of specimens collected (four or more) has been reported to be an essential factor affecting the success of NGS on EBUS-TBNA specimens. 32In this study, the NGS success rate may have been further improved by increasing the number of specimens collected to four or more.In recent years, the usefulness of the wet suction method, wherein the needle is filled with normal saline solution and a negative pressure is applied, 20 and the combination of the fanning and slow-pull methods, wherein the needle is stroked while changing its direction within the lesion, 33 has been reported for EUS-FNA of pancreatic masses.In EUS-FNA of pancreatic masses, a three-plane symmetric needle with Franseen geometry can improve the tissue core collection rate, 34 and its effectiveness in EBUS-TBNA is currently being assessed. 35urther studies are required to optimize the collection of specimens suitable for NGS in EBUS-TBNA using various methods and puncture needles. 
The results of this study did not reveal any factor that strongly recommends the use of the slow-pull method over the aspiration method when performing EBUS-TBNA. However, as there were no significant differences in the duration of the procedure or rate of complications between the methods, it is recommended that the slow-pull method be performed first, switching to the aspiration method when specimens cannot be collected with the slow-pull method.

However, this study has several limitations. First, this was a single-center retrospective study with a relatively small number of patients. Second, the tumor content rate was not examined in this study because the tumor cells were scattered in the specimens collected using EBUS-TBNA, making it difficult to accurately measure the tumor content rate. Third, only a portion of the patients in this study underwent NGS. Despite the small number of patients included in the statistical analysis, the success rate of NGS was higher in the slow-pull method group than in the aspiration method group. Finally, in this study, we excluded patients who underwent both procedures to compare the success rate of NGS analysis. Target lesion location and size were not significantly different between the methods, but biases such as tumor distribution within the target lesion could not be completely eliminated. Studies comparing the two methods in the same patients are necessary to avoid bias due to differences in the puncture lesions.

In conclusion, our study demonstrated that real-world EBUS-TBNA can be used to collect sufficient specimens suitable for NGS. The slow-pull method in EBUS-TBNA can readily increase the success rate of NGS analysis without the need for additional materials. As a result, the slow-pull method may contribute to the identification of driver gene mutations and translocations and facilitate personalized treatment of NSCLC, and thus may be worth applying as a standard technique for EBUS-TBNA. Nevertheless, prospective randomized controlled trials are needed to further understand the effectiveness of the slow-pull method for collecting samples suitable for NGS in EBUS-TBNA.

FIGURE 1 Evaluation of tissue core and blood contamination on hematoxylin-eosin-stained slides. (A) Slide with a tissue core, (B) slide without a tissue core, (C) slide with low blood contamination, (D) slide with moderate blood contamination, and (E) slide with high blood contamination.
TABLE 4 Success rate of the Oncomine Dx Target Test Multi-CDx with formalin-fixed paraffin-embedded slides of specimens collected via EBUS-TBNA.
Table 5 Patient characteristics. Data are presented as n (%) or median (interquartile range). The chi-square test and Fisher's exact test were used to compare the proportions of categorical data between the groups.
Pathological findings of formalin-fixed paraffin-embedded slides of specimens collected via EBUS-TBNA.
Table 3 shows the complications of EBUS-TBNA. Bacterial pneumonia occurred in one patient (3.1%) in the slow-pull method group but was resolved with antibiotic use. Bacterial pneumonia was also observed in one patient (2.6%) in the aspiration method group but was resolved with antibiotic use. In the aspiration method group, one patient (2.6%) had bloody phlegm, requiring the administration of hemostatic agents the day after EBUS-TBNA, but it disappeared after 2 days.
2023-09-22T06:17:31.667Z
2023-09-21T00:00:00.000
{ "year": 2023, "sha1": "6df2bd3b81f32faaca0cc25abd20aa4e66c86dc9", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.6561", "oa_status": "GOLD", "pdf_src": "Wiley", "pdf_hash": "70bf193b1d3ce65ef8dde806c375c1b61a11a73f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247618797
pes2o/s2orc
v3-fos-license
CollaChain: A BFT Collaborative Middleware for Decentralized Applications The sharing economy is centralizing services, leading to misuses of the Internet. Among the growing damages are data hacks, global outages and even the use of data to manipulate their owners. Unfortunately, there is no decentralized web where users can interact peer-to-peer in a secure way. Blockchains incentivize participants to individually validate every transaction and impose their block on the network. As a result, the validation of smart contract requests is computationally intensive while the agreement on a unique state does not make full use of the network. In this paper, we propose CollaChain, a new byzantine fault tolerant blockchain compatible with the largest ecosystem of DApps that leverages collaboration. First, the participants executing smart contracts collaborate to validate the transactions, hence halving the number of validations required by modern blockchains (e.g., Ethereum, Libra). Second, the participants in the consensus collaborate to combine their block proposals into a superblock, hence improving throughput as the system grows to hundreds of nodes. In addition, CollaChain offers its users the possibility to interact securely with each other without downloading the blockchain, hence allowing interactions via mobile devices. CollaChain outperforms the Concord and Quorum blockchains and its throughput peaks at 4500 TPS under a Twitter DApp (Decentralized Application) workload. Finally, we demonstrate CollaChain's scalability by deploying it on 200 nodes located in 10 countries over 5 continents. I. INTRODUCTION As the number of misuses of Internet data grows, so does the need for decentralized middleware rewarding individuals for sharing data. These misuses stem from a "centralization" of the web according to Turing-awardee Tim Berners-Lee, who contributed to the design of a decentralized alternative to his 30-year-old Web [67]. At its origin, the Web helped users communicate with each other through their desktop computers. In the 2000s users started sharing data through Google, Facebook, Microsoft, and Amazon. By 2025, the sharing economy-to which users can contribute with their mobile phone-is expected to represent $335 billion [25]. This centralization has severe drawbacks: it exposes data to leaks and hacks [57] and it facilitates user manipulation [74]. Decentralized applications, or DApps for short, are increasingly popular for allowing users to trade services peer-to-peer without transferring ownership. In the third quarter of 2020, DApps transaction volume experienced an 8-fold increase, reaching $125B [31]. With 96% of this volume occurring on top of the Ethereum blockchain [87] alone, most DApps are, however, plagued by Ethereum's capacity, capped at ∼15 transactions per second (TPS) [23]. Even without forcing miners to resolve a crypto-puzzle to obtain a proof-of-work, the proof-of-authority alternative of Ethereum delivers ∼80 TPS [63]. Ethereum suffers from (i) an expensive validation during the execution of smart contracts as well as (ii) an incentive for consensus participants to compete in a fierce battle that aims at imposing their block on the rest of the system. As a result, Ethereum already experienced congestion due to DApps [30] and its capacity is inherently too low to support a DApp with the 4000+ TPS throughput of Twitter [64].
In this paper, we propose CollaChain, a new collaborative byzantine fault tolerant middleware designed for decentralizing the sharing economy. It is compatible with the largest ecosystem of DApps as it runs an adapted version of the Ethereum Virtual Machine (EVM), called Scalable EVM (SEVM), and decouples the traditional blockchain design into (i) an SEVM component that divides the validation of EVM nodes by two as the number n of blockchain participants tends to infinity and (ii) a distributed consensus component that potentially decides a number of transactions proportional to n, hence addressing the two aforementioned limitations to scale as n increases. Although an SEVM node also validates transactions eagerly to mitigate denial-of-service (DoS) attacks, we limit the number of nodes receiving this transaction to achieve (i). Consensus participants collaborate to combine their proposals into the same superblock decision, as opposed to the competitive classic consensus approach, to achieve (ii). Perhaps more importantly, a user of CollaChain, who wants to ensure its DApp can access a consistent state of the blockchain without trusting a central entity, does not need to download the blockchain. CollaChain simply requires a web-based or mobile app and leverages the upper-bound f < n/3 on the number of arbitrary (byzantine) failures among n servers to reach consensus [62]. Other blockchains typically require users to either download block headers to verify that the current state is consistent, a process called "synchronizing", before issuing requests [40] or trust a central entity, which defeats the blockchain purpose. Ethereum fast synchronization requires more than 280 GB of free storage space and takes 4 hours on average on an i3.2xlarge AWS EC2 instance with 8 vCPUs, 61 GiB memory and 1.9 TiB NVMe SSD [41], a task nearly impossible for any mobile device. One may think of downloading less information with a "light" synchronization, however, the corresponding validation cannot guarantee that the blockchain is correct due to incomplete blockchain records [42]. CollaChain achieves 2K TPS when deployed on 200 machines spread in 10 countries over five continents. As indicated in Table I, it outperforms the non-sharded blockchains that tolerate byzantine failures. Note that we discuss sharding as an orthogonal optimization in §VI. CollaChain's throughput is two orders of magnitude larger than the capacity of Ethereum and, although not reported in Table I, one order of magnitude faster than the 172 TPS of the recent SBFT in a world-scale setting [53]. To the best of our knowledge, the non-sharded blockchains that outperform CollaChain are the ones that only tolerate crash failures. We compare CollaChain performance to Quorum and Concord ( §V-B), show the impact of validation reduction on performance using BlockBench [34], illustrate the scalability of CollaChain to hundreds of machines over 5 continents, and demonstrate a 4500 TPS peak throughput when running the Twitter DApp of the DIABLO benchmarking framework [11].
TABLE I Comparison of non-sharded blockchains: fault tolerance, smart contract support and throughput (TPS).
Blockchain | Fault tolerance | Smart contract | TPS
Ethereum v1.x [87] | probabilistic | Solidity | 15
Avalanche [76] | probabilistic | C-Chain | 1.3K
Algorand [50] | probabilistic | TEAL | 1K
Hyperledger Fabric [6] | crash | ChainCode | 3K
FastFabric [51] | crash | ChainCode | 20K
Cosmos/Ethermint [18] | byzantine | Solidity | 438
Burrow [61] | byzantine | Solidity | 765
SBFT [53] | byzantine | Solidity | 378
Stellar [65] | byzantine | SSC | 100
CollaChain | byzantine | Solidity | 4.5K
CollaChain builds upon various results.
The superblock optimization already appeared in a UTXO-based blockchain [29], however, CollaChain applies it to smart contracts by decoupling their execution and persistent storage into sub-tasks to avoid request losses as we will illustrate in §V-D. Its SEVM nodes receive the requests from the clients and batch them into blocks that are sent to consensus nodes. Similar to an Ethereum [87], [17] server (i.e., miner) or a Libra [8] server (i.e., validator), an SEVM server of CollaChain validates eagerly a transaction upon reception and validates lazily the same transaction upon execution (after the block that contains it is agreed upon). In contrast with these blockchains, an SEVM node does not propagate the transactions to other nodes upon reception from the client, hence reducing the number of eager validations. Similar to byzantine fault tolerant (BFT) replicated state machine protocols [13], [89], the consensus nodes decide on a unique batch of transactions without assuming synchrony as long as less than a third of consensus nodes are byzantine, which is resilient optimal [62]. In the remainder of the paper, we present our motivations and the necessary background ( §II), as well as our goals and assumptions ( §III). We then present CollaChain ( §IV) that we prove correct, and evaluate it in a geodistributed setting and compare it against other blockchains ( §V). Finally, we present the related work ( §VI) and conclude ( §VII). A smart contract to reconfigure the nodes of CollaChain is provided in Appendix A. II. BACKGROUND AND MOTIVATIONS a) Decentralized applications rationale: Decentralized applications (DApps) alleviate many problems induced by the centralization of the sharing economy. To mention a few, YouTube exprienced an outage [75] that DTube could have remedied by sharing videos peer-to-peer [36]. Uber drivers feel manipulated by an opaque matching algorithm [70], whereas the DApp counterpart, called Drife, could offer transparency [35]. So what is the performance necessary to implement such a decentralized version of the sharing economy? To answer this question, let us consider Twitter, which is a popular micro-blogging application. Twitter experiences more than 4000 tweets per second on average and its peak demand largely exceeds this number [64]. It is thus crucial for a mainstream decentralized middleware to support thousands of transactions per second. Unfortunately most blockchains cannot (cf. Table I). b) The redundant validations of Ethereum: Ethereum [87] features the Ethereum Virtual Machine (EVM) that was proposed in part to cope with the limited expressiveness of Bitcoin [71] and to execute DApps written in a Turing complete programming language as smart contracts. Go Ethereum, or geth for short, is the mostly deployed Ethereum implementation [49]. In order to check that a request (or transaction) is valid, all of the geth servers (i.e., miners) must validate twice each executed transaction: • Eager validation: This validation occurs upon reception of a new client transaction and checks the nonce value; that the sender account has sufficient balance; that the gas is sufficient to execute the transaction; and the transaction does not exceed the block gas limit, is signed properly and is not oversized. It reduces the effect of denial-of-service (DoS) attacks as an invalid transaction is dropped early. If the transaction is valid, it is propagated to other servers. 
• Lazy validation: This validation occurs before transactions are executed in a decided block and simply checks the nonce and whether there is enough gas for execution. This lazy validation is necessary to guarantee that transactions in a newly received decided block are indeed valid. The lazy validation is thus less time consuming in geth than the eager validation, this is why we focus on reducing the number of eager validations. This is an overconservative strategy because each to-beexecuted transaction of geth is validated twice by each server. This is unnecessary as an invalid transaction coming from a byzantine node will either be dropped by lazy validation prior to execution or fail execution and the state reversed if there is an invalidity not checked by the lazy validation. It is interesting to note, also, that in a system where few replicas are byzantine, there is no need for all servers to validate all transactions twice. We explain in §IV-C how, without reducing security, we reduce the number k of eager validations per server down to k/n to scale to a large system size n. c) The inefficiency of byzantine fault tolerant consensus: For security reasons, a blockchain must guarantee that nodes agree on a unique block at each index of the chain. To cope with malicious (or byzantine) participants, this requires solving the byzantine fault tolerant (BFT) consensus problem [72], where every non-byzantine or correct node eventually decides a value such that no two correct nodes decide differently. Unfortunately, traditional consensus protocols solve this problem by electing a leader node that tries to impose its value to the other nodes [19], [13], [16], [89]. While these leader-based designs proved effective in local area networks to deploy a secure version of the Network File System [19], it generally cannot scale to large blockchain networks because only one value is decided [15], [84], regardless of the number of proposed values in the system. Recent consensus implementations allowed to commit up to as many proposed values per consensus instance as participating nodes to scale performance [69], [60], [27]. However, some of these variants [69] fail at solving consensus because their binary consensus protocol may not terminate [82]. And the only blockchains that integrate a provably correct consensus implementation that combines block proposals into a superblock support simple transactions but cannot execute arbitrary programs or smart contracts [60], [29]. In §IV-C, we will describe the obstacles we overcome to combine multiple proposals of smart contracts creations and invocations during each consensus execution. III. GOALS AND ASSUMPTIONS We consider an open permissioned blockchain model [69], [29] in that a subset of the distributed machines have the permission to run the current instance of the consensus, or to execute smart contracts and transaction requests as well as to maintain the resulting state. This model is called "open" as permission can be revoked and we do not prevent a particular node from obtaining a permission later on: as opposed to Ethereum we simply prevent all nodes from providing the same service at the same time to avoid resource waste ( §IV-D4). We assume partially synchronous communication and computation in that the upper bound on the time it takes for a step exists but is unknown [37]. 
For simplicity, we assume that each permissioned participant runs both a consensus node and a state node and that up to f of these participants (and any of their nodes) can fail arbitrarily by being byzantine. In this case, we call such a participant as a blockchain node. a) The Blockchain problem: We refer to the blockchain problem as the problem of ensuring both the safety and liveness properties that were defined in the literature by Garay et al. [47] and restated more recently by Chan et al. [20], and a classic validity property [29]. Definition 1 (The Blockchain Problem): The blockchain problem is to ensure that a distributed set of blockchain nodes maintain a sequence of transaction blocks such that the three following properties hold: • Liveness: if a correct blockchain node receives a transaction, then this transaction will eventually be reliably stored in the block sequence of all correct blockchain nodes. • Safety: the two chains of blocks maintained locally by two correct blockchain nodes are either identical or one is a prefix of the other. • Validity: each block appended to the blockchain of each correct blockchain node is a set of valid transactions (non-conflicting well-formed transactions that are correctly signed by its issuer). The safety property does not require correct blockchain nodes to share the same copy, simply because one replica may already have received the latest block before another receives it. Note that, as in classic definitions [47], [20], the liveness property does not guarantee that a client transaction is included in the blockchain: if a client sends its transaction request exclusively to byzantine nodes then byzantine nodes may decide to ignore it. b) Our goal of a secure and efficient middleware for DApps: Our goal is thus to support DApps, by allowing clients (i) to access consistent data despite f < n/3 byzantine servers through all sorts of devices and (ii) to serve a large demand generated by network effects as follows: 1) Lightweight-security: the users should be able to securely interact with the blockchain from various devices. To access apps, users typically use handheld devices that cannot download blockchain histories due to resource constraints (Ethereum history exceeds 280 GiB [41]). yet they need to interact securely despite unpredictable message delays. 2) Thousands-TPS: the volume of transactions that can be served per second should prevent a backlog of requests that grows and leads to congestion. We know that DApps create congestion on Ethereum [30] and EOS [38], and popular applications, like Twitter, exceed 4000 requests per second [64]. Property (1) alleviates the need for clients to download the blockchain history, but requires them to interact securely, which is enabled by limiting the number of failures to f and querying f + 1 identical copies of the current state to retrieve the correct information as detailed in §IV. Also, we know that to allow users to issue (potentially conflicting) transactions from distinct devices, we need to solve consensus [52]. Property (2) lower bounds the capacity to around 2000 TPS to serve the demand of sharing applications. Although this might be insufficient to run multiple DApps, we explain in §V-G how to shard CollaChain and deploy different DApps to different shards. This would help CollaChain support many DApps smoothly, given its 4500 TPS peak throughput ( §V-F). IV. 
COLLACHAIN CollaChain is a collaborative blockchain compatible with the largest ecosystem of DApps, and it is optimally resilient against byzantine failures. The layered architecture is depicted in Fig. 1, with a Scalable EVM (SEVM) node at the top and a consensus node at the bottom that can be run on the same machine as a single blockchain node. Fig. 1. The architecture of CollaChain. A client sends a transaction to some replica(s); at each replica the web3.js server validates transactions and sends them to the transaction manager, which sends a block to the consensus client. The consensus client proposes it to the consensus protocol. Upon reception of a new block from the consensus client, the consensus protocol sends it through the network with a reliable broadcast. Remote replicas start participating in the same instance (if not done yet) upon reliably delivering this proposed block. When the consensus outputs some acceptable blocks, all of these blocks are combined into a superblock and sent to the SEVM. As in the EVM, the SEVM is responsible for executing and storing blocks, except that the SEVM will store multiple blocks per consensus instance. The communication between the consensus node and the SEVM node is event-based and implemented with gRPC. Although this presents an execution overhead, as both the consensus node and the SEVM node can typically execute on a single machine, it offers greater modularity. CollaChain offers a secure interface to lightweight clients ( §IV-B). To scale to a geodistributed network of n blockchain nodes, CollaChain reduces the time spent validating transactions as n grows ( §IV-C) and increases the number of blocks committed per consensus instance as n grows ( §IV-D). Before discussing these, we present the CollaChain overview through the transaction lifecycle ( §IV-A). A. The transaction lifecycle In the following, we use the term transaction to indistinguishably refer to a simple asset transfer, the upload of a smart contract or the invocation of a smart contract function. The lifecycle of a transaction goes through these subsequent stages: 1. Reception. The client creates a properly signed transaction and sends it to at least one CollaChain node. Once a request containing the signed transaction is received by the JSON RPC server of the SEVM state machine running within CollaChain, the eager validation ( §II) starts. If the validation fails, the transaction is discarded. If the validation succeeds, the transaction is added to a transaction pool. Unlike in Ethereum, where the transaction would be propagated to all miners, increasing the number of eager validations, CollaChain simply proposes it to the consensus node as follows. If the number of transactions in the pool reaches a threshold, then the transaction manager creates a new proposed block with a number (defined by the threshold) of transactions from the pool. It serializes and sends the proposed block to the consensus client. For the sake of our superblock optimization (cf. §IV-D2) and in contrast with Ethereum, the proposed block does not contain a hash just yet. 2. Consensus. Once the consensus client receives a proposed block, it sends the corresponding byte array to the consensus system by invoking the propose([]byte) method. The consensus system starts a new instance of consensus using the new block if it is not currently part of another consensus instance. Otherwise, it adds the new block to the block queue, waiting for the current consensus instance to terminate.
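To make the reception stage concrete, the following is a minimal Go sketch of how a transaction manager along these lines could batch pooled transactions into a proposed block and hand it to the consensus client. The type names, the threshold field and the propose callback are illustrative assumptions, not CollaChain's actual API.

```go
package sketch

import (
	"encoding/json"
	"sync"
)

// Tx stands in for a signed transaction that already passed eager validation.
type Tx struct {
	From  string `json:"from"`
	Nonce uint64 `json:"nonce"`
	Data  []byte `json:"data"`
}

// ProposedBlock groups pooled transactions; as described above, no hash is
// computed yet because hashes are only set once the superblock is decided
// and the block is executed.
type ProposedBlock struct {
	Txs []Tx `json:"txs"`
}

// TxManager batches validated transactions and forwards whole blocks to the
// consensus client instead of gossiping individual transactions to other
// SEVM nodes.
type TxManager struct {
	mu        sync.Mutex
	pool      []Tx
	threshold int
	propose   func(serialized []byte) // stands for the consensus client's propose([]byte)
}

// Add appends a validated transaction to the pool and, once the pool reaches
// the threshold, serializes a proposed block and submits it to the consensus
// client (which queues it if a consensus instance is already running).
func (m *TxManager) Add(tx Tx) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.pool = append(m.pool, tx)
	if len(m.pool) < m.threshold {
		return nil
	}
	block := ProposedBlock{Txs: m.pool[:m.threshold]}
	m.pool = append([]Tx(nil), m.pool[m.threshold:]...) // keep the remainder in the pool
	serialized, err := json.Marshal(block)
	if err != nil {
		return err
	}
	m.propose(serialized)
	return nil
}
```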
Like in classic reductions of the general consensus problem to the binary consensus problem [9], [10], CollaChain's consensus execution consists of an all-to-all reliable broadcast of the blocks among all consensus replicas, which triggers as many binary consensus instances; their outputs indicate the indices of acceptable blocks, as detailed in §IV-D. The consensus system creates a superblock with all acceptable blocks ( §IV-D2) and sends this superblock to the state machine by invoking the commit([]byte) method. 3. Commit. When the superblock is received by the gRPC server running in the SEVM state machine, the superblock is first deserialized using JSON unmarshalling. The SEVM does the lazy validation ( §II) before committing the deserialized transactions. Note that, as opposed to the eager validation, all servers execute the lazy validation for a committed transaction, yet it does not prevent CollaChain from scaling to hundreds of nodes ( §V-E). Once the superblock is decided, each of its blocks is executed, their hash is included, their results are written to persistent storage on the local disk, and the lifecycle ends. B. Secure interface for lightweight clients As opposed to classic blockchains, like Ethereum, CollaChain does not require the client interacting with the service to download the blockchain or its block headers. Instead, CollaChain accepts connections from simple javascript-enabled browsers, as can be found on mobile devices [68]. Similar to geth, CollaChain supports a web3.js API that allows the user to communicate through http, IPC or websocket. Given that CollaChain tolerates f failures, it is sufficient for the client to query the same copy of the world state from f + 1 distinct blockchain servers to guarantee that this copy is consistent. And the client is guaranteed to find this copy at f + 1 blockchain nodes by contacting 2f + 1 blockchain nodes by assumption. As a result, the client interacts securely with CollaChain without any blockchain records whereas an Ethereum "light" client cannot interact securely due to incomplete blockchain records [42]. Hence CollaChain guarantees the Lightweight-security property ( §III). C. From the EVM to SEVM Here we present the modifications we made to the original EVM (and in particular geth v1.8.27) in order to obtain the Scalable EVM, or SEVM for short. More specifically, provided that k transactions are received by CollaChain, we reduce the average number of transactions each SEVM node eagerly validates from k to k/n. 1) Reducing the transaction validations: As opposed to each Ethereum server that validates eagerly and lazily each of the k transactions of the system, each of the n CollaChain servers eagerly validates on average k/n transactions. Specifically, only one SEVM node needs to eagerly validate each transaction: the first SEVM node receiving the transaction validates it and, instead of propagating it to other SEVM nodes, simply proposes it to the consensus. As a result, CollaChain limits the redundant validations, which improves performance. More precisely, if the number of SEVM nodes is n, then each SEVM node does 1 + 1/n validations per transaction on average (one lazy validation + 1/n eager validation) compared to the two validations needed in geth. As n tends to infinity, CollaChain servers validate on average half of what geth servers validate. In the worst case, where all clients send their transactions to f + 1 = n/3 servers simultaneously, each server will still eagerly validate only k/3 transactions.
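As a rough illustration of the eager checks listed in §II, the sketch below shows the kind of test an SEVM node could run once, on reception, before proposing the transaction to consensus rather than gossiping it. The Tx and Account fields are simplified placeholders, not geth's actual types.

```go
package sketch

import "errors"

// Simplified stand-ins for geth's transaction and account state.
type Tx struct {
	Nonce, GasLimit, GasPrice, Value uint64
	Size                             int
	SigOK                            bool // outcome of signature recovery, stubbed here
}

type Account struct {
	Nonce, Balance uint64
}

var (
	errOversized = errors.New("oversized transaction")
	errSignature = errors.New("invalid signature")
	errNonce     = errors.New("nonce mismatch")
	errGas       = errors.New("exceeds block gas limit")
	errBalance   = errors.New("insufficient balance")
)

// eagerValidate mirrors the eager checks of §II (size, signature, nonce, gas,
// balance). In the design above it runs once, on the SEVM node that received
// the transaction; the transaction is then proposed to the consensus instead
// of being propagated to the other SEVM nodes, so each node performs roughly
// 1/n eager validations per transaction on average.
func eagerValidate(tx Tx, acc Account, blockGasLimit uint64, maxSize int) error {
	switch {
	case tx.Size > maxSize:
		return errOversized
	case !tx.SigOK:
		return errSignature
	case tx.Nonce != acc.Nonce:
		return errNonce
	case tx.GasLimit > blockGasLimit:
		return errGas
	case acc.Balance < tx.GasLimit*tx.GasPrice+tx.Value:
		return errBalance
	}
	return nil
}
```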
Note that, as a result of our optimization, a byzantine SEVM node could propose transactions to the consensus without validating them eagerly, in this case two things can happen: i) The transaction is discarded at the lazy validation if invalid ii) The SEVM attempts to execute the invalid transaction, fails at execution and reverses the state to what it was. Either way, there is no impact on the safety of the blockchain. This is also not a DoS vulnerability of CollaChain, as even a byzantine EVM node in Ethereum can propagate invalid transactions to all EVM nodes, forcing all EVM nodes to unnecessarily eager-validate them. Finally, reducing the validation needed at each SEVM node helps CollaChain reach the Thousands-TPS property ( §III). 2) Reliably storing superblocks: At each index of the blockchain, our SEVM typically executes many more transactions as part of function execute _ transaction (lines 1-16) than Ethereum. This is due to the consensus outputting through the commitChan channel a superblock containing potentially as many blocks as blockchain nodes (line 3). In Ethereum, blocks are created before the consensus, thus geth updates only one block, by setting its state parameters, per consensus instance: updateBlockState points a block to its parent, assigns the block header timestamp and the number of transactions associated with a block. This function should thus be invoked for each block before the transactions of the block are executed and persisted, in order to ensure that the data structures are updated properly. To store multiple blocks at the end of the consensus instance, we modified geth to updateBlockState (line 10) multiple times per consensus instance (one invocation per block) as follows: More specifically, we reordered the transactions (as disordered transactions could be discarded due to their invalid nonces) and changed the original procedure to guarantee that not only one block but all blocks of our superblock were correctly stored in the transaction and reception tries as a batch of n blocks. Like the C++, python and geth software of Ethereum, we reliably store the information in the open source key-value store LevelDB (line 14). 3) SEVM support for fast-paced consecutive blocks: Since our consensus system is fast, it creates and delivers superblocks at high frequency through the commit channel to the SEVM. As geth does not expect to receive blocks at such a high frequency, it raises an exception outlining that consecutive block timestamps are identical, which never happens in a normal execution of Ethereum. This equality arose because geth encodes the timestamp of each block as uint64, not leaving enough space for encoding time with sufficient precision. geth typically reports an error when consecutive timestamps are identical, due to a strict check that compares the parent block timestamp to the current block timestamp in go-ethereum/consensus/ethash/consensus.go: header.Time < parent.Time. We changed the original check to header.Time <= parent, which allowed for fast-paced executions of consecutive blocks. 4) Bypassing the SEVM resource bottlenecks: After the consensus, the SEVM lazily validates many transactions, updates the memory and storage, which consumes high CPU, memory and IO resources. Typically, high CPU usage slows down the SEVM which results in the increase of the pending list of transactions. Once a threshold of pending transactions are reached, we observe transaction drops. This was evident in our superblock implementation. 
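Returning to the per-block processing of §IV-C2, a minimal sketch of the commit loop could look as follows; the helper functions are placeholders standing in for the corresponding geth routines (updateBlockState, lazy validation, EVM execution and LevelDB writes), and their exact interfaces are assumptions rather than the actual CollaChain code. Handling one block of the superblock at a time is also what allows the resource-bottleneck discussion that follows to alternate between CPU-, memory- and IO-bound phases.

```go
package sketch

// Block and Superblock are simplified stand-ins for the structures decided
// by the consensus; the real SEVM persists geth tries in LevelDB.
type Block struct {
	Txs [][]byte
}

type Superblock struct {
	Blocks []Block
}

// commitSuperblock processes one block of the superblock at a time: each
// block is linked to its parent and stamped (updateBlockState), its
// transactions are lazily validated and executed, and its results are
// persisted before the next block is handled.
func commitSuperblock(sb Superblock) error {
	for i := range sb.Blocks {
		b := &sb.Blocks[i]
		updateBlockState(b) // parent pointer, header timestamp, tx count
		for _, tx := range b.Txs {
			if err := lazyValidate(tx); err != nil {
				continue // an invalid transaction is simply skipped
			}
			execute(tx) // EVM execution; state is reverted if it fails
		}
		if err := persist(b); err != nil { // write transaction/reception tries
			return err
		}
	}
	return nil
}

// Stubs so that the sketch compiles on its own.
func updateBlockState(b *Block)    {}
func lazyValidate(tx []byte) error { return nil }
func execute(tx []byte)            {}
func persist(b *Block) error       { return nil }
```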
We observed that consuming each resource one after another, for 10 proposed blocks with a total of 15,000 transactions, would lead to losing transaction requests even on our reasonably-provisioned AWS instances featuring 16 GB RAM and 4 vCPUs (Fig. 4). This is why we made the SEVM fully process one proposed block of the superblock at a time, allowing it to alternate frequently between CPU-intensive (verifying signatures and transaction executions), memory-intensive (state trie write) and IO-intensive (reception/transaction tries writes) tasks. Thanks to this optimized implementation of the superblock, the SEVM does not experience bottlenecks as the number of nodes increases (cf. §V-E). D. A BFT Consensus for SEVM As opposed to smart contract blockchains that decide (at most) one of the proposed blocks, CollaChain decides a superblock that results from its consensus system combining multiple proposed blocks into a single decision. In the ideal case, agreeing on a superblock thus allows committing Ω(n) blocks of distinct transactions at the end of a single consensus instance. 1) Peer-to-peer network: The peer-to-peer (P2P) network of the consensus system is implemented using golang's RPC package net/rpc. The consensus node reads the consensus network configuration from a yaml configuration file upon initialization. The configuration file contains the network size n, a port number p and a list of socket addresses specified in ip:port format. The consensus node sets up a gRPC server on port p for consensus messages. The list of socket addresses contains the gRPC endpoints of the consensus nodes. To prevent byzantine nodes from eavesdropping, all communications use TLS. We show that the overhead induced by the encryption layer of TLS is negligible in Fig. 5 of §V. 2) Increasing the decision size: The requirement of deciding at most one block is too restrictive to scale with the number n of consensus participants: whatever n is, the consensus decides at most one single block. As our goal is to scale with the number n of consensus participants, we allow CollaChain to decide a combination of all the Ω(n) proposed blocks to make a superblock (line 30). This helps ensure the Thousands-TPS property ( §III) as we explained before. Note that the same optimization was shown effective for Red Belly Blockchain [28] to scale to hundreds of consensus participants, but Red Belly only supports the Bitcoin scripting language and not smart contracts. The drawback of this superblock is that its size increases with the number n of participants, and so does its propagation time. To cope with arbitrary delays, we build our consensus upon DBFT [27] that is partially synchronous [37] and was recently proved correct via model checking [82], [12]. 3) The consensus protocol: The protocol is divided into two procedures, start_new_consensus at lines 24-30, which spawns a new instance of (multivalue) consensus by incrementing the replicated state machine index, and consensus_propose at lines 37-51, which ensures that the consensus participants find an agreement on a superblock comprising all the proposed blocks that are acceptable.
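Before the detailed walk-through below, here is a schematic Go sketch of the superblock agreement: one binary consensus instance per proposer index, started with true when that proposer's block was reliably delivered, and the superblock assembled from the indices decided to true. The synchronous shape and the decide callback are simplifications for illustration; the actual protocol relies on DBFT's reliable broadcast and binary consensus, and the line numbers in the text refer to pseudocode not reproduced here.

```go
package sketch

// combineSuperblock sketches the reduction used by consensus_propose:
// proposals[i] holds the block reliably delivered from proposer i (nil if
// not delivered yet) and decide(i, vote) stands for the i-th deterministic
// binary consensus instance, which returns the same decision at every
// correct node.
func combineSuperblock(proposals [][]byte, decide func(i int, vote bool) bool) [][]byte {
	n := len(proposals)
	accepted := make([]bool, n)
	for i := 0; i < n; i++ {
		// Vote true for indices whose block was reliably delivered,
		// false otherwise.
		accepted[i] = decide(i, proposals[i] != nil)
	}
	var superblock [][]byte
	for i := 0; i < n; i++ {
		// A full implementation waits for the reliable delivery of any
		// block whose instance decided true before assembling the
		// superblock; reliable broadcast guarantees it eventually arrives.
		if accepted[i] && proposals[i] != nil {
			superblock = append(superblock, proposals[i])
		}
	}
	return superblock
}
```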
The idea of consensus_propose builds upon classic reductions [9], [10] by executing an all-to-all reliable broadcast [14] to exchange n proposals, guaranteeing that any block delivered to a correct process is delivered to all the correct processes: any delivered proposal is stored in an array proposals at the index corresponding to the identifier of the broadcaster. The main difference is that these reductions use a probabilistic binary consensus algorithm while our binary consensus is deterministic. A binary consensus at index k is started with input value true for each index k where a block proposal has been recorded (line 35). To limit errors, CollaChain uses the formally verified deterministic binary consensus of DBFT [27]; we omit the pseudocode for the sake of space and refer the reader to the formal verification of the protocol [82], [12]. As soon as some of these binary consensus instances return 1, the protocol spawns binary consensus instances with proposed value false for each of the non-reliably-delivered blocks at line 44. Note that this invocation is non-blocking. As the reliable broadcast fills in the delivered blocks in parallel, it is likely that the blocks reliably broadcast by correct processes have been reliably delivered, resulting in as many invocations of the binary consensus with value true instead. Once all the n binary consensus instances have terminated, i.e., decidedCount == n at line 46, the superblock is generated with all the reliably delivered blocks for which the corresponding binary consensus returned true (lines 47-51). At the end of start_new_consensus, if the superblock of the consensus contains the block proposed, then this block is removed from the blockQueue at lines 28 and 29 to avoid reproposing it later. To cope with bribery attacks, CollaChain relies on the proof-of-stake (PoS) design common to other blockchains [50], [43] that assumes that users who have stake are more likely to behave correctly. Initially, the blockchain is set up with a membership smart contract that accepts a rotate method that outputs a random sample of n consensus participants among all potential blockchain participants with a preference for participants with the most assets, similar to a sortition [50]. Initially and periodically, the correct participants invoke the rotate function that outputs new consensus participants for the subsequent blocks. Traditional SSL authentication guarantees that the byzantine participants are ignored. As in Eth2 [43], neither does the system start nor is the rotate method invoked until sufficiently many participants exist. To incentivize participants, CollaChain can reward consensus participants just like Bitcoin's miners [71]; however, this reward has not been implemented. E. Proofs of correctness In this section, we show that CollaChain solves the blockchain problem (Def. 1). Note that the proofs that CollaChain also guarantees Lightweight-security and Thousands-TPS ( §III) follow directly from the protocols ( §IV-B and §IV-C1) and the experimental results ( §V). For the sake of simplicity in the proofs, we assume that there are as many nodes playing the roles of consensus nodes and state nodes, and one state node and one consensus node are collocated on the same physical machine. Lemma 1: If at least one correct node consensus-proposes to a consensus instance i, then every correct node decides on the same superblock at consensus instance i. Proof.
If a correct node p consensus-proposes, say v, to a consensus instance i, then p reliably broadcast v at line 40. By the reliable broadcast properties [14], we know that v is delivered at line 33 at all correct nodes. By assumption, there are at least 2f + 1 correct proposers invoking the reliable broadcast, hence all correct proposers eventually populate their block array with at least one common value. All correct proposers will thus have input true for the corresponding binary consensus instance at line 35. Now it could be the case that other values are reliably broadcast by byzantine nodes; however, reliable broadcast guarantees that if a correct proposer delivers a valid value v, then all correct proposers deliver v. By the validity and termination properties of the DBFT binary consensus [27], the decided value for the binary consensus instance at line 41 is the same at all correct nodes. It follows that all correct nodes have the same bit array of decBlocks values at line 49 and that they all return the same superblock at line 51 for consensus instance i. The next three theorems show that CollaChain satisfies each of the three properties of the blockchain problem (Definition 1). Theorem 1: CollaChain satisfies the safety property. Proof. The proof follows from the fact that any block B_ℓ at index ℓ of the chain is identical for all correct blockchain nodes due to Lemma 1. Due to network asynchrony, it could be that a correct node p1 is aware of block B_ℓ+1 at index ℓ + 1, whereas another correct node p2 has not created this block B_ℓ+1 yet. At this time, p2 maintains a chain of blocks that is a prefix of the chain maintained by p1. And more generally, the two chains of blocks maintained locally by two correct blockchain nodes are either identical or one is a prefix of the other. Theorem 2: CollaChain satisfies the validity property. Proof. By examination of the code at line 8, only valid transactions are executed and persisted to disk at every correct node. It follows that for all indices ℓ, the block B_ℓ is valid. Theorem 3: CollaChain satisfies the liveness property. Proof. As long as a correct replica receives a transaction, we know that the transaction is eventually proposed by line 27. The proof follows from the termination of the consensus algorithm [12] and the fact that CollaChain keeps spawning new consensus instances as long as correct replicas have pending transactions. V. EVALUATION OF COLLACHAIN In this section, we present the experimental evaluation of CollaChain, compare it against other blockchains ( §V-B), evaluate it when running across different continents ( §V-E) and with a Twitter DApp ( §V-F). A. Experimental setup We use up to 200 AWS virtual machines from 10 regions located in separate countries across 5 continents: Ohio, Mumbai, Seoul, Singapore, Sydney, Tokyo, Canada, Frankfurt, London, Paris, Stockholm, São Paulo. All machines run Ubuntu v18.04.3 LTS, golang v1.13.1. When not specified otherwise, the experiments consist of clients sending 1500 distinct transactions to each SEVM node, which exchanges with its respective consensus node to spawn a consensus instance. The client machines are of type c5.xlarge with 4 vCPUs and 8 GiB of memory, the SEVM nodes are of type c5.2xlarge with 8 vCPUs and 16 GiB of memory, and the consensus nodes are of type c5.4xlarge with 16 vCPUs and 32 GiB of memory.
As CollaChain is compatible with Ethereum, we reuse libraries of the JavaScript runtime environment Node.js: we create wallet addresses with ethereumjs-wallet, pre-sign transactions to transfer assets, upload or invoke a smart contract with ethereumjs-tx, and serialize these transactions before saving them to a JSON file. The client iterates through the serialized JSON file and sends the transactions to the SEVM using web3.eth.sendSignedTransaction through the web3.js javascript API over http. Hence, this offloads the encryption time from the performance measurement. All presented data points are averaged over at least 3 runs. B. Comparison with other blockchains Here we compare the performance of CollaChain to Quorum [22] from JP Morgan/Consensys and Concord [83] from VMware, which both support Ethereum smart contracts. While evaluating all blockchains is out of the scope of this paper, note that Table I provides a comparison of CollaChain to non-sharded blockchains while sharded blockchains are discussed in §V-G and §VI. We have also explored Burrow [61], which is unstable [78], and Ethermint, whose open issues [2] prevented us from evaluating it. Fig. 2. Comparison of throughput and latency between CollaChain (CC) and Quorum against sending rate. Fig. 3. Comparison of throughput and latency between CollaChain (CC) and Concord against sending rate. Figure 2 reports the latency and throughput of both CollaChain and Quorum. CollaChain outperforms Quorum both in terms of latency and throughput because Quorum does not use superblocks. Interestingly, we also observe that the performance of Quorum starts decreasing as the sending rate increases whereas the performance of CollaChain keeps increasing; this is seemingly due to the growing backlog of requests in Quorum that induces congestion. Unfortunately, we could not test a sending rate of 800 TPS and above as Quorum would start losing requests, which confirms previous observations [78]. Figure 3 compares the throughput and latency of Concord and CollaChain. As Concord suffers from known configuration issues [24] that prevented us from running it on a distributed system, we ran both Concord and CollaChain on a single c5.9xlarge machine with 4 client machines. Concord slightly outperforms CollaChain with low sending rates; however, as the sending rate increases, CollaChain outperforms Concord significantly. C. Effect of validation reduction using the SmartBank DApp In order to assess the impact of the validation optimization ( §IV-C) of the SEVM on the performance, we measured the time spent validating eagerly when running the SmartBank DApp that is part of BlockBench [34]. To this end, we instrumented its writeCheck function to measure both the total time ∆^n_SEVM spent treating k calls and the average time δ^n_SEVM spent by each SEVM server validating eagerly these calls on n nodes, to deduce the rest of the treatment time not affected by the validation optimization, β = ∆^n_SEVM − δ^n_SEVM. Based on this measurement, we could deduce the time δ_EVM the EVM would spend validating eagerly without the validation optimization: δ_EVM = n · δ^n_SEVM. In particular, regardless of n, we know that the EVM would spend ∆_EVM = β + δ_EVM to treat the function calls. By contrast, depending on n, the SEVM would spend ∆^n_SEVM = β + δ^n_SEVM. As δ^n_SEVM = δ_EVM/n, we know that lim_{n→∞} δ^n_SEVM = 0.
This means that, with n servers, the EVM slowdown compared to the SEVM is S_n = ∆_EVM/∆^n_SEVM − 1 = (β + δ_EVM)/(β + δ_EVM/n) − 1. As n tends to infinity, we thus have a slowdown of lim_{n→∞} S_n = δ_EVM/β. Our measurement obtained with k = 6000 transactions and n = 4 revealed that δ^n_SEVM = 0.61 seconds and ∆^n_SEVM = 5.66 seconds. Hence, we have β = 5.66 − 0.61 = 5.05. As n = 4, we have δ_EVM = 4 × 0.61 = 2.44 so that ∆_EVM = 5.05 + 2.44 = 7.49. This means that the EVM would take S = 32% more time than the SEVM to treat these DApp requests. Finally, as n tends to infinity, the slowdown of the EVM over the SEVM would become 48%. D. Storing the superblock efficiently To measure the impact on performance of storing each block separately, we implemented a naive method that stores the whole superblock at once. More precisely, we changed the storing loop ( §IV-C2) by simply removing the inner for loop at lines 5-15 that persisted one (sub-)block at a time, in order to persist the superblock once and for all. In this experiment, we set up a network of 2 client machines, 10 consensus machines and 10 SEVM machines where clients send 1500 distinct transactions to each SEVM node for a total of 15000 transactions. Fig. 4. Performance difference when processing each block of a superblock at a time (optimized) and when processing the entire superblock at once (non-optimized). Figure 4 compares the performance obtained with CollaChain (superblock optimized) and with the naive approach (superblock non-optimized). The throughput of CollaChain (superblock optimized) is 44% higher than the throughput of the naive approach. This is because trying to persist a large superblock that comprises 10 blocks leads to I/O congestion. One might argue that multi-threading block writes and transaction executions could also solve this issue. However, this is not possible as the execution and writing of blocks should happen sequentially. In addition, we observed that 3000 transactions get dropped, which represents 20% of all transactions, when executing the naive approach. This is due to CPU overload: executing a superblock of 10 blocks within a single loop iteration is more CPU intensive than executing one block per iteration because between two block executions the CPU resource can be allocated to other tasks. If the clients keep sending transactions while the CPU usage of the SEVM node reaches 100%, the SEVM starts dropping incoming transactions as soon as it cannot hold any more transactions. These results show the importance of optimizing the superblock storage so that CollaChain does not suffer transaction drops. E. World-wide scalability To evaluate the scalability of the performance of CollaChain, we deployed CollaChain in 10 regions spanning 5 continents: Canada, London, Mumbai, Oregon, Paris, São Paulo, Singapore, Stockholm, Sydney and Tokyo. As previously mentioned, we consider for simplicity that each participant is running both a consensus node and an SEVM node so that we can consider each participant as a single entity, out of all of which at most a third can be faulty. Figure 5 depicts the throughput without end-to-end encryption (w/o TLS) and with encryption (with TLS) of CollaChain as we run CollaChain on more and more machines: we start our experiment with 20 machines spread evenly in the 10 countries and add machines in groups of 20 evenly spread in the 10 countries until we reach 200 machines.
We observe that the throughput increases as we increase the number of nodes, from 1100 TPS at 20 machines to 2038 TPS at 200 machines, demonstrating the scalability of CollaChain even in a geo-distributed setting. The curve flattens out at large scale between 140 and 200 nodes, indicating that the gain in throughput obtained by adding more machines becomes lower and lower. This is due to the numerous machines consuming the available bandwidth. Finally, we observe, as expected, that the TLS encryption comes at a cost; however, this overhead is negligible in comparison with the overall performance, as the peak throughput with TLS (1960 TPS) is only 4% lower than the peak throughput without TLS (2038 TPS). Figure 6 shows the latency of transactions of CollaChain in the aforementioned geo-distributed environment as the number of nodes increases. We can observe that the latency increases with the number of nodes. We observe similar minimum latencies across all system sizes but the 99th percentile indicates that some requests can take much longer, especially at large scale: the transactions take less than 10 seconds to execute on up to 40 nodes while they take less than 40 seconds to execute at 200 nodes. It is important to note that these latencies can be viewed as the time for a transaction to become final: thanks to our deterministic byzantine fault tolerant consensus ( §IV-D), transactions are committed (and thus final) as soon as the consensus ends and the superblock is selected. This differs from classic blockchains [88], [55] whose consensus is reached after the block is appended and after more "block confirmations" occur. Interestingly, these increasing latencies do not prevent the throughput from scaling with the number of machines as we discussed earlier (Fig. 5). This is precisely due to the superblock optimization: as more machines participate, more blocks get proposed and running consensus takes more time, which increases the latency; however, the number of transactions decided per consensus instance also increases, which guarantees scalability. F. Twitter DApp evaluation To evaluate how fast CollaChain can treat smart contract invocations under a realistic workload, we ran the Twitter DApp of the DIABLO framework [11] on top of 4 consensus nodes and 4 SEVM nodes and report on the performance as time elapses. DIABLO is a benchmark suite for blockchains that features DApps written in different smart contract programming languages. It features a Twitter DApp written in Solidity whose smart contract sends 140-character messages following a burst workload experienced during the release of the Castle in the Sky anime. Figure 7 depicts the performance results obtained while running this Twitter DApp on top of CollaChain. To achieve the high burst Twitter workload of 143,000 TPS, we had to deploy as many clients as SEVM nodes. We can observe that CollaChain reaches its peak throughput of 4500 TPS under this workload. G. Linear speedup with sharding As we explain in the related work, sharding proved instrumental in boosting the performance of blockchains. The idea of sharding stems from distributed database research where a database table gets split into sub-tables; each sub-table is stored on a partition of machines called a shard. When a request on an entry of the table is issued, the shard managing this entry is responsible for handling this request. Hence, requests on distinct entries can execute in parallel on distinct shards, allowing performance to increase (ideally linearly) with the number of shards.
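As a generic illustration of this database-style sharding (and not of CollaChain's beacon-chain mechanism described next), a request can be routed to a shard by hashing the entry it touches, so that requests on distinct entries are handled by distinct shards in parallel:

```go
package sketch

import (
	"crypto/sha256"
	"encoding/binary"
)

// shardOf routes an account (the "table entry") to one of nShards shards by
// hashing its identifier; disjoint accounts thus land on independent shards
// whose consensus instances can run in parallel.
func shardOf(account string, nShards int) int {
	sum := sha256.Sum256([]byte(account))
	return int(binary.BigEndian.Uint64(sum[:8]) % uint64(nShards))
}
```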
In blockchain, there exists various ways of implementing sharding. With traditional blockchain sharding, each shard is responsible of a subset of the transactions and runs an independent consensus instance to agree upon the ordering of these transactions. This is the approach taken by Dfinity [55], Elastico [66], RapidChain [90] and Omniledger [59]. With verification sharding, every shard participates in the same consensus instance and stores the global state, however, each shard is responsible of verifying a different set of transactions. This is the approach taken by Red Belly Blockchain [29]. We implemented sharding on top of CollaChain using a traditional blockchain sharding. Shards can be spawned on demand by a dedicated built-in smart contract, hence resulting in a beacon chain with shard chains structure similar to the upcoming Ethereum 2 [43]. Participants can deposit some assets on the beacon chain in order to spawn a new shard, which creates a new blockchain instance where the participants have an account with a balance corresponding to the assets they deposited on the beacon chain-this typically allows participants to transact within the shard without having to transact on the beacon chain. Table II presents the performance obtained on up to 32 small machines with 2 vCPU and 8 GiB memory running Ubuntu 20.04 when we increase the number of shards. The beacon chain (1 shard) delivers 470.79 TPS but when coupled with two other shards (3 shards) it delivers 2.98× higher performance, which demonstrates a speedup very close to linear. This is no suprise given that each shard runs on separate machines and use distinct resources. Although sharding makes the implementation of cross-shard transactions quite complex, we could offer cross-shard transactions by decoupling these transactions into separate withdrawals an credits, as was previously suggested in Prism [85], for applications where the atomicity of transactions is not a requirement. VI. RELATED WORK To decentralize the computation from large data stores [32], [21], [26], various work focused on user/edge-centric computing [48]. Solid [67] distributes private data into pods whose user manages permissions. Lightweight middleware [58] exploit WebRTC to avoid downloading a blockchain. These solutions do not offer the execution transparency of blockchains, which is key to prevent user manipulations [75]. a) Payment blockchains: Some blockchains are designed for high transactions throughput at large scale, but were not designed to support DApps [50], [28], [80], [65], [54]. This is the case of ResilientDB [54] that exploits topology-awareness to parallelize consensus executions, the Red Belly Blockchain [28] that shares our superblock optimization or Mir [80] that deduplicates transaction verifications. Stellar [65] is an inproduction blockchain running in a geodistributed setting while Algorand [50] introduced the sortition our membership change builds upon. Although progresses are being made towards smart contract support, these blockchains do not run DApps. b) Fast smart contract executions: Solana [88] builds upon Proof-of-History (PoH) to reduce message overhead. 
Solana provides high performance by offering optimistic consensus, hoping that a single block gets notarized at each index, and thanks to the vertical scaling of its validator nodes: validator nodes feature 1TB SSD disk and 2 Nvidia V100 GPUs for benchmarking [1] and 128 GiB memory is required [4], which is twice as much as the most powerful machines we used in our experiments. c) Towards byzantine fault tolerant blockchains: Upperbounding the number f of byzantine failures allow to solve consensus to avoid forks. Ethereum comes with proof-ofwork and proof-of-authority (PoA) in the two mainstream Ethereum programs, called parity and geth. The idea of proofof-authority is to have a set of n permissioned validators, among which f can be malicious or byzantine [72], that generate new blocks [44], [73], [56]. Unfortunately, both proofof-authority protocols in parity and geth have recently been shown vulnerable to the attack of the clone when messages take longer than expected [39]. d) Tolerating unpredictable bounded delays: To cope with unpredictable message delays, blockchains cannot rely on synchrony. Cosmos [18], sometimes referred to as the Internet of Blockchain, is a network of interoperable blockchains that builds upon the Tendermint state machine replication [15]. Ethermint [45] is a blockchain that combines the partially synchronous Tendermint consensus protocol [16] with the EVM. Ethermint is still under active development [3] and we could not benchmark it. In particular, we found some issues that prevented us from deploying it like a nonce management limitation, which resulted in rejecting consecutive transactions sent in a short period of time [2]. Other researchers who managed to deploy an older version of Ethermint, reported a peak throughput of 100 TPS obtained with a single validator node [33], however, Tendermint reached 438 TPS [18]. Zilliqa [91] is a blockchain that supports smart contracts and reaches consensus with PBFT [19]. We are not aware of any performance evaluation of Zilliqa but its state machine, Scilla, executes non Turing complete programs but slower than the EVM when the state size increases [77]. Therefore, it is unlikely that it would yield higher throughputs than our CollaChain for large state sizes. Chainspace [5] introduced a distributed atomic commit protocol termed S-BAC for smart contract transactions. Coupled with the BFT-SMaRt [13] consensus protocol, Chainspace can support trustless use of DApps. However, it has only been able to achieve up to 350 TPS, offering a limited support for DApps. e) Evaluations of BFT blockchains: Quorum [22] is a blockchain that supports Ethereum smart contracts and reaches consensus with the Istanbul Byzantine Fault Tolerant (IBFT) consensus algorithm. Just like CollaChain, the byzantine fault tolerance of Quorum makes it well-suited for mobile devices to interact wth DApps securely without downloading the blockchain. Moreover, it seems that few optimizations could help it treat a large number of transactions per second [7]. Unfortunately, Quorum loses requests ( §V-B). SBFT [53] is a byzantine fault tolerant consensus algorithm that exploits threshold signatures to reduce the communication complexity of PBFT but commits, like PBFT, at most one proposed block per consensus instance. It was shown to reach consensus on 378 smart contract requests per second when deployed within one continent and 172 requests per second across multiple continents. 
Concord [83] is a blockchain that combines a lightweight C++ implementation of the EVM with SBFT; however, its publicly available version has open issues [24] that prevent it from being deployed on distinct physical machines. We nevertheless showed that Concord, although slower than CollaChain, reached the encouraging throughput of 1000 TPS on 4 nodes within the same physical machine. It could be the case that future versions will scale. f) Sharding: As we presented in §V-G, one can multiply the performance of a blockchain, including CollaChain, by adding more shards. Dfinity [55], coined as the Internet Computer, is an open permissioned blockchain. Dfinity scales horizontally thanks to its committees, with an assumed majority of correct members, that act like shards. It achieves high block production throughput thanks to concurrent execution of canisters, isolated pieces of code compiled to WASM that act as smart contracts and offer low latency to read requests. The difference with CollaChain is that a block produced is not necessarily final: a verifiable random function is used to rank block proposers and, if an adversarial one is ranked highest, it could propose conflicting blocks that are notarized, hence leading to a fork. Additional assumptions are needed for the nodes to agree on the chain with the highest block weight. CollaChain solves consensus before appending blocks. The move approach [46] moves accounts and computation from one smart-contract-enabled blockchain to another. The smart contract of the first blockchain is locked before any participant creates it in the second blockchain. This allows the throughput of the congested DApp CryptoKitties to scale with the number of shards. Eth2 [43] relies on a beacon chain and will feature 64 shard chains to improve the scalability of Ethereum. The uniqueness of the beacon chain guarantees a consistent view of the current state but cannot handle accounts and smart contracts. The validators of a shard chain do not need to download and run data for the entire network. PRISM [86] is a proof-of-work blockchain that shards the blockchain into m voter chains and exploits three types of blocks in a block tree. The voter blocks are used to vote for proposer blocks grouped per level in the block tree. Once a proposer block is elected, transaction blocks that are pointed to by the proposer block are committed. Prism peaks at 19K TPS by ignoring the eager validation completely, which exposes it to DoS attacks. To ensure the copy of the blockchain state is not corrupted, a user needs first to download the block headers, a time- and space-consuming task ill-suited for running DApps on handheld devices. VII. CONCLUSION CollaChain is a collaborative blockchain compatible with the largest ecosystem of DApps; it treats thousands of requests per second and scales to hundreds of machines world-wide. It builds upon recent advances in deterministic byzantine fault tolerance (BFT) consensus algorithms to avoid forks and offers finality without having to wait for block confirmations. Its key novelties lie in (i) having smart contract execution nodes collaborate to minimize validations and (ii) having consensus nodes collaborate to combine their block proposals into a committed superblock of smart contracts. Our experiments demonstrate that CollaChain is an appealing BFT middleware for individuals to exchange in a fully distributed fashion. We showed that one instance of CollaChain handles a peak throughput of 4500 TPS under a Twitter DApp.
We also showed how to interconnect different shard instances of CollaChain to scale almost linearly and to support potentially as many DApps as shards. This, combined with the ability of one instance (or shard) to scale to hundreds of nodes spread across 5 continents, makes CollaChain an appealing BFT middleware for DApp services. This new model departs from the centralization trend of sharing-economy services to offer more transparent and fault-tolerant services to individuals.
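As a concrete illustration of the cross-shard scheme sketched in §V-G, the toy Python snippet below decouples a transfer into a withdrawal committed on the source shard and a credit committed later on the destination shard. All class and function names here are hypothetical and are not part of CollaChain's codebase; the snippet deliberately omits consensus, signatures, and failure handling, and only shows why the two steps are not atomic.

```python
# Illustrative sketch only: a toy model of the withdraw-then-credit pattern
# used for non-atomic cross-shard transfers. Names are hypothetical and do
# not correspond to any real CollaChain API.

class Shard:
    """A toy shard holding a simple account -> balance map."""
    def __init__(self, name):
        self.name = name
        self.balances = {}

    def withdraw(self, account, amount):
        if self.balances.get(account, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[account] -= amount

    def credit(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount


def cross_shard_transfer(src, dst, account, amount):
    """Decouple a cross-shard transfer into a withdrawal and a credit.

    The two steps would be committed by two independent consensus instances,
    so the transfer is not atomic: if the credit never commits, the
    withdrawal must be compensated by application-level logic.
    """
    src.withdraw(account, amount)   # committed on the source shard
    dst.credit(account, amount)     # committed later on the destination shard


if __name__ == "__main__":
    beacon, shard_a = Shard("beacon"), Shard("shard-A")
    beacon.balances["alice"] = 100
    cross_shard_transfer(beacon, shard_a, "alice", 30)
    print(beacon.balances, shard_a.balances)  # {'alice': 70} {'alice': 30}
```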
Observational Constraints on the Physical Properties of Interstellar Dust in the Post-Planck Era We present a synthesis of the astronomical observations constraining the wavelength-dependent extinction, emission, and polarization from interstellar dust from UV to microwave wavelengths on diffuse Galactic sightlines. Representative solid phase abundances for those sightlines are also derived. Given the sensitive new observations of polarized dust emission provided by the Planck satellite, we place particular emphasis on dust polarimetry, including continuum polarized extinction, polarization in the carbonaceous and silicate spectroscopic features, the wavelength-dependent polarization fraction of the dust emission, and the connection between optical polarized extinction and far-infrared polarized emission. Together, these constitute a set of constraints that should be reproduced by models of dust in the diffuse interstellar medium. INTRODUCTION Interstellar dust is manifest at nearly all wavelengths of astronomical interest, scattering, absorbing, and emitting radiation from X-ray to radio wavelengths. Embedded in this diversity of phenomena are clues to the nature of interstellar grains: their size, shape, composition, and optical properties. A combination of astronomical observations, laboratory studies, and theoretical calculations has informed a picture of interstellar dust that consists of, at minimum, amorphous silicate and carbonaceous materials (see Draine 2003a, for a review). However, many questions remain as to the details of these components, e.g., their optical properties, porosity, purity, size distributions, shapes, and alignment, including whether the silicate and carbonaceous materials exist as distinct components or whether they are typically found in the same interstellar grains. The astronomical data which constrain models of interstellar dust are extensive and ever increasing in detail. Determinations of solid phase abundances define the elemental makeup and mass of interstellar dust grains per H atom. Interstellar extinction has been measured from the far-UV (FUV) through the mid-infrared (MIR), including a number of spectral features suggesting specific materials. Emission from dust grains heated by the ambient interstellar radiation field has been observed from the near-infrared (NIR) through the microwave. Additionally, anomalous microwave emission (AME), thought to arise from rapidly rotating ultrasmall grains, is seen at radio frequencies while extended red emission (ERE), attributed to fluorescence, is observed in the optical. Polarization has been detected in both extinction and emission, including in some spectral features, placing additional constraints on the shapes, compositions, and alignment properties of interstellar grains. With high-sensitivity far-infrared (FIR) imaging and polarimetry, the Planck satellite measured the properties of submillimeter polarized dust emission in unprecedented detail (Planck Collaboration Int. XIX 2015). The very high submillimeter polarization fractions and the observed characteristic ratios between polarized FIR emission and polarized extinction at optical wavelengths have posed serious challenges to pre-Planck dust models (Planck Collaboration Int. XIX 2015; Planck Collaboration XII 2018). It is imperative that these new findings guide the development of the next generation of models.
When presenting a new dust model, it has become customary to detail the set of observations that constrain it (e.g., Mathis et al. 1977;Draine & Lee 1984;Zubko et al. 2004;Draine & Fraisse 2009;Compiègne et al. 2011;Siebenmorgen et al. 2014;Guillet et al. 2018). Given the now vast array of observations that can be employed in calibrating and testing models, and given also the heterogeneity of the observations in terms of wavelengths covered and region observed, synthesizing a coherent set of model constraints can be as challenging as construction of the model itself. It is therefore the goal of this work to summarize the current state of observations constraining the properties of dust in the diffuse interstellar medium (ISM) and to establish a set of benchmark constraints against which models of interstellar dust can be tested. This paper is organized as follows: we first derive the solid phase abundances of the primary elemental constituents of dust in Section 2; then, we combine various observational data to derive the wavelength dependence of dust extinction (Section 3), polarized extinction (Section 4), emission (Section 5), and polarized emission (Section 6) for a typical diffuse, high-latitude sightline. Finally, we present a summary of these constraints in Section 7. ABUNDANCES The heavy elements that make up the bulk of the mass of grains are produced in stars which return material to the ISM via winds or ejecta. Some of the atoms remain in the gas while a fraction get locked in grains. Comparison of stellar and gas phase abundances of metals is thus an important observational constraint on grain models. The elements C, O, Mg, Si, and Fe are depleted in the gas phase and compose most of the interstellar dust mass. In addition, Al, S, Ca, and Ni are also depleted and constitute a minor but non-negligible fraction of the dust mass. A dust model should account for the observed depletions of each of these elements. Other elements (e.g., Ti) are also present in the grains, but collectively account for < 1% of the grain mass, and will not be discussed here. While gas phase abundances are determined directly from absorption line spectroscopy, inferring the solid phase abundances from these measurements requires determination of the total abundance of each element in the ISM. This is often done starting from the wellconstrained Solar abundances and applying a correction for Galactic chemical enrichment (GCE) during the ∼4.6 Gyr since the formation of the Sun. The protosolar values are presumed to reflect the abundances in the ISM at the time of the Sun's formation 4.6 Gyr ago. Present-day interstellar metal abundances are likely enhanced relative to these protosolar values. The chemical evolution model of Chiappini et al. (2003, Model 7) predicts the C, O, Mg, Si, S, and Fe abundances to be enriched by 0.06, 0.04, 0.04, 0.08, 0.09, and 0.14 dex, respectively, relative to the protosolar values. Bedell et al. (2018) estimated the chemical enrichment as a function of time by determining the elemental abundances in Solar twins of various ages. If we assume ∆[Fe/H] = 0.14 (Chiappini et al. 2003), their results imply present-day enrichments of 0.05, 0.11, 0.10, 0.08, 0.11, 0.09, 0.16, and 0.09 dex for C, O, Mg, Al, Si, S, Ca, Ni respectively, where in the case of C, we have taken the weighted mean of the determinations based on C i and CH. These results are summarized in Table 1. 
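To make the logarithmic enrichment corrections concrete, the short sketch below shows how an abundance quoted in ppm (atoms per 10^6 H atoms) is scaled by an enrichment expressed in dex; the protosolar value used here is a placeholder for illustration only and is not the value adopted in Table 1.

```python
# Illustrative only: applying a Galactic chemical enrichment correction in dex
# to a protosolar abundance. The protosolar value below is a placeholder, not
# the value adopted in Table 1 of this work.

def enrich(protosolar_ppm, delta_dex):
    """Return the present-day ISM abundance in ppm after enrichment by
    delta_dex, i.e., an increment in log10 of the abundance."""
    return protosolar_ppm * 10.0 ** delta_dex

if __name__ == "__main__":
    fe_protosolar_ppm = 30.0                 # placeholder protosolar Fe/H (ppm)
    print(enrich(fe_protosolar_ppm, 0.14))   # 0.14 dex enrichment -> ~41.4 ppm
```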
We apply the latter values to our reference protosolar abundances to define our reference ISM abundances, listed in the first column of Table 2. Interstellar abundances can also be inferred from observations of young stars. Studies of young (< 1 Gyr) F and G stars (Bensby et al. 2005; Bensby & Feltzing 2006) have yielded fairly concordant numbers for O, Mg, Al, Si, S, Ca, Fe, and Ni (see Lodders et al. 2009, for a review). The C abundance, however, appears somewhat lower than would be predicted from the solar abundances. On the other hand, Sofia & Meyer (2001) report C, O, Mg, Si, and Fe abundances obtained from young (≤ 2 Gyr) F and G stars that are in good agreement with the protosolar abundances plus enrichment, including the C abundance. Photospheric abundances have also been determined for B stars with mostly consistent results, as summarized in Table 1. However, the Si abundances determined from B stars are somewhat lower, with reported values of 18.8 ± 8.9 ppm (Sofia & Meyer 2001) and 31.6 ± 2.6 ppm (Nieva & Przybilla 2012). Likewise, the Fe abundances are lower than those based on solar abundances by ∼10 ppm. Different determinations of the interstellar metal abundances are not yet fully concordant, and the uncertainties quoted by any study using a specific class of objects may under-represent the underlying systematic uncertainties particular to that method. For the purposes of this work, we adopt abundances based on solar abundances plus enrichment as representative. Once the baseline interstellar abundances have been determined, absorption line spectroscopy can be employed to determine the quantity of each element missing from the gas phase due to incorporation into grains. Compiling data over a large number of sightlines and gas species, Jenkins (2009) defined a parameter F * that quantifies the level of depletion of all metals along that sightline. F * = 0.5, roughly the median depletion in the Jenkins (2009) sample, corresponded to sightlines with mean n H ∼ 0.3 cm −3 , appropriate for diffuse H i. Therefore, we adopt the gas phase abundances for F * = 0.5 as representative for the diffuse sightlines of interest in this work. In Table 2, we list the gas phase abundances of C, O, Mg, Si, S, Fe, and Ni corresponding to F * = 0.5 and the relations for each element derived by Jenkins (2009). For Al and Ca, we assume the level of depletion is the same as for Fe. With the ISM and gas phase abundances constrained, we take the difference to determine the solid phase abundances, which we list in Table 2. We estimate the error bars by adding in quadrature those from Table 1 and the errors on the gas phase abundances inferred from Jenkins (2009). Models of interstellar dust should account for the solid phase abundances presented here to within the observational and modeling uncertainties. Introduction Interstellar dust attenuates light through both scattering and absorption. "Extinction" refers to the sum of these processes, and the wavelength dependence of interstellar extinction forms a key constraint on the properties of interstellar grains. Because interstellar dust preferentially extinguishes shorter wavelengths in the optical, the effects of extinction are often referred to as "reddening." Extinction is typically measured in one of two ways. In the "pair method," the spectrum of a reddened star is compared to an intrinsic spectrum derived from a set of standard unreddened stars (e.g., Trumpler 1930; Bless & Savage 1972).
Alternatively, the stellar spectrum and the interstellar extinction can be modeled simultaneously with the aid of theoretical stellar spectra (e.g., Fitzpatrick & Massa 2005; Schultheis et al. 2014; Fitzpatrick et al. 2019). However, neither method readily yields the total extinction A λ = 2.5 log 10 (F int λ /F obs λ ), where F int λ = L λ /4πD 2 is the intrinsic (i.e., unreddened) flux, and F obs λ is the observed flux. However, if the wavelength dependence of the luminosity L λ is presumed to be known, the differential extinction between two wavelengths is independent of distance D. Most empirical extinction curves are thus expressed as the "selective" extinction relative to a reference bandpass or wavelength λ 0 and written as E(λ − λ 0 ) ≡ A λ − A λ0 . To remove the dependence on the dust column, this is often then normalized to the selective extinction between two reference bandpasses or wavelengths, classically the Johnson B and V bands, e.g., E(λ − V )/E(B − V ). The ratio of total to selective extinction, R V ≡ A V /E(B − V ), is commonly used to parameterize the shape of the extinction curve. As noted by many authors (e.g., Blanco 1957; Maíz Apellániz et al. 2014; Fitzpatrick et al. 2019), the use of bandpasses rather than monochromatic wavelengths to normalize extinction curves becomes problematic at high precision because the measured extinction in a finite bandpass depends not just on the interstellar extinction law but also on the intrinsic spectrum of the object. We therefore focus where possible in this work on spectroscopic or spectrophotometric determinations of the interstellar extinction law. Because we are principally interested in connecting observations to the properties of interstellar grains, we express our synthesized representative extinction law in terms of optical depth τ λ rather than A λ , which are related by A λ = 2.5 log 10 (e) τ λ ≈ 1.086 τ λ . X-Ray Extinction Although measurement of absolute extinction is usually not possible at X-ray energies, the differential extinction associated with X-ray absorption features can be determined spectroscopically. Such spectroscopic measurements have been made across the O K edge at 530 eV (e.g., Takei et al. 2002), the Fe L edge at ∼ 700-750 eV (e.g., Paerels et al. 2001; Lee et al. 2009), the Mg K edge at 1.3 keV (e.g., Rogantini et al. 2020), and the Si K edge at 1.84 keV (Schulz et al. 2016; Zeegers et al. 2017; Rogantini et al. 2020). Chandra and XMM-Newton both have sufficient spectral resolution to distinguish gas-phase absorption from extinction contributed by dust. X-ray spectra have been interpreted as showing that interstellar silicates are Mg-rich (Costantini et al. 2012; Rogantini et al. 2019), and Westphal et al. (2019) conclude that most of the Fe is in metallic form. While the absorption profile of the 10 µm silicate feature has been interpreted as giving a 2% upper limit on the crystalline fraction (see Section 3.7), X-ray observations of the Mg and Si K edges have been interpreted as showing that 11−15% of the silicate material is crystalline (Rogantini et al. 2019, 2020). Efforts to identify the specific minerals hosting the solid-phase C, O, Mg, Si, and Fe remain inconclusive because of not-quite-sufficient spectral resolution, limited signal-to-noise, and limited laboratory data. Scattering contributes significantly to the extinction (Draine 2003b), and therefore model comparisons depend not only on the composition of the dust, but also on the size and shape of the grains (Hoffman & Draine 2016). Future measurements of X-ray extinction and X-ray scattering (see Section 3.11.1) offer the prospect of mineralogical identification.
The key will be to interpret the observations using dust models together with all available observational constraints. UV Extinction Spectroscopy from the International Ultraviolet Explorer (IUE) has been one of the primary datasets for characterizing the interstellar extinction law in the UV since the 1980s (e.g., Witt et al. 1984a; Fitzpatrick & Massa 1986, 1988; Cardelli et al. 1989; Valencic et al. 2004). Other notable measurements of UV extinction have been made by the Copernicus satellite (e.g., Cardelli et al. 1989), the Orbiting and Retrievable Far and Extreme Ultraviolet Spectrometer (ORFEUS, Sasseen et al. 2002), the Hubble Space Telescope (HST; e.g., Clayton et al. 2003), and the Far Ultraviolet Spectroscopic Explorer (FUSE; e.g., Gordon et al. 2009). Extinction in the UV is characterized by a steep rise to short wavelengths, a prominent broad spectral feature at 2175Å (see Section 3.8.1), and a notable lack of other substructure (Gordon et al. 2009). Spectroscopic characterization of interstellar extinction from UV to optical wavelengths was recently undertaken by Fitzpatrick et al. (2019), who used HST Space Telescope Imaging Spectrograph (STIS) spectroscopy extending from 290-1027 nm to complement IUE UV data. Additionally, JHK photometry from the Two-Micron All Sky Survey (2MASS) was used to extend the analysis into the near-infrared. On the basis of these data toward a curated sample of 72 O and B stars, they derived a mean extinction law having A(5500Å)/E(4400Å − 5500Å) = 3.02, corresponding approximately to R V = 3.1. Because of the narrow-band observations, the resulting extinction curve is monochromatic and normalized using the extinction at 4400 and 5500Å rather than the Johnson B and V bands, respectively. We illustrate this curve in Figures 1 and 2. On the basis of UV, optical, and NIR data toward a sample of 45 stars studied in the UV by Fitzpatrick & Massa (1988), Cardelli et al. (1988) presented an analytic parameterization for the extinction between 3.3 and 8 µm −1 as a function of R V . This law was extended to the range 0.3 to 10 µm −1 by Cardelli et al. (1989). Combining IUE spectroscopy and 2MASS data along 417 lines of sight, Valencic et al. (2004) further refined this parameterization in the 3.3 to 8.0 µm −1 range (note corrected numbers in Valencic et al. 2014). We note, however, that the extinction law in this range was not formulated to join smoothly with the adjacent sections of the extinction law parameterized by Cardelli et al. (1989). Finally, Gordon et al. (2009) used the functional form presented in Cardelli et al. (1989) (note corrected numbers in Gordon et al. 2014) to fit 75 extinction curves measured with FUSE data from 3.3 to 11 µm −1 . We include the extinction laws of Cardelli et al. (1989), Valencic et al. (2004), and Gordon et al. (2009) in Figure 1. These extinction laws were derived in terms of E(λ − V )/E(B − V ) rather than monochromatic equivalents. Applying the correction factors to account for the finite bandpasses suggested by Fitzpatrick et al. (2019) (their Equation 4) results in curves that deviate more substantially from unity at 4400Å and zero at 5500Å than applying no correction. It is also the case that the R V = 3.1 curve using the Cardelli et al. (1989) parameterization does not precisely have A V /E(B − V ) = 3.1 (see discussion in Maíz Apellániz 2013). Given these issues, we simply assume that E(B − V ) corresponds exactly to E(4400Å − 5500Å) to convert the R V = 3.1 curves to monochromatic reddenings.
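For reference, the conversion between the normalized reddening laws discussed above and a normalized extinction law follows directly from the definitions of the color excess and of R V; the identity below is a restatement of those standard definitions, not a result specific to any of the studies cited.

```latex
% Standard conversion between a normalized reddening law and a normalized
% extinction law; this follows directly from E(lambda - V) = A_lambda - A_V
% and R_V = A_V / E(B - V).
\begin{equation}
  \frac{E(\lambda - V)}{E(B-V)} = \frac{A_\lambda - A_V}{E(B-V)}
  \quad\Longrightarrow\quad
  \frac{A_\lambda}{A_V} = 1 + \frac{1}{R_V}\,\frac{E(\lambda - V)}{E(B-V)} .
\end{equation}
```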
Sasseen et al. (2002) made a determination of the mean FUV (910-1200Å) extinction law using observations of eleven pairs of B stars with the ORFEUS spectrometer. This curve is also plotted in Figure 1 where, as with several of the other curves presented, we do not apply any corrections to translate from the reported E(λ−V )/E(B −V ) to monochromatic reddenings. While the shape of this curve is in general agreement with that of Cardelli et al. (1989) and Gordon et al. (2009), there is significantly less FUV extinction per E(B − V ). As Figure 1 illustrates, there is general agreement among extinction curves in the UV. The Fitzpatrick et al. (2019) and Gordon et al. (2009) curves correspond closely between 3.3 and 6 µm −1 , while that of Valencic et al. (2004) agrees better with Fitzpatrick et al. (2019) between 5 and 8 µm −1 . For our representative extinction curve, we therefore employ the Fitzpatrick et al. (2019) curve from 5500Å to 8 µm −1 , and then match onto the curve of Gordon et al. (2009) to extend to 11 µm −1 . This is accomplished by using the Cardelli et al. (1989) curve between 8 and 10 µm −1 . Optical Extinction While the extinction curve of the diffuse ISM has been well determined from UV to optical wavelengths over decades of observations, it is only recently that spectrophotometric observations have enabled detailed characterization at optical wavelengths. In this section, we compare determinations of the mean Galactic extinction curve from 500 nm to 1 µm. In addition to Fitzpatrick et al. (2019), a recent determination of the optical extinction law using spectroscopy is that of Maíz Apellániz et al. (2014), derived from stars in 30 Doradus and employing a parameterization based on that of Cardelli et al. (1989). The resulting curve is presented in Figure 2 alongside that of Fitzpatrick et al. (2019). While there is general consistency between the two extinction laws, there are also significant departures. As 30 Doradus is located in the Large Magellanic Cloud (LMC), the extinction has a contribution from the LMC dust which may differ systematically from that of the Galaxy. We therefore seek comparisons with other observations. Schlafly et al. (2016) determined the extinction toward 37,000 APOGEE stars in ten photometric bands from g (503.2 nm) to WISE 2 (4.48 µm). This wavelength coverage does not extend far enough blueward to apply the normalization used in Figure 1, and indeed Schlafly et al. (2016) note that different methods of extrapolating their extinction law to the B band yield R V s that differ by a few tenths. Thus, Figure 2 presents a different comparison using E(5500Å − 6410Å) as the normalization factor, roughly equivalent to E(V − R). Because of the explicit treatment of the bandpasses, the Schlafly et al. (2016) extinction curve is defined with respect to monochromatic wavelengths. From 500 to ∼ 800 nm, the Maíz Apellániz et al. (2014) and Schlafly et al. (2016) curves are in close agreement. We note that the Maíz Apellániz et al. (2014) extinction law defaults to that of Cardelli et al. (1989) at wavelengths longer than ∼ 800 nm. Indeed, Schlafly et al. (2016) note that the Cardelli et al. (1989) parameterization provides a poor fit to the infrared data for the full range of R V studied while the Maíz Apellániz et al. (2014) law is an excellent fit in the optical.
Wang & Chen (2019) employed Gaia parallaxes for a sample of more than 61,000 red clump stars in APOGEE to overcome the distance/attenuation degeneracy and derive a mean interstellar extinction law in 21 photometric bands. When expressed as color excess ratios E(λ − λ 1 )/E(λ 2 − λ 1 ), their mean curve agrees with that of Schlafly et al. (2016) to within a few percent over the full 0.5-4.5 µm wavelength range. Given these corroborating studies, we adopt the extinction law of Schlafly et al. (2016) from 550 nm to the IR. However, converting from E(λ−5500Å)/E(4400Å− 5500Å) to a quantity like A λ /A(5500Å) requires a measurement of the absolute extinction at some wavelength. This is because a single reddening law is consistent with a family of extinction laws that differ by an additive offset common to all wavelengths over which the reddening has been measured. The classic R V = 3.1 is relatively well-determined from the fact that the infrared extinction is much smaller than the optical and UV extinction, and so measurement of reddening relative to a NIR band, e.g., E(V − L)/E(B − V ), constrains any component common to all bands sufficiently well, i.e., E(V − L)/E(B − V ) ≈ R V . As determinations of the extinction curve are made at increasingly long wavelengths, the sensitivity to the size of this common component increases. We explore this issue in more detail in the following section. NIR Extinction The NIR extinction law from ∼1-5 µm is often approximated as a power law A λ ∝ λ −α . A foundational analysis of NIR extinction was made by Rieke & Lebofsky (1985), who measured extinction toward o Sco (A V = 2.7, Whittet 1988 Humphreys 1978;Torres-Dodgen et al. 1991), and several heavily reddened sources toward the Galactic Center (A V between 23 and 35). The widely-used extinction law of Cardelli et al. (1989) relies on the extinction curve determined by Rieke & Lebofsky (1985) at wavelengths longer than J band (λ J 1.23 µm), employing α = 1.61 for 0.91 < λ/µm < 3.3. Many other early determinations of α likewise found values in the range ∼ 1.6-1.85 (see the reviews of Draine (1989b) and Mathis (1990)). However, an analysis by Stead & Hoare (2009) demonstrated that the value of α derived from fits to extinction in the JHK photometric bands depends sensitively on how the bandpasses are treated, particularly for highly reddened sources. Accounting explicitly for these bandpass effects in sources of different intrinsic spectra and levels of reddening, and using photometry from both the United Kingdom Infrared Deep Sky Survey (UKIDSS) and 2MASS, they recommend a mean value of α = 2.15 ± 0.05, significantly larger than most earlier determinations. Recently, a similar study using 2MASS photometry found α = 2.27 with an uncertainty of ∼ 1% (Maíz Apellániz et al. 2020). While the power law approximation is both simple and effective, Fitzpatrick & Massa (2009) demonstrated that extinction between the I (λ I 0.798 µm)and K s (λ Ks 2.16 µm) bands is better represented by a mod-ified power law in which α increases between 0.75 and 2.2 µm. They proposed instead a function of the form with λ 0 = 0.507 µm. The fit values of γ varied considerably from sightline to sightline, ranging from ∼ 1.8-2.8, and the constant of proportionality was found to depend on R V . Schlafly et al. (2016) found excellent agreement in the NIR between this parameterization with γ 2.5 and their mean extinction law. 
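As a rough, single-wavelength illustration of this degeneracy (ignoring the bandpass integration that the studies above perform), one can invert a pure power law A λ ∝ λ −α using only the nominal effective wavelengths of the H and K s bands; the wavelengths in the sketch below are approximate and the result is only indicative.

```python
# Rough illustration of how sensitive the inferred NIR power-law index alpha
# is to the assumed absolute extinction ratio A_H/A_Ks. This ignores bandpass
# integration entirely; the effective wavelengths are approximate.
import math

LAMBDA_H, LAMBDA_KS = 1.63, 2.16  # approximate effective wavelengths (microns)

def alpha_from_ratio(a_h_over_a_ks):
    """Invert A_lambda proportional to lambda**-alpha between H and Ks."""
    return math.log(a_h_over_a_ks) / math.log(LAMBDA_KS / LAMBDA_H)

for ratio in (1.55, 1.75, 1.87):
    print(f"A_H/A_Ks = {ratio:.2f}  ->  alpha ~ {alpha_from_ratio(ratio):.2f}")
# A_H/A_Ks = 1.55 gives alpha ~ 1.6, while 1.87 gives alpha ~ 2.2, illustrating
# why the adopted A_H/A_Ks drives the inferred steepness of the NIR law.
```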
While this functional form captures flattening of the extinction law at the shortest wavelengths in this range, other studies have noted an apparent flattening of the NIR extinction law at the longest wavelengths as well, particularly in comparing the slope of the extinction curve between J and H (λ H 1.63 µm) to the slope between H and K s (e.g., Fritz et al. 2011;Hosek et al. 2018;Nogueras-Lara et al. 2020). Such behavior is not unexpected given indications of a relatively flat MIR extinction curve (see Section 3.6). The assumption of a power law can have dramatic effect on the conversion from reddening to extinction. If With a sample of 37,000 stars, Schlafly et al. (2016) made precise determinations of interstellar reddening in these bands. Inserting their measured reddenings into Equation 7 yields α = 2.30. As discussed in Section 3.5, however, a single reddening law is consistent with a family of extinction laws that differ by an additive constant. One method of placing a limit on this constant is to require the extinction in the longest wavelength band to be positive. Another more constraining method is to find the additive constant such that the ratio of the extinction in two bands agrees with a measured value. We find that α 2.30 between J and K can be achieved by employing the Schlafly et al. (2016) reddening law and imposing A H /A K = 1.87. However, this same reddening law is consistent with a (wavelength-dependent) logarithmic slope of ∼ 1.7 in the NIR when instead requiring A H /A K = 1.55 (as determined by Indebetouw et al. 2005). It is therefore unclear whether the large values of α found by Stead &Hoare (2009) andMaíz Apellániz et al. (2020) are indeed more physical due to the more careful treatment of the bandpasses or whether they are biased toward higher values of α by forcing extinction in the JHK bands to conform precisely to a power law. An independent constraint on the absolute extinction is needed to break this degeneracy. The default curve put forward by Schlafly et al. (2016) employs A H /A K = 1.55 as determined by Indebetouw et al. (2005). In that study, the absolute extinction was constrained along a diffuse sightline in the Galactic plane with = 42 • by measuring the extinction toward K giants, which are well-localized in color space, under the assumption that extinction per unit distance is constant in the Galactic plane. Wang & Chen (2019) used Gaia parallaxes to measure the reddening as a function of distance modulus toward a sample of more than 60,000 red clump stars. They found A H /A Ks = 1.75, noting agreement with Chen et al. (2018) who used 55 classical Cepheids to measure distance to the Galactic Center and derived A H /A Ks = 1.717. Photometry of red clump stars toward the Galactic Center has yielded relatively concordant values of ∼ 1.69 ± 0.03 (Nishiyama et al. 2006;Nagatomo et al. 2019), 1.76 ± 0.10 (Schödel et al. 2010), and 1.84 ± 0.03 . The steep NIR extinction laws implied by large values of A H /A K are difficult to reconcile with relatively flat extinction between 4-8 µm and comparisons between visual extinction and extinction in the 9.7 µm feature, as we discuss in the next section. The NIR extinction is sensitive to relative abundance of the largest interstellar grains, and so sightlines passing through molecular gas, where grains grow to larger sizes through coagulation, may have systematically different properties. It is unclear if this effect is responsible for the discrepancy between the observations of Indebetouw et al. 
(2005) on a relatively diffuse sightline and those toward the Galactic Center. Ultimately, on the basis of the observed properties of the MIR extinction, we adopt as our representative NIR extinction curve the reddening law of Schlafly et al. (2016) with A H /A K = 1.55 to convert to extinction. We present the resulting extinction law in Figure 3, where we compare it to the same reddening law derived assuming A H /A K = 1.75 instead. Further studies of NIR extinction along diffuse sightlines are needed to clarify the steepness of the interstellar extinction curve and its variations with environment. MIR Extinction The MIR extinction is dominated by continuum extinction between 3-8 µm and by the 9.7 and 18 µm silicate features longward of 8 µm. We focus here on the former, deferring discussion of the silicate features to Section 3.7. Carbonaceous MIR extinction features are discussed in Section 3.8.2. Some early determinations of the MIR extinction suggested a continuation of the NIR power law with a sharp minimum at 7 µm (e.g., Rieke & Lebofsky 1985;Bertoldi et al. 1999;Rosenthal et al. 2000;Hennebelle et al. 2001). However, a growing body of work suggests that the MIR extinction is relatively flat between ∼ 4 and 8 µm across a diversity of sightlines and values of R V . Sightlines toward the Galactic Center have been wellmeasured in extinction and were the first to suggest, via observation of hydrogen recombination lines, a flattening of the extinction law in the MIR (Lutz et al. 1996;Lutz 1999). Subsequent broadband and spectroscopic observations toward the Galactic center (Nishiyama et al. 2006(Nishiyama et al. , 2008(Nishiyama et al. , 2009Fritz et al. 2011) and the Galactic plane (Jiang et al. 2003(Jiang et al. , 2006Gao et al. 2009) have proven consistent with a relatively flat extinction law. Likewise, Flaherty et al. (2007) found good agreement with the Lutz et al. (1996) extinction curve when measuring the extinction toward nearby star-forming regions where the extinction was dominated by molecular gas. Observing in the dark cloud Barnard 59 (A K ∼ 7, A V ∼ 59), Román-Zúñiga et al. (2007) measured a 1.25-7.76 µm extinction law consistent with that of Lutz et al. (1996). We seek the properties of dust in the diffuse ISM, which may be systematically different from these more heavily extinguished sightlines. However, the relatively flat extinction law between ∼3 and 8 µm appears fairly universal. Combining Spitzer and 2MASS observations on an "unremarkable" region in the Galactic plane centered on = 42 • , b = 0.5 • , Indebetouw et al. (2005) derived a extinction curve in agreement with Lutz et al. (1996). Zasowski et al. (2009) derived an average extinction curve over 150 • in the Galactic midplane also using Spitzer and 2MASS photometry, finding excellent agreement with Indebetouw et al. (2005). Further, they note consistency between their result and extinction curves in low extinction regions in molecular clouds measured by Chapman et al. (2009). Wang et al. (2013) measured the IR extinction law in regions of the Coalsack nebula that sampled a range of environments from diffuse to dark, finding a relatively universal shape of the MIR extinction across environments. Xue et al. (2016) derived a relatively flat MIR extinction curve toward a sample of G and K-giants in the Spitzer IRAC bands, in agreement with recent studies and sharply discrepant with a deep minimum in the extinction curve at ∼ 7 µm. 
The Spitzer Infrared Spectrograph (IRS) enables spectroscopic determination of the extinction law from ∼ 5-37 µm. Employing IRS spectra toward a sample of five O and B stars, Shao et al. (2018) derived a relatively flat extinction curve between 5 and 7.5 µm. Also using IRS data, Hensley & Draine (2020) determined a nearly identical extinction curve toward Cyg OB2-12 in the 5-8 µm range. On the basis of these data, we conclude that a relatively flat extinction curve between ∼ 4-8 µm is universal and typical even of the diffuse ISM having R V ≈ 3.1, not just sightlines with large values of R V . We summarize a selection of these observations in Figure 4. It must be cautioned, however, that the conversion from reddenings to extinction in many of these studies was accomplished by assuming a power law form in the NIR, and thus uncertainty still remains in both the precise shape and amount of 4-8 µm extinction relative to the NIR. To create a composite extinction law, we join the Schlafly et al. (2016) curve (with A H /A K = 1.55) described in Section 3.5 to the extinction measured toward Cyg OB2-12 by Hensley & Draine (2020). The latter study presented a synthesized extinction curve by joining the measured 6-37 µm extinction inferred from Spitzer IRS measurements to the Schlafly et al. (2016) extinction law likewise assuming A H /A K = 1.55. As illustrated in Figure 4, this provides a good representation of other studies of extinction in the 4-8 µm range. As discussed in Section 3.5, A H /A K = 1.55 is low relative to several recent determinations, which favor a value of 1.75. On the other hand, the Schlafly et al. (2016) extinction law having A H /A K = 1.75 shows no evidence for flattening even out to 4.5 µm and implies lower 4-8 µm extinction relative to K band than inferred from a number of studies (see Figure 4). As we discuss in the following section, our adopted extinction curve has a value of A V /∆τ 9.7 = 20.0, at the upper end of the observed range (∼ 18.5 ± 2, Draine 2003a). Joining a representative MIR extinction profile to an NIR extinction law with a higher value of A H /A K would result in a larger A V /∆τ 9.7 , exacerbating this tension. More work is needed to fully reconcile the existing observations of NIR and MIR extinction, and we thus present our synthesized curve as only our current best estimate of the true interstellar extinction. Finally, we note that Schlafly et al. (2016) determined the interstellar extinction only in broad photometric bands and thus their resulting extinction curve does not contain spectral features. In contrast, Hensley & Draine (2020) used spectroscopic ISO-SWS data to determine the profile of the 3.4 µm spectroscopic feature toward Cyg OB2-12, which can be seen in Figure 4. (Figure 5 caption: The agreement between these profiles suggests a universality of the silicate feature throughout the ISM. Residual differences between the profiles may be attributable to different treatments of the underlying continuum extinction.) We discuss this and other spectroscopic features in greater detail in the following sections. Silicate Features In addition to smooth continuum extinction provided by the ensemble of interstellar dust grains, there are well-studied extinction features attributable to specific grain species. Prominent among these are features at 9.7 and 18 µm that have been identified with silicate material, the former arising from the Si-O stretching mode and the latter from the O-Si-O bending mode.
The 9.7 µm feature was discovered as a circumstellar emission feature (Gillett et al. 1968;Woolf & Ney 1969). Woolf & Ney (1969) demonstrated that the feature was consistent with the expected behavior of silicate material, a claim strengthened by the discovery of a second feature at 18 µm (Forrest et al. 1979). Subsequent observations have revealed that these features are not only found in circumstellar emission, but are also ubiquitous in absorption in the diffuse ISM (see, e.g. van Breemen et al. 2011). The sightline to the Galactic Center has enabled detailed study of both the 9.7 µm Smith et al. 1990;Kemper et al. 2004) and 18 µm features (McCarthy et al. 1980) by virtue of its substantial dust column. found that the V band extinction relative to the optical depth ∆τ 9.7 of the silicate feature at 9.7 µm has a value of A V /∆τ 9.7 = 9 ± 1. Kemper et al. (2004) employed ISO observations toward two carbon-rich Wolf-Rayet stars located toward the Galactic Center to derive the profile of the 9.7 µm silicate feature ∆τ λ /∆τ 9.7 µm , which we plot in Figure 5. With heavy visual extinction (A V 10.2 mag Humphreys 1978;Torres-Dodgen et al. 1991) and yet a lack of ice features, the sightline toward the blue hypergiant Cyg OB2-12 is ideal for studying extinction arising from the diffuse atomic ISM (Whittet 2015). The 9.7 µm silicate feature on this sightline was first observed by Rieke (1974), and subsequent observations have produced detailed determinations of the both the 9.7 and 18 µm silicate features (Whittet et al. 1997;Fogerty et al. 2016;Hensley & Draine 2020). In Figure 5, we compare the Cyg OB2-12 feature profile determined by Hensley & Draine (2020) to that of the Galactic Center (Kemper et al. 2004) and a sample of O and B stars (Shao et al. 2018). The agreement between these profiles corroborates other studies noting a relatively universal silicate feature profile in the diffuse ISM (e.g., Chiar & Tielens 2006;van Breemen et al. 2011). Interstellar dust models should therefore be compatible with this profile, which has FWHM 2.2 µm. As noted by Chiar & Tielens (2006), this average feature profile is narrower than the profile seen in emission toward the Trapezium region (FWHM 3.45 µm, Forrest et al. 1975), which was used to calibrate some models (e.g., Draine & Lee 1984). Dust models should also be able to reproduce the observed strength of the feature. The extinction curve we synthesize in this work has A 5500Å /∆τ 9.7 = 20.0. Comparing a variety of measurements toward Wolf-Rayet stars and toward Cyg OB2-12, Draine (2003a) suggested a mean value A V /∆τ 9.7 = 18.5 ± 2, consistent with our composite curve. Determination of the 18 µm feature profile is made difficult by uncertainty in the underlying continuum extinction (see discussion in van Breemen et al. 2011;Hensley & Draine 2020). ∆τ 18 /∆τ 9.7 is typically found to be of order 0.5 (Chiar & Tielens 2006;van Breemen et al. 2011;Hensley & Draine 2020). In performing model fits to the emission from Cyg OB-2 and its stellar wind, Hensley & Draine (2020) required that the extinction longward of 18 µm extrapolate to values estimated from the FIR emission with a functional form approximating the dust opacity law also inferred from FIR emission. Thus, while the 18 µm feature itself is difficult to isolate from the total extinction, the long wavelength behavior of the extinction curve synthesized here is both physically and empirically motivated and serves as a reasonable best estimate. 
Just as the presence of the 9.7 and 18 µm silicate features constrains grain models, the absence of certain features likewise informs our understanding of the composition of interstellar dust. The 11.2 µm feature arising from silicon carbide (SiC) is not observed to low detection limits, which appears to constrain the amount of Si in SiC dust to less that about 5% ). However, the SiC absorption profile is highly shape dependent, and irregularly shaped SiC grains could be abundant despite the non-detection at 11.2 µm. If the observed "shoulder" of the 9.7 µm feature is attributed to irregular SiC grains, as much as 9-12% of the interstellar Si could be in the form of SiC . Little substructure has been detected in the 9.7 µm silicate feature, indicating that the feature arises predominantly from amorphous rather than crystalline silicates. Toward Cyg OB2-12, Bowey et al. (1998) found minimal evidence for fine structure between 8.2 and 11.7 µm except a possible weak feature at 10.4 µm that may be attributable to crystalline serpentine. Measuring silicate absorption toward two protostars and finding a lack of fine structure, Demyk et al. (1999) determined that at most 1-2% of the mass of the silicates giving rise to the feature in star-forming clouds could be crystalline, whereas Kemper et al. (2005) estimated that at most 2.2% of the silicate mass in the diffuse ISM could be crystalline. On the basis of detections of the 11.1 µm feature from crystalline forsterite in many interstellar environments, Do-Duy et al. (2020) concluded that ∼ 1.5% of the silicate mass in the diffuse ISM is crystalline, which is consistent with previously derived upper limits. To the extent that the weak, broad 11.1 µm feature is present in the extinction toward Cyg OB2-12, it is implicitly included in the representative extinction curve we derive in this work. Carbonaceous Features The presence of extinction features arising from carbon bonds is well-attested in the diffuse ISM. We review here the extinction "bump" at 2175Å, the infrared extinction features, and the diffuse interstellar bands (DIBs). The 2175Å Feature As evidenced in Figure 1, a striking feature of the interstellar extinction curve is the "bump" at 2175Å. This feature was first discovered by Stecher (1965) and quickly identified with extinction from small graphite particles (Stecher & Donn 1965), although this identification is not universally accepted. As the backbone of a PAH is in many ways analogous to a graphite sheet, the 2175Å feature may be attributable to PAHs (Donn 1968;Draine 1989a;Joblin et al. 1992;Draine 2003a). Regardless of the carrier of the feature, a number of observational facts appear clear. First, the feature appears ubiquitous in the ISM, found over a wide range of E(B − V ) (Bless & Savage 1972;Savage 1975). Second, the feature is quite strong and therefore its carrier must be composed of one of the more abundant elements in the ISM-C, O, Mg, Si, or Fe (Draine 1989a). Third, the central wavelength of the feature is nearly invariant across many sightlines, though the width can vary dramatically (FWHM between 360 and 600Å) across environments (Fitzpatrick & Massa 1986). Finally, this feature is weaker, and in some cases absent, in sightlines toward the LMC (Fitzpatrick 1985;Clayton & Martin 1985;Fitzpatrick 1986;Misselt et al. 1999) and SMC (Rocca-Volmerange et al. 1981;Prevot et al. 1984;Thompson et al. 1988;Gordon et al. 2003). 
The consistency of the central wavelength across environments suggests that the feature is relatively insensitive to the grain size distribution, while its weakness in the SMC and LMC lends credence to the idea that it is associated with a specific carrier which may be underabundant in those environments. While graphite-like sheets, such as those found in PAHs, provide perhaps the most attractive explanation for the feature at present, it is not without difficulties. In particular, Draine & Malhotra (1993) demonstrated that graphite has difficulty explaining the observed width of feature by variations in the size and shape of the grains while simultaneously preserving the constant central wavelength. Alternative hypotheses, such as transitions in OH − ions in amorphous silicates (Steel & Duley 1987), onion-like carbonaceous composite materials (Wada et al. 1999), and hydrogenated amorphous carbon (Mennella et al. 1998;Duley & Hu 2012), provide ways to account for the feature without invoking graphite, though most of these models still attribute the feature to carbonaceous bonds. As of yet, no hypothesis offers a clear explanation for the simultaneous near-invariance of the central wavelength and substantial variation in the feature's width. Infrared Features An interstellar absorption feature at 3.4 µm was first discovered by Soifer et al. (1976) toward the Galactic Center source IRS7, though it was not until the nondetection of emission features at 6.2 and 7.7 µm along the same line of sight that its interstellar origin was appreciated (Willner et al. 1979). Wickramasinghe & Allen (1980) detected a pronounced 3.4 µm feature to-ward IRS7 as well as toward the M star OH 01-477, which they attributed to the CH stretch band. Detection of this feature toward Cyg OB-12 suggests that it is a generic feature of extinction from the diffuse ISM (Adamson et al. 1990;Whittet et al. 1997). Subsequent observations of the 3.4 µm feature revealed a complex profile, including a number of "subpeaks" at 3.39, 3.42, and 3.49 µm (Duley & Williams 1983;Butchart et al. 1986;Sandford et al. 1991). Sandford et al. (1991) demonstrated consistency between these features and C-H stretching in =CH 2 (methylene) and -CH 3 (methyl) groups in aliphatic hydrocarbons. These results were supported by the more extensive observations of Pendleton et al. (1994), who determined that diffuse ISM has a characteristic CH 2 to CH 3 abundance of about 2.0-2.5. Detailed comparison of the 3.4 µm feature to laboratory measurements of a range of materials yielded a close match with hydrocarbons with both aliphatic and aromatic characteristics (Pendleton & Allamandola 2002). A key prediction of the aliphatic hydrocarbon origin of the 3.4 µm feature is the presence of a 6.85 µm CH deformation mode. Tielens et al. (1996) identified this feature in an IR spectrum of the Galactic Center, confirming this hypothesis. Additionally, they identified features at 5.5 and 5.8 µm with C=O (carbonyl) stretching and a feature at 5.5 µm with metal carbonyls such as Fe(CO) 4 . Subsequently, Chiar et al. (2000) detected a 7.25 µm feature ascribed to a methylene deformation mode toward the Galactic Center. The 6.85 µm feature has been observed toward Cyg OB2-12 with the same strength relative to the 3.4 µm feature as seen toward the Galactic Center (Hensley & Draine 2020). Thus, the 6.85 µm feature also appears generic to extinction from the diffuse ISM. 
On the other hand, the 7.25 µm feature was not detected toward Cyg OB2-12, although a weak feature could not be completely ruled out. The hydrocarbon feature profiles toward the Galactic Center and Cyg OB2-12 are compared in Figure 6. The 3.47 µm subfeature of the 3.4 µm complex has been attributed to bonds between H and sp 3 bonded (diamond-like) C (Allamandola et al. 1992). This feature appears to be present in the spectrum of the Galactic Center (Chiar et al. 2013) and absorption in the vicinity of this feature is even stronger toward Cyg OB2-12 (Hensley & Draine 2020), as illustrated in Figure 6. While this suggests diamond-like C may be ubiquitous in both the dense and diffuse ISM, it is in conflict with the finding of Brooke et al. (1996) that the strength of the 3.47 µm feature is better correlated with the 3.1 µm H 2 O ice feature (absent toward Cyg OB2-12) than with the 9.7 µm silicate feature. (Figure 6 caption: We compare determinations of the hydrocarbon feature profiles toward the Galactic Center based on ISO-SWS spectroscopy (Chiar et al. 2000, 2013) and toward Cyg OB2-12, which employed both ISO-SWS and Spitzer IRS spectroscopy (Hensley & Draine 2020). Both sets of profiles have been normalized to the maximum optical depth in the 3.4 µm feature. Although these sightlines probe very different interstellar environments, the agreement is excellent aside from the 7.25 and 7.7 µm features, where the determinations are most uncertain.) Observations of these features on more sightlines are needed to clarify the evolution of hydrocarbons in the ISM. The attribution of the strong IR emission features to PAHs (see Section 5.2) implies the presence of aromatic features in the interstellar extinction curve as well as the observed aliphatic features. Observing eight IR sources, including two Galactic Center sources and Cyg OB2-12, with ISO-SWS spectroscopy, Schutte et al. (1998) detected a 6.2 µm absorption feature associated with aromatic hydrocarbons, which has a well-known corresponding emission feature. Subsequently, both the 3.3 and 6.2 µm aromatic features were detected in absorption toward the Quintuplet Cluster (Chiar et al. 2000, 2013), and Hensley & Draine (2020) reported detections of the 3.3, 6.2, and 7.7 µm aromatic features in absorption toward Cyg OB2-12. While there is a feature in the extinction curve toward the Galactic Center in the vicinity of 7.7 µm, Chiar et al. (2000) attributed it to the 7.68 µm feature from methane ice. Because of the detection on the iceless sightline toward Cyg OB2-12, we include it in Figure 6, but note that there are substantial observational uncertainties on the depth and width of the feature in both the Galactic Center and Cyg OB2-12 determinations. The strength of the 7.7 µm feature detected toward Cyg OB2-12 is, however, consistent with predictions of models for interstellar PAHs (Draine & Li 2007). As with the silicate features, carbonaceous features not observed in the diffuse ISM also constrain dust composition. Polycrystalline graphite is expected to have a lattice resonance in the vicinity of 11.53 µm (Draine 1984). Such a feature was not observed towards Cyg OB2-12 (Hensley & Draine 2020), though the weakness of the feature allowed only an upper limit of <160 ppm of C in graphite to be set. More stringent upper limits will require more sensitive data and possibly a sightline without contaminating H recombination lines.
Laboratory data suggest the presence of NIR features at 1.05 and 1.23 µm associated with ionized PAHs having 40-50 C atoms (Mattioda et al. 2005b,a). These wavelengths may be too short for even ultrasmall grains to produce strong emission features, but if present they should be observable in extinction (Mattioda et al. 2005b). However, we are unaware of any existing observational constraints on the presence or absence of these features. The Diffuse Interstellar Bands The diffuse interstellar bands are a set of numerous, relatively broad (hence "diffuse") interstellar absorption features that likely arise from molecular transitions. The first two DIBs λ5780 and λ5795 were noted as unidentified stellar absorption features (Heger 1922a,b), but their interstellar nature was not confirmed until Merrill (1936) found that the lines remained at fixed wavelength in a spectroscopic binary while the stellar lines exhibited the expected time-dependent oscillation. Subsequently, over five hundred DIBs have been identified, the vast majority of which have not been identified with a specific molecular carrier (Herbig 1995;Hobbs et al. 2009;Fan et al. 2019). The first definitive identification of a DIB carrier did not occur until 2015 when laboratory measurements demonstrated that C + 60 can reproduce the absorption features at 9632 and 9577Å (Campbell et al. 2015). Subsequent detection of the predicted 9428Å band has confirmed C + 60 as the carrier (Cordiner et al. 2019). Based on the observed DIB strength, it is estimated that C + 60 accounts for only ∼ 0.1% of the interstellar carbon abundance (Berné et al. 2017). The correlation between DIB strength and total reddening is non-linear (Snow & Cohen 1974) and varies among DIBs, suggesting that the various DIB carriers preferentially reside in different interstellar environments, e.g., atomic versus molecular gas (Lan et al. 2015). It is in principle possible to construct a representative spectrum for DIBs in diffuse H i gas assuming the empirical relations between DIB equivalent widths and N H i derived by Lan et al. (2015) for the set of 20 DIBs between 4430 and 6614Å considered in their study, but we do not pursue such an undertaking in this work. Other Features Although we have discussed a number of extinction features associated with specific materials found in diffuse interstellar gas, this inventory is incomplete, particularly as we push to weaker features. Indeed, Massa et al. (2020) recently presented evidence of "Intermediate Scale Structure," i.e., extinction features a few hundred to 1000Å wide, in the spectrophotometric extinction curves of Fitzpatrick et al. (2019). They identified two features at 4370 and 4870Å which both showed correlation with the strength of the 2175Å feature, and one feature at 6300Å which did not. Further, they argue that the reported "Very Broad Structure" (Whiteoak 1966) is actually a minimum between the 4870 and 6300Å features. These features affect the optical extinction at the 10% level, and we include them in our representative extinction curve only insofar as they are inherent in the mean extinction curves of Schlafly et al. (2016) and Fitzpatrick et al. (2019), which we employ over this wavelength range. It is expected that the amount of extinction on a given sightline scales linearly with the dust column density and, to the extent that dust and gas are well mixed, with the gas column density. 
This scaling is borne out observationally and is typically summarized by the quantity N H /E(B − V ), which appears roughly constant for the diffuse ISM. Using Lyα absorption measurements made by the Copernicus satellite for 75 stars within 3400 pc, Bohlin et al. (1978) derived a value of N H /E(B − V ) = 5.8 × 10 21 H cm −2 mag −1 . They noted that very few of their sightlines differ from this relation by more than a factor of 1.5. Lyα absorption studies with IUE by Shull & van Steenberg (1985) and Diplas & Savage (1994) derived similar N H i /E(B − V ) values of 5.2 and 4.9×10 21 H cm −2 mag −1 , respectively. Finally, Rachford et al. (2009) where both H i and H 2 were measured directly. Measuring the H i column density toward globular clusters using the 21 cm line, Knapp & Kerr (1974) and Mirabel & Gergely (1979) found N H i /E(B − V ) of 5.1 and 4.6×10 21 H cm −2 mag −1 , respectively. These values are also consistent with data from a similar study using RR Lyrae (Sturch 1969), all of which corroborate the values from H i absorption studies. However, employing 21 cm data from the Leiden-Argentina-Bonn (LAB) H i Survey (Kalberla et al. 2005) and the Galactic Arecibo L-band Feed Array (GALFA) Hi Survey (Peek et al. 2011) in conjunction with the reddening map of Schlegel et al. (1998), Liszt (2014) This is a factor of 1.4 higher than that found by Bohlin et al. (1978). Liszt (2014) noted that some previous determinations using H i emission are in good agreement with this higher value, particularly for E(B − V ) < 0.1. For instance, Heiles (1976) found E(B − V ) = (−0.041 ± 0.012) + N H i /(4.85 ± 0.36) × 10 21 cm −2 mag −1 , consistent with the higher value of Liszt (2014) when E(B − V ) < 0.1 due to the negative intercept. Likewise, Mirabel & Gergely (1979) required a negative intercept to fit their data, suggesting a change in behavior at low reddening. In a subsequent analysis, Lenz et al. (2017) correlated N H i measurements from the HI4PI Survey (HI4PI Collaboration et al. 2016) and maps of interstellar reddening as determined by Schlegel et al. (1998) over the diffuse, high-latitude sky. They found a characteristic N H i /E(B − V ) = 8.8 × 10 21 cm −2 mag −1 on these sightlines, with a systematic uncertainty of about 10%. Comparing 21 cm observations to stellar extinction along 34 sightlines with little molecular gas, Nguyen et al. (2018) found a compatible N H /E(B−V ) = (9.4 ± 1.6)× 10 21 cm −2 mag −1 (95% confidence interval) and that this relation persists to N H as high as 3 × 10 21 cm −2 . Using X-ray absorption to infer N H , Zhu et al. (2017) found a mean value of N H /A V = (2.08 ± 0.02) × 10 21 toward a sample of supernova remnants, planetary nebulae, and X-ray binaries across the Galaxy. For R V = 3.1, this corresponds to N H /E(B−V ) = 6.45×10 21 cm −2 mag −1 , intermediate between the Bohlin et al. (1978) and Lenz et al. (2017) values. The striking difference between these different determinations of N H /E(B −V ) is consistent with systematic variations of the dust-to-gas ratio in the Galaxy, with more dust per H atom in the Galactic plane and less at high Galactic latitudes. As we focus here on high latitude sightlines where the dust emission per H atom is best determined (see Section 5), we adopt the value N H /E(B − V ) = 8.8 × 10 21 cm −2 mag −1 of Lenz et al. (2017) as our benchmark. Scattering Extinction is the sum of two processes-absorption and scattering. 
The scattering properties of dust can be constrained by studying the surface brightness profile of scattered light around point sources and the spectrum of the diffuse Galactic light. However, both of these constraints involve simultaneous modeling of both the dust optical properties and the scattering geometry and are therefore difficult to incorporate self-consistently into the present analysis. We provide a brief overview below, but do not incorporate these observations into our final set of model constraints. X-ray Scattering Interstellar grains scatter X-rays through small angles (Overbeck 1965;Hayakawa 1970;Martin 1970), which can be observed as a "scattering halo" in X-ray images of point sources with intervening interstellar dust (Catura 1983;Mauche & Gorenstein 1986). The scattering is sensitive to both dust composition and size distribution, providing additional observational constraints that a grain model should satisfy. The angular extent of the scattering halo also depends on the location of the dust between us and the source. For Galactic sources (e.g., low-mass X-ray binaries), this introduces uncertainty when comparing models to observations. The best-studied case is GX 13+1 (Smith 2008). Valencic & Smith (2015) surveyed 35 X-ray scattering halos, and concluded that most could be satisfactorily fit by one or more dust models with size distributions having few grains larger than ∼ 0.4 µm. Extragalactic sources with intervening Galactic dust, the exact distance to which would be unimportant, would be optimal for testing dust models, but high signal-to-noise imaging of X-ray halos around reddened AGN is lacking. The scattering cross section for the dust grains is expected to show spectral structure near X-ray absorption edges (Draine 2003b). If this could be observed, it would provide a means to detect or constrain variations of grain composition with size. Costantini et al. (2005) reported spectral structure in the scattering halo around Cyg X-2. Features appear to be present near the O K, Fe L, Mg K, and Si K absorption edges, although the interpretation remains unclear. Future X-ray telescopes may enable more sensitive spectroscopy of scattering halos. A population of aligned, aspherical grains can produce observable asymmetries in an X-ray scattering halo (Draine & Allaf-Akbari 2006). Seward & Smith (2013) employed Chandra observations of Cyg X-2 to search for these asymmetries, but found the X-ray halo to be uniform in surface brightness to at least the 2% level. A detection of halo asymmetry has yet to be reported. Because X-ray scattering is sensitive to grain structure on small scales, X-ray halos can also provide constraints on grain porosity. Analyzing the Chandra observations of the Galactic binary GX13+1 of , Heng & Draine (2009) found that the small angle scattering from grains with porosity greater than 0.55 overpredicts the observed surface brightness in the core of the scattering halo. As the degree of compactness of interstellar grains remains a major unresolved question, ancillary data and analysis is needed to test the conclusions of Heng & Draine (2009). Diffuse Galactic Light Even in a dark patch of sky far from point sources, there is still light from emission in the ISM and from starlight that has been scattered off of dust grains. This "diffuse Galactic light" (DGL) was first detected in the photoelectric measurements of Elvey & Roach (1937), who derived a surface brightness of 5.6 mag per square degree at λ ≈ 4500Å. 
These results were corroborated by the photometric observations of Henyey & Greenstein (1941), who concluded that dust grains must have a relatively large albedo ω (0.3 < ω < 0.8) and be rel- atively forward scattering, having anisotropy parameter g ≡ cos θ (where θ is the scattering angle) greater than 0.65. Particles in the Rayleigh limit (i.e., small compared to the wavelength) have g ≈ 0, i.e., isotropic scattering, indicating that the scattering in the ISM is dominated by larger grains (radius a 0.1 µm). The conversion of measurements of the intensity of scattered light into constraints on the scattering properties of interstellar dust is challenging as it requires assumptions on the distribution of both sources and scatterers. Nevertheless, observations of the DGL from the optical to the UV have been used to constrain the wavelength dependence of both ω and g. Employing 1500-4200Å photometric observations from the Orbiting Astronomical Observatory (OAO-2) in 71 fields at varying Galactic longitude, Lillie & Witt (1976) found good agreement with earlier ground-based measurements of the DGL. They constrained ω and g through a radiative transfer analysis on an axisymmetric plane-parallel galaxy in which both dust and stars decrease exponentially with height above the disk, finding 0.3 < ω < 0.7 with indications of a minimum near 2200Å, coincident with the extinction bump (see Section 3.8.1). Except in this minimum where g attained values as high as 0.9, they found 0.6 < g < 0.7. The UV spectrometers aboard the two Voyager spacecraft were used to study dust scattering in the Coalsack Nebula by Murthy et al. (1994). They employed a simple scattering model assuming fixed g and single scattering only to infer the wavelength dependence of the dust albedo. Fixing ω = 0.5 at 1400Å, they computed the relative albedo at other wavelengths, finding little wavelength dependence aside from a modest increase toward shorter wavelengths. A follow-up analysis by Shalima & Murthy (2004) using a more sophisticated Monte Carlo model for the dust scattering determined the FUV dust albedo to be 0.4 ± 0.2. The Far Ultraviolet Space Telescope (FAUST) measured the diffuse UV continuum between 140 and 180 nm. Employing the 156 nm flux measurements from this experiment and a radiative transfer model that accounted for non-isotropic radiation fields and multiple scatterings, Witt et al. (1997) derived a FUV dust albedo of 0.45 ± 0.05 and g = 0.68 ± 0.10. The rocketborne Narrowband Ultraviolet Imaging Experiment for Wide-Field Surveys (NUVIEWS) measured the diffuse UV background at 1740Å. Using a 3D Monte Carlo scattering model based on that described in Witt et al. (1997), Schiminovich et al. (2001) constrained the dust albedo to be ω = 0.45 ± 0.05 and g = 0.77 ± 0.1. By correlating the spectra of SDSS sky fibers (i.e., spectra of the "blank" sky taken for calibration purposes) against the 100 µm dust emission measured by IRAS, Brandt & Draine (2012) measured the spectrum of the DGL between 3900 and 9200Å. Modeling the DGL scattering geometry with a plane-parallel exponential galaxy, they compared the observed spectrum to predictions from dust models. Their formalism could in principle be used to place constraints directly on ω and g, but we do not pursue such analysis here. We summarize these constraints on the dust albedo and asymmetry parameter in Figure 7. 
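For readers who wish to experiment with the kind of radiative transfer analysis described above, the single-parameter phase function introduced by Henyey & Greenstein (1941) is the standard way of encoding the scattering asymmetry g. The short sketch below is purely illustrative and is not code from any of the cited studies; the function names and parameter values are our own, with g and ω chosen to be representative of the diffuse-ISM determinations summarized here.

```python
import numpy as np

def henyey_greenstein(theta, g):
    """Henyey-Greenstein phase function, normalized so that its
    integral over all solid angles is unity; <cos(theta)> = g."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * np.cos(theta))**1.5)

def scattered_intensity(omega, g, theta):
    """Probability per unit solid angle that an incident photon is
    scattered into angle theta, for albedo omega and asymmetry g."""
    return omega * henyey_greenstein(theta, g)

# Representative diffuse-ISM values from the DGL studies above: albedo ~0.5,
# g ~0.5-0.8 (forward scattering).
theta = np.linspace(0.0, np.pi, 181)
norm = 2.0 * np.pi * np.trapz(henyey_greenstein(theta, 0.6) * np.sin(theta), theta)
print(f"normalization check (should be ~1): {norm:.3f}")
print("forward/backward ratio for g=0.6:",
      round(henyey_greenstein(0.0, 0.6) / henyey_greenstein(np.pi, 0.6), 1))
```

The strong forward/backward asymmetry for g of order 0.6 illustrates why grains comparable to or larger than the wavelength must dominate the scattering.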
Given the modeling uncertainties inherent in translating the DGL intensity to the scattering properties of interstellar dust, we do not at this time incorporate these data into our set of constraints. These limitations notwithstanding, it is clear that interstellar dust must have a UV/optical albedo of order 0.5 and be relatively forward scattering (g > 0.5).

Spatial Variation of the Extinction Curve

It is well established that there is not a single universal extinction curve that describes all regions of the ISM, but rather a variety of extinction curves typically parameterized by R_V (Johnson & Borgman 1963; Cardelli et al. 1989). For instance, measurements of extinction toward the Galactic Bulge have indicated R_V ≈ 2.5 (Udalski 2003; Nataf et al. 2013). Schlafly et al. (2016) found large-scale gradients in R_V, with a follow-up study indicating a possible dependence on Galactocentric radius such that the outer Galaxy has systematically higher R_V than the inner Galaxy (Schlafly et al. 2017). The magnitude of the variations in R_V, however, was relatively small (σ_RV = 0.18, Schlafly et al. 2016). Extinction in dark clouds differs systematically from the diffuse ISM due to the growth of grains by coagulation and the formation of ice mantles. We do not attempt to summarize the observed range of variations in this work, instead restricting our focus to the extinction curve of the local diffuse ISM having an average R_V ≈ 3.1 (Morgan et al. 1953; Schultz & Wiemer 1975; Sneden et al. 1978; Koornneef 1983; Rieke & Lebofsky 1985; Fitzpatrick et al. 2019).

POLARIZED EXTINCTION

Following the discovery that starlight is polarized (Hiltner 1949a,b,c; Hall 1949; Hall & Mikesell 1949, 1950), it was quickly realized that the origin of this polarization was selective extinction by aligned dust grains rather than inherent polarization of the stars themselves. Davis & Greenstein (1951) proposed a physical model of grain alignment whereby aspherical dust grains preferentially align with the local magnetic field. Our understanding of the alignment processes of dust grains has since undergone significant evolution (see Andersson et al. 2015, for a review), though it remains clear that observations of polarized extinction constrain the size, shape, composition, and alignment properties of interstellar dust. In this section we summarize observations of the polarized extinction, focusing upon its wavelength dependence, spectral features, and amplitude per unit reddening.

Wavelength Dependence

Initial observations of the polarized extinction from UV to NIR wavelengths (e.g., Behr 1959; Gehrels 1960; Coyne et al. 1974; Gehrels 1974; Serkowski et al. 1975) established a characteristic wavelength dependence of the polarized extinction that is often parametrized by the "Serkowski Law" (Serkowski 1973):

p_λ / p_max = exp[−K ln²(λ_max/λ)],   (8)

where p_λ is the polarization fraction of the two linear polarization modes and p_max is the maximum value of p_λ, occurring at wavelength λ_max. Serkowski (1971) prescribed the values K = 1.15 and λ_max = 0.55 µm.

Figure 8. We plot the wavelength dependence of the polarized extinction, normalizing to the peak polarization. We employ the Serkowski Law in the UV and optical with λ_max = 0.55 µm, and we match this smoothly onto a power law in the IR such that p_λ ∝ λ^−1.6. The solid line corresponds to a Serkowski Law parameter K = 0.87, while the shaded region illustrates the effects of varying K between 0.82 and 0.92, corresponding to the UV- and IR-optimized forms of the Wilking Law described by Whittet (2003).
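As a concrete illustration of Equation 8, the short sketch below evaluates the Serkowski Law for the parameters adopted in Figure 8 (K = 0.87, λ_max = 0.55 µm). It is a minimal numerical example with our own variable names, not code drawn from any of the cited works.

```python
import numpy as np

def serkowski(wavelength_um, p_max=1.0, lam_max=0.55, K=0.87):
    """Serkowski Law: p(lambda)/p_max = exp(-K * ln^2(lam_max / lambda))."""
    return p_max * np.exp(-K * np.log(lam_max / np.asarray(wavelength_um))**2)

# Polarization relative to the peak at a few representative wavelengths (microns).
for lam in (0.22, 0.44, 0.55, 1.25, 2.2):
    print(f"lambda = {lam:5.2f} um  ->  p/p_max = {serkowski(lam):.2f}")
```

The rapid falloff toward the infrared in this parameterization is relevant to the discussion below of the observed IR polarization exceeding the Serkowski extrapolation.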
Subsequent observations of polarized extinction revealed that the polarization peak becomes narrower (i.e., K increases) as λ_max increases (Wilking et al. 1980, 1982). This relation, known as the "Wilking Law," is parametrized by the linear relationship

K = c_1 λ_max + c_2,

where c_1 and c_2 are constants to be fit. Analyzing the polarized extinction from the U to K band, Whittet et al. (1992) derived values of c_1 = 1.66 µm^−1 and c_2 = 0.01. Employing UV polarimetry from the Wisconsin Ultraviolet Photo-Polarimeter Experiment (WUPPE), Martin et al. (1999) fit values of c_1 = 2.56 µm^−1 and c_2 = −0.59. As the former determination is a better fit to the observations from the optical to the IR, and the latter a better fit from the UV to the optical, Whittet (2003) recommended a "compromise fit" employing the mean of the two determinations, i.e., c_1 = 2.11 µm^−1 and c_2 = −0.29, yielding K = 0.87 for λ_max = 0.55 µm. For λ_max = 0.55 µm, all three parameterizations produce a similar polarized extinction law, as shown in Figure 8. Constraints on the polarized extinction law in the UV come almost entirely from WUPPE and the Faint Object Spectrograph on Hubble, and so while the Serkowski Law appears to describe interstellar polarization down to λ ≈ 1300 Å (Somerville et al. 1994), extrapolations to wavelengths shorter than were accessible by these instruments are uncertain. We therefore adopt 1300 Å as the shortest wavelength for our polarized extinction curve. Although the Serkowski Law (Equation 8) describes well the polarized extinction in the UV and optical, it underestimates the observed polarization in the infrared, particularly between ∼2 and 5 µm (Nagata 1990; Jones & Gehrz 1990). A compilation of determinations of the IR polarized extinction along the lines of sight to a number of molecular clouds observed by Hough et al. (1989) showed that the IR polarized extinction could be fit with a power law p_λ ∝ λ^−β with indices ranging from β = 1.5 to 2.0. With polarimetry extending from optical wavelengths to 5 µm, Martin et al. (1992) found that the ∼1-4 µm polarized extinction was well fit by a power law with index β = 1.6. Between 4 and 5 µm, however, the power law systematically underpredicted the observed polarization. The behavior of the IR polarized extinction is relatively robust to the variations that exist at optical and UV wavelengths.

Silicate Features

If the features in the interstellar extinction curve arise from aspherical, aligned grains, then these features should also produce polarized extinction. The polarization, or lack thereof, of interstellar extinction features therefore constrains the shape and alignment properties of dust of a specific composition. The 9.7 µm feature was first detected in polarization on the sightline toward the Becklin-Neugebauer (BN) Object in the Orion Molecular Cloud (Dyck et al. 1973), with a detection made toward the Galactic Center soon after. Subsequent observations of the BN Object have probed the frequency dependence of the polarization, including determination of the polarization profile of the 18 µm feature (Dyck & Lonsdale 1981; Aitken et al. 1985, 1989). Although the BN Object is well studied, its molecular environment does not likely typify the diffuse ISM.

Figure 9. A composite polarized extinction profile of the 9.7 µm silicate feature derived by Wright et al. (2002) from observations toward two Wolf-Rayet stars (WR 48a and WR 112). The extinction on these sightlines appears dominated by the diffuse ISM.
Smith et al. (2000) presented an atlas of spectropolarimetry for 55 sources between 8 and 13 µm and, for six of these, additional spectropolarimetric observations between 16 and 22 µm. Drawing on these data, Wright et al. (2002) constructed a typical polarization profile of the 9.7 µm silicate feature based on observations of the Wolf-Rayet stars WR 48a and WR 112. These sightlines were selected because the polarization appears dominated by interstellar absorption. However, both sightlines have H_2O ice features at both 3.1 and 6.0 µm (Marchenko & Moffat 2017) and so may differ in detail from purely diffuse sightlines. We present this composite polarization profile in Figure 9. We are unaware of any published observations that might typify the diffuse ISM along which both 10 µm and optical polarimetry have been obtained. Thus, we are unable to normalize the Wright et al. (2002) polarization profile relative to our polarized extinction curve discussed in Section 4.1.

Carbonaceous Features

Unlike the silicate features, the extinction features associated with carbonaceous grains have, with few exceptions, not been detected in polarization. The 3.4 µm feature is the strongest of the infrared extinction features associated with carbonaceous grains (see Section 3.8.2), and as such it is a natural observational target for assessing whether carbonaceous grains give rise to polarized extinction. Low-resolution spectropolarimetric observations of five Galactic Center sources by Nagata et al. (1994) yielded no discernible polarization feature near 3.4 µm, nor did high-resolution spectropolarimetric observations of GC-IRS7 by Adamson et al. (1999). A subsequent search for the 3.4 µm feature in polarization toward the young stellar object IRAS 18511+0146 likewise provided only upper limits (Ishii et al. 2002). However, the 9.7 µm silicate feature had not been measured along any of these sightlines, leading to ambiguity as to whether the lack of polarization was due to the carbonaceous grains themselves or to the magnetic field geometry along the line of sight. This ambiguity was settled by spectropolarimetric observations along two lines of sight in the Quintuplet Cluster that had existing polarimetric measurements of the silicate feature. Finding no evidence of polarization in the 3.4 µm feature, that study concluded that the carbonaceous grains responsible for the feature are much less efficient polarizers than the silicate grains. Subsequent spectropolarimetric observations of the Seyfert 2 galaxy NGC 1068 yielded no detectable feature at 3.4 µm (Mason et al. 2007), supporting these conclusions in a markedly different interstellar environment and further challenging dust models invoking grains with silicate cores and carbonaceous mantles (see discussion in Li et al. 2014). On the basis of these nondetections, it appears that Δp_3.4/Δp_9.7 < 0.03. The 2175 Å feature is a second natural candidate to examine for dichroic extinction arising from carbonaceous grains. Initial WUPPE results suggested excess polarization between 2000 and 3000 Å on several sightlines, with more detailed modeling suggesting that the excesses toward HD 197770 and HD 147933-4 (ρ Oph A and B) did in fact arise from the 2175 Å feature (Clayton et al. 1992; Wolff et al. 1997). However, if the 2175 Å feature had the same strength relative to the continuum polarized extinction along all lines of sight, then other detections should have been made, e.g., toward HD 161056.
The sightlines toward HD 197770 and HD 147933-4 do not betray any unusual behavior in other respects (e.g., the wavelength dependence of the polarization, the extinction curve, etc.), leading Wolff et al. (1997) to conclude that there are sightline-to-sightline variations in the polarizing efficiency of the grains responsible for the 2175 Å feature. It is difficult to draw definitive conclusions on the basis of two detections (and ∼30 non-detections), emphasizing the need for observations of UV polarization on more sightlines. Particularly now that synergy is possible with observations of FIR polarized emission, this effort promises to enhance our understanding of both grain composition and alignment.

Interstellar dust grains rotate rapidly with angular momentum preferentially parallel to the local magnetic field. The short axis of each grain tends to align with the angular momentum, and hence is preferentially parallel to the magnetic field. When the line of sight is parallel to the magnetic field, grain rotation eliminates any net polarization. In contrast, the polarization is greatest when the magnetic field is in the plane of the sky. Dust models should reproduce the intrinsic polarizing efficiency of dust grains, and so we focus here on the case of maximal polarization. For dust extinction, this has typically been quantified as the maximum V-band polarization per unit reddening, i.e., [p_V/E(B−V)]_max. Serkowski et al. (1975) used a sample of 364 stars of various spectral types to derive [p_V/E(B−V)]_max = 9% mag^−1. While individual stars and regions were occasionally found to have p_V/E(B−V) exceeding this upper envelope (e.g., Whittet et al. 1994; Skalidis et al. 2018), it was ambiguous whether dust on these sightlines was atypical or whether the upper envelope had been underestimated. With full-sky polarimetric measurements of dust emission, the Planck satellite facilitated a detailed comparison between polarized emission in the FIR and polarized extinction in the optical, finding a remarkably linear relation between the submillimeter polarization fraction p_S and p_V/E(B−V) (Planck Collaboration Int. XXI 2015; Planck Collaboration XII 2018, see Section 6.2). Given this relationship, the observed p_S ≳ 20% in some regions implies p_V/E(B−V) ≈ 13% mag^−1, leading Planck Collaboration XII (2018) to conclude that the classic envelope of 9% mag^−1 should be revised. Panopoulou et al. (2019) employed R-band RoboPol observations of 22 stars in a region with p_S ≳ 20% to find that, indeed, the starlight was polarized in excess of p_V/E(B−V) = 9% mag^−1, perhaps even exceeding 13% mag^−1. Further, UBVRI polarimetry of six of the 22 stars indicated a typical Serkowski Law in this region, suggesting that the dust on these sightlines is not atypical. Given these recent observational results, we require that dust models reproduce p_V/E(B−V) = 13% mag^−1, and we normalize our polarization profile to this value.

EMISSION

In this section we review observations of emission from interstellar dust from the infrared to the microwave, focusing in particular on the emission per unit H column density characteristic of typical diffuse sightlines.

IR Emission

In radiation fields typical of the diffuse ISM, the bulk of the dust grains are heated to ∼20 K and therefore emit thermally in the far-infrared. These wavelengths are largely inaccessible from the ground, necessitating balloon- and space-based observations.
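As a rough illustration of why ∼20 K grains emit in the far-infrared, the following back-of-the-envelope sketch (ours, assuming only a blackbody spectrum and Wien's displacement law) locates the peak of the Planck function B_λ for plausible diffuse-ISM grain temperatures.

```python
# Peak wavelength of a blackbody at T ~ 20 K via Wien's displacement law.
WIEN_B = 2.898e-3  # m K, displacement constant for the peak of B_lambda

for T in (15.0, 20.0, 25.0):  # plausible diffuse-ISM grain temperatures
    lam_peak_um = WIEN_B / T * 1e6
    print(f"T = {T:4.1f} K  ->  B_lambda peaks near {lam_peak_um:.0f} microns")
```

For 20 K the peak lies near 145 µm, firmly within the far-infrared coverage of the instruments discussed next.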
The DIRBE and FIRAS instruments aboard the Cosmic Background Explorer (COBE) constrained the spectrum of the diffuse ISM from 3.5 to 1000 µm. In addition to confirming the presence of PAH emission near 3.5 and 4.9 µm, Dwek et al. (1997) derived the H i-correlated SED of dust in the diffuse ISM. We plot this SED in Figure 10. We note that these data were color corrected assuming a source spectrum with constant λI λ across the band. Prior to the release of the Planck dust maps, several studies synthesized the existing data from COBE and WMAP to produce self-consistent dust SEDs. Paradis et al. (2011) extracted an area of the sky with |b| > 6 • and a FIRAS 240 µm intensity greater than 18 MJy sr −1 , corresponding to a sky fraction of 13.7%. Compiègne et al. (2011), also seeking a composite dust SED in which the emission in each band was determined over the same region of the sky, combined DIRBE, FI-RAS, and WMAP observations at high Galactic latitudes (|b| > 15 • ) and low H i column densities (N H i < 5.5 × 10 20 cm −2 ). The differences between these SEDs and that of Dwek et al. (1997) are minor at their overlapping wavelengths. The Planck satellite made sensitive measurements of the FIR-submillimeter dust emission over the full sky. Combining the Planck data with WMAP and DIRBE and correlating with H i emission measured by the Parkes 21 cm survey, Planck Collaboration Int. XVII (2014) constructed a mean SED of the diffuse ISM (N H i ∼ 3×10 20 cm −2 ) from infrared to microwave wavelengths, which we plot in Figure 10. Following Planck Collaboration Int. XXII (2015), we apply a correction of 1.9, -2.2, and -3.5% to the 353, 545, and 857 GHz bands, respectively, due to updates in the Planck bandpass determinations subsequent to the work of Planck Collaboration Int. XVII (2014), and an additional 1.5% upward correction to the 353 GHz band following Planck Collaboration XI (2018). We color correct these data using the tables in Planck Collaboration Int. XVII (2014) to express the SED in terms of monochromatic intensities at the reference frequencies and thus facilitate direct comparison to models. Recently, Planck Collaboration Int. LVII (2020) correlated the Planck 545 GHz dust amplitude maps from the NPIPE data processing pipeline with H i4PI maps (HI4PI Collaboration et al. 2016) filtered to retain only H i velocities between ±90 km s −1 (Lenz et al. 2017). They found λI λ /N H = 7.74 × 10 −27 erg s −1 sr −1 H −1 at 545 GHz, slightly higher than but consistent with the value from Planck Collaboration Int. XVII (2014) quoted in Table 3. The use of H i correlation to separate the Galactic dust emission from other components becomes increasingly unreliable at low frequencies where these other components, such as free-free and synchrotron, can have non-zero correlation with H i. Planck Collaboration Int. XXII (2015) derived a microwave dust SED by correlating emission in the lower frequency Planck bands with the 353 GHz emission. We plot this SED in There is evidence that the shape of the dust SED is not uniform across the sky and indeed varies systematically with the strength of the radiation field that heats the dust. Planck Collaboration Int. XXIX (2016) explored this relationship by fitting the dust model of Draine & Li (2007) to full-sky maps of infrared dust emission. They then normalized these SEDs to the observed optical extinction based on SDSS observations of more than 250,000 quasars. 
At 353 GHz, the median SED has an intensity per A_V of 0.92 MJy sr^−1 mag^−1, while Planck Collaboration Int. XVII (2014) measured a 353 GHz intensity per hydrogen atom of 3.9 × 10^−22 MJy sr^−1 cm^2 H^−1. Taking these at face value implies A_V/N_H = 4.2 × 10^−22 mag cm^2. In contrast, from our adopted N_H/E(B−V) = 8.8 × 10^21 cm^−2 mag^−1 (see Section 3.10) and R_V = 3.1 (see Section 3.4), we compute A_V/N_H = 3.5 × 10^−22 mag cm^2. Green et al. (2018) found that the Planck Collaboration XI (2014) reddening map calibrated on SDSS quasars overpredicted stellar reddenings by a factor of ∼1.25 at intermediate latitudes, suggesting these discrepancies are rooted in the reddening calibration. We therefore correct the SEDs per A_V of Planck Collaboration Int. XXIX (2016) upward by 25% when comparing them to other determinations. In Figure 10, we plot the range of dust SEDs over different values of the radiation field strength from Planck Collaboration Int. XXIX (2016). While there are expected systematic variations in the individual SEDs, the range is consistent with the other determinations within the uncertainties. The systematic variations of the dust SED with the radiation field may encode information about the evolution of dust properties in different environments (Fanciullo et al. 2015). We adopt as a dust model constraint the dust SED of Planck Collaboration Int. XVII (2014) based on H i correlation from the 100, 140, and 240 µm DIRBE bands, which overlap with the SED of Dwek et al. (1997), down to the 353 GHz Planck band. Given the known issues with H i correlation at low frequencies, we adopt the SED of Planck Collaboration Int. XXII (2015) from the Planck 217 GHz band to the WMAP 23 GHz band, normalizing to the measured 353 GHz intensity per H atom derived by Planck Collaboration Int. XVII (2014). At the lowest frequencies, the dust emission is dominated by the anomalous microwave emission (AME), which we discuss in Section 5.3. Our adopted dust SED is presented in Table 3, where we have color corrected all data to facilitate direct comparisons to models.

Figure 11. In violet we plot the Spitzer IRS spectrum of the translucent cloud DCld 300.2-16.9 (B) as determined by Ingalls et al. (2011), where we have noted the locations of rotational H_2 lines. In black we plot the combined Spitzer and Akari spectrum of the star-forming SBb galaxy NGC 5992 (Brown et al. 2014), which has been corrected for starlight emission by subtraction of a 5000 K blackbody. We also indicate the strong emission lines present in the spectrum. In red, we plot the H i-correlated dust emission as seen by DIRBE (Dwek et al. 1997), which we use to normalize the PAH emission spectra.

Infrared Emission Features

The mid-IR emission from dust is characterized by prominent emission features at 3.3, 6.2, 7.7, 8.6, 11.3, 12.0, 12.7, and 13.55 µm (see Figures 10 and 11). First observed in the 1970s (e.g., Gillett et al. 1973; Merrill et al. 1975), these features were subsequently identified as vibrational modes of PAHs (Leger & Puget 1984; Allamandola et al. 1985). As grains must be heated to quite high temperatures in order to excite these modes (T ≳ 250 K), the carriers must be small enough to be heated through the absorption of a single photon. This process can bring small grains to temperatures in excess of 1000 K. The width and ubiquity of these emission features make it implausible that they are due to a single species of PAH. Rather, they represent the aggregate emission from a diverse population of PAH-like molecules.
The 3.3 µm feature, also observed in extinction (see Section 3.8.2), has been identified with the aromatic C-H stretching mode; C-C stretching modes account for the 6.2 and 7.7 µm features, which have also been observed in extinction (see Section 3.8.2); the C-H in-plane bend-ing mode gives rise to the 8.6 µm feature, while the C-H out-of-plane bending mode produces the 11.3, 12.0, 12.7, and 13.5 µm features depending on whether one, two, three, or four hydrogen atoms are adjacent to the bond, respectively. A detailed summary of the features and their corresponding modes can be found in Allamandola et al. (1989) and Tielens (2008). The strength of these features suggests that a substantial amount of interstellar carbon must reside in the grains giving rise to this emission. The dust model of Draine & Li (2007) required 4.7% of the total dust mass to reside in PAHs with fewer than 10 3 carbon atoms, which accounted for ∼ 10% of the total interstellar carbon abundance. In addition to aromatic features associated with PAHs, the aliphatic 3.4 µm feature has also been observed in emission (e.g., Geballe et al. 1985;Sloan et al. 1997), though it is typically much weaker than the 3.3 µm aromatic feature. Comparing the strengths of these two features and assuming the 3.4 µm feature arises solely from aliphatic carbon, Li & Draine (2012) concluded that no more than about 10% of the carbon in grains giving rise to these emission features can be in an aliphatic bond. However, it should be noted that anharmonicity in the aromatic 3.3 µm C-H stretching mode may also contribute to the emission at 3.4 µm (Barker et al. 1987;Li & Draine 2012), further reducing the abundance of the aliphatic component. Using Spitzer IRS, Ingalls et al. (2011) made spectroscopic measurements between 5.2 and 38 µm of several regions in the translucent cloud DCld 300.2-16.9. In addition to detecting IR H 2 transitions, these measurements provide a reasonable proxy for the PAH emission in the diffuse ISM. We plot the spectrum of their sightline "B" in Figure 11, where we have noted the observed H 2 lines. Combining spectroscopy from Spitzer and Akari, along with ancillary data from the UV to the IR, Brown et al. (2014) presented an atlas of 129 galaxy SEDs spanning a range of galaxy types. We focus on their 2.5-34 µm spectrum of NGC 5992, a star-forming SBb galaxy. To remove the continuum emission from starlight in this spectrum, we subtract a 5000 K blackbody component. We also note the presence of some emission lines in the spectrum arising from H ii regions: [Ne ii] at 12.81 µm and [S iii] at 12.81 and 18.71 µm. In Figure 11, we compare the MIR spectra of DCld 300.2-16.9 (B) and NGC 5992, finding excellent agreement between 5-12 µm. If a column density of 3.9 × 10 21 cm −2 is assumed, the bandpass-integrated SED agrees well with the H i-correlated DIRBE SED of the diffuse ISM as determined by Dwek et al. (1997) (see Section 5.1). 12 CO observations of this cloud suggest N (H 2 ) ∼ 2 × 10 21 cm −2 (Ingalls et al. 2011), and so this column density appears reasonable. As the AKARI data constrains the PAH emission in NGC 5992 at short wavelengths, we adopt this SED as our benchmark between 3 and 12 µm. Given the uncertainty of the starlight subtraction, we do not employ the data at wavelengths less than 3 µm. The spectra of NGC 5992 and DCld 300.2-16.9 diverge beyond 12 µm likely due to the more intense starlight heating, and consequently higher temperature grains, in NGC 5992. 
The spectrum of DCld 300.2-16.9 is more likely to typify the diffuse ISM and is in good agreement with the shape of the DIRBE SED, and thus we adopt it as our benchmark from 12-38 µm. However, we excise portions of the spectrum in the vicinity of the S(0), S(1), and S(2) H 2 rotational transitions at 28.2, 17.0, and 12.3 µm, respectively. In addition to the hydrocarbon features discussed above, weak mid-infrared emission features from the C-D stretching modes of deuterated aromatic and aliphatic hydrocarbons are expected near 4.5 µm, given that in the diffuse ISM D is often substantially depleted from the gas phase (Linsky et al. 2006). Detections of such emission features have been reported (Peeters et al. 2004;Doney et al. 2016), but interpretation remains uncertain. Anomalous Microwave Emission The anomalous microwave emission (AME) was discovered as a dust-correlated emission component in the microwave, both in COBE maps at 31.5, 53, and 90 GHz (Kogut et al. 1996;de Oliveira-Costa et al. 1997) and observations of the North Celestial Pole made with the Owens Valley Radio Observatory 5.5 m telescope at 14.5 and 32 GHz (Leitch et al. 1997). While these studies suggested free-free emission as a possible explanation, Draine & Lazarian (1998) argued against this interpretation on energetic grounds and suggested instead that electric dipole emission from spinning ultra-small grains was the responsible mechanism. For a recent review of AME, see Dickinson et al. (2018). The Perseus Molecular Cloud is perhaps the beststudied AME source and the excellent frequency coverage near the AME peak helps constrain the underlying SED. It exhibits a pronounced emission peak near 30 GHz with a sharp decline to both higher and lower frequencies (see Génova-Santos et al. 2015, for a compilation of low-frequency observations of Perseus). The AME of the diffuse ISM appears systematically different than what has been observed in specific clouds. For instance, the AME SED derived from all-sky WMAP and Planck maps does not exhibit a lowfrequency turnover but rather has a spectrum that appears to rise through the lowest frequency band (WMAP 23 GHz;Miville-Deschênes et al. 2008;Planck Collaboration X 2016). However, C-BASS observations in the North Celestial Pole region indicate no presence of diffuse AME at 5 GHz (Dickinson et al. 2019). More data between 5 and 23 GHz is required to place constraints on the AME SED of the diffuse ISM, in particular its peak frequency. The SED of dust-correlated emission derived by Planck Collaboration Int. XXII (2015) and presented in Table 3 includes an AME component at microwave frequencies, as can be seen in Figure 10. However, the 353 GHz emission is not perfectly correlated with AME in general (e.g. Planck Collaboration Int. XV 2014; Hensley et al. 2016;Planck Collaboration XXV 2016;Dickinson et al. 2019), and so a correlation analysis may underestimate the amount of AME relative to the submillimeter dust emission. Additionally, the other low-frequency foregrounds like free-free and synchrotron emission are also dust-correlated (Choi & Page 2015;Krachmalnicoff et al. 2018), which may bias the shape of the derived AME SED. Parametric component separation with the Commander code has yielded full-sky maps of AME (Planck Collaboration X 2016) and mitigates some of the concerns with a correlation-based approach. 
Employing these maps over the full sky, Planck Collaboration XXV (2016) found the ratio of specific intensities I_ν of the 22.8 GHz AME to the 100 µm and 545 GHz dust emission to be (3.5 ± 0.3) × 10^−4 and (1.0 ± 0.1) × 10^−3, respectively. When instead restricting to |b| > 10°, consistent results are obtained to within the uncertainties. This agrees reasonably well with the Planck Collaboration Int. XXII (2015) SED, which has corresponding ratios of 2.6 × 10^−4 and 1.1 × 10^−3, respectively. Given this agreement, we take the SED of Planck Collaboration Int. XXII (2015) as representative even at AME-dominated wavelengths. However, we note that the AME varies both in intrinsic strength and peak frequency from region to region (Planck Collaboration Int. XV 2014; Planck Collaboration XXV 2016), so comparisons between dust models and an average SED should be made with care.

Luminescence

In addition to scattering optical light, dust grains also luminesce, emitting optical photons following absorption of a higher energy photon. This can be the result of fluorescence, i.e., radiative deexcitation of the excited electronic level produced by absorption. Alternatively, internal conversion may lead to excitation of a different electronically excited state that then deexcites radiatively, a process termed "Poincaré fluorescence" (Leger et al. 1988). Luminescence at extreme red wavelengths (6000-8000 Å, corresponding to 1.5 ≲ hν ≲ 2.1 eV) has been observed in a number of reflection nebulae, including the well-studied objects NGC 2023 and NGC 7023 (Witt & Boroson 1990). Because the emission is spatially extended, it is referred to as "extended red emission" (ERE; Witt et al. 1984b). ERE is also seen in some planetary nebulae (Furton & Witt 1990) and in some unusual systems such as the Red Rectangle (Cohen et al. 1975; Schmidt et al. 1980), where it was first discovered. The dust in reflection nebulae is presumed to be interstellar dust that happens to be illuminated by a nearby star, and so we expect ERE to be a property of the general interstellar dust population. ERE is present in carbon-rich planetary nebulae, but has not been observed in oxygen-rich planetary nebulae. This strongly suggests that carbonaceous material is responsible for the ERE. In reflection nebulae, ERE is seen only when the exciting star has T_eff > 10,000 K, hot enough to provide ample far-UV radiation (Darbon et al. 1999). From the spatial distribution in IC59 and IC63, Lai et al. (2017) argue that ERE is excited by 11 < hν < 13.6 eV far-UV photons. Observed ERE intensities in reflection nebulae indicate overall photon conversion efficiencies (ERE photons emitted per UV photon absorbed) of ∼1% (Smith & Witt 2002). A number of authors have reported detection of the ERE from dust in Galactic cirrus clouds in the general ISM (Guhathakurta & Tyson 1989; Szomoru & Guhathakurta 1998; Gordon et al. 1998; Witt et al. 2008). The ERE intensities reported by Gordon et al. (1998) imply a required quantum yield of 10 ± 3% if the ERE is excited by absorbed photons in the 2.25-13.6 eV range. While certain materials do indeed have high quantum efficiencies (e.g., multilayer structures of SiO_0.9/SiO_2 luminesce at ∼0.9 µm with a quantum yield ∼45%; Valenta et al. 2019), an overall yield of 10% would strongly constrain candidate grain materials. Furthermore, if the ERE is actually primarily excited by 11-13.6 eV photons, as concluded by Lai et al. (2017), then the ERE intensities reported by Gordon et al. (1998) would require an overall quantum yield approaching 100%.
This would require that (1) the ERE must originate from a major grain component, one accounting for a substantial fraction of the far-UV absorption, and (2) this component must have a quantum efficiency of order 100% for emitting an ERE photon following a FUV absorption. We are not aware of any candidate grain materials that could meet this requirement while complying with elemental abundance constraints, and the observed extinction properties of interstellar dust. On the other hand, measurement of the 4000-9000Å spectrum of the diffuse Galactic light using SDSS blank sky spectra found that the shape of the diffuse light spectrum was consistent with the scattered light expected for standard grain models (Brandt & Draine 2012). Brandt & Draine (2012) estimated that no more than ∼ 10% of the dust-correlated diffuse light at ∼ 6500Å could be ERE. This upper limit is inconsistent with the claimed detections toward individual cirrus clouds (Guhathakurta & Tyson 1989;Szomoru & Guhathakurta 1998;Gordon et al. 1998). Additional observations will be needed to resolve this conflict. We will assume that dust in both reflection nebulae and the general ISM produces ERE when illuminated by 11 eV hν < 13.6 eV photons, with an overall photon conversion efficiency ∼ 1% as seen in bright reflection nebulae. This conversion efficiency could either be the result of a low conversion efficiency for a major dust component or high conversion efficiency emission from a minor dust component (e.g., elements of the PAH population). In addition to the ERE, there is evidence for luminescence in the blue, peaking near ∼ 3750Å, in the Red Rectangle ) and in four reflection nebulae (Vijh et al. 2005). Vijh et al. (2004) suggested that the emission is fluorescence in small, neutral PAHs, containing 3-4 rings, such as anthracene (C 14 H 10 ) and pyrene (C 16 H 10 ). It is not clear what abundance would be required to account for the blue luminescence. POLARIZED EMISSION In this section we review observations of polarized infrared emission from interstellar dust and its connection to the observed polarized extinction. Infrared Emission Just as aligned, aspherical grains polarize the starlight they absorb, the infrared emission from this same population of grains will be polarized. The balloon-borne Archeops experiment (Benoît 2002) provided a first look at polarized dust emission from the diffuse ISM in the Galactic plane. The 353 GHz observations indicated polarization fractions of 4-5%, with values exceeding 10% in some clouds (Benoît et al. 2004), suggesting substantial alignment of the grains providing the submillimeter emission. WMAP produced full-sky polarized intensity maps from 23-93 GHz. Utilizing the final 9-year WMAP data, Bennett et al. (2013) found that the polarized dust emission P ν in the WMAP bands is well-fit by a power law P ν ∝ ν 2+β with β = 1.44. With polarimetric observations extending from 30 to 353 GHz, the Planck satellite provided unprecedented constraints on the frequency dependence of the polarized emission. Planck Collaboration Int. XXII (2015) found that the full-sky average of the polarized intensity of the dust emission from 100 to 353 GHz is consistent with a modified blackbody having power law opacity κ ν ∝ ν β with β = 1.59 ± 0.02 in contrast with β = 1.51 ± 0.01 for total intensity over the same frequency range. This would imply a decrease in the polarization fraction between 353 and 70.4 GHz with a significance greater than 3σ. 
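The modified-blackbody parameterization referred to above can be written I_ν ∝ ν^β B_ν(T_d). A minimal sketch of how such a spectrum is evaluated is given below; it is ours, not code from the Planck analyses, and the parameter choices (a ∼20 K grain temperature typical of the diffuse ISM and the polarized-emission β quoted above) are purely illustrative.

```python
import numpy as np

H = 6.626e-34    # Planck constant, J s
K_B = 1.381e-23  # Boltzmann constant, J / K
C = 2.998e8      # speed of light, m / s

def planck_nu(nu_hz, T):
    """Planck function B_nu(T) in SI units (W m^-2 Hz^-1 sr^-1)."""
    x = H * nu_hz / (K_B * T)
    return 2.0 * H * nu_hz**3 / C**2 / np.expm1(x)

def modified_blackbody(nu_hz, T_d=20.0, beta=1.59, nu0_hz=353e9):
    """Modified blackbody with power-law opacity kappa_nu ~ nu^beta,
    normalized at the reference frequency nu0."""
    return (nu_hz / nu0_hz)**beta * planck_nu(nu_hz, T_d)

# Illustrative ratio of intensity at 217 GHz to that at 353 GHz
# for these assumed parameters (roughly 0.21).
print(modified_blackbody(217e9) / modified_blackbody(353e9))
```

Comparing such ratios computed with the total-intensity and polarization values of β is the basis for statements about frequency dependence of the polarization fraction.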
Subsequently, Planck Collaboration XI (2018) employed the SMICA component separation algorithm to derive a global polarized dust SED. Despite making no assumptions on the parametric form of the dust SED, they found excellent agreement with a modified blackbody having T_d = 19.6 K and β = 1.53 ± 0.02. Following updates to the Planck photometric calibration, they revised the β for total intensity to 1.48. With these changes, the β determined for intensity and polarization are the same within 2σ. In Figure 12, we plot the polarized dust SED of Planck Collaboration XI (2018) and adopt it as a model constraint. We discuss the normalization of this SED to the hydrogen column in Section 6.2. In addition to these large-scale observations of the diffuse ISM, polarimetric observations of dense clouds have also shed light on the FIR polarization properties of interstellar dust. In star-forming molecular clouds, the degree of polarization has been observed to fall from 60 to 350 µm, then rise from 350 to 450 µm (Vaillancourt 2002; Vaillancourt et al. 2008). This behavior can potentially be explained by correlated variations in the dust temperature and alignment efficiency in different regions along the line of sight, as might be expected in star-forming dense clouds. However, BLASTPol observations of the Vela C molecular cloud region and the Carina Nebula have revealed very little (≲10%) evolution in the dust polarization fraction between 250 and 850 µm (Gandilo et al. 2016; Ashton et al. 2018; Shariff et al. 2019). In particular, the measurements of Ashton et al. (2018) are consistent with the nearly constant polarization fraction implied by the polarized dust SED of Planck Collaboration XI (2018) presented in Table 3. Taken together, the Planck and BLASTPol results suggest a roughly constant dust polarization fraction between 250 µm and 3 mm, as shown in Figure 13.

Figure 13. Little wavelength dependence of the dust polarization fraction is observed except at the longest wavelengths, where AME becomes a significant fraction of the total dust emission.

Polarization in the mid-infrared dust emission features is generally not expected due to the small sizes of the grains able to emit at these wavelengths. However, a detection of polarization in the 11.3 µm PAH feature has been reported in the nebula associated with the Herbig Be star MWC 1080. If the polarization does indeed result from aligned PAHs, this may have implications for the theory of alignment of ultrasmall grains and thus predictions of AME polarization (Hoang & Lazarian 2018). However, it is not clear that either the dust properties or the physical conditions in this region are likely to typify the diffuse ISM, and so we do not employ this result as a dust model constraint.

Connection to Optical Polarization

Because the same grains are believed to provide both polarized extinction in the optical and polarized emission in the infrared, it is expected that these quantities should be tightly related. Indeed, the polarization fraction of the 353 GHz submillimeter emission p_S divided by the V-band polarization per optical depth p_V/τ_V has a characteristic value between 4 and 5 over a range of column densities (N_H ≲ 5 × 10^21 cm^−2; Planck Collaboration Int. XXI 2015; Planck Collaboration XII 2018). We adopt the best-fit value of 4.31 over diffuse sightlines (Planck Collaboration XII 2018) as representative of dust in the diffuse ISM. These relations between the polarized extinction and polarized emission from interstellar dust allow us to normalize the polarized dust SED derived by Planck Collaboration XI (2018) (see Figure 12) to the hydrogen column.
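The normalization is simple arithmetic; the short sketch below spells it out using the values adopted in this work, and the steps are described in prose in the paragraph that follows. The variable names are ours and the snippet is illustrative only.

```python
import math

R_V = 3.1               # adopted ratio of total-to-selective extinction
P_V_PER_EBV_MAX = 0.13  # maximum p_V / E(B-V), in mag^-1
R_SV = 4.31             # p_S / (p_V / tau_V) for diffuse sightlines

# Convert polarization per unit reddening to polarization per V-band optical
# depth, using A_V = R_V * E(B-V) and tau_V = A_V / (2.5 * log10(e)).
p_V_per_tau_V_max = P_V_PER_EBV_MAX / R_V * 2.5 * math.log10(math.e)

# Maximum 353 GHz polarization fraction implied by the adopted ratios.
p_S_max = R_SV * p_V_per_tau_V_max
print(f"maximum p_S = {p_S_max:.3f}")  # ~0.196, i.e., 19.6%
```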
First, p S / (p V /τ V ) = 4.31, [p V /E(B − V )] max = 0.13, and R V = 3.1 together imply a maximum 353 GHz polarization fraction of 19.6%, agreeing well with the observed maximum of p S = 22 +3.5 −1.4 % (Planck Collaboration XII 2018). Applying this polarization fraction to the adopted 353 GHz dust emission per H (see Table 3) yields a maximum 353 GHz polarized dust emission per H of 2.51 × 10 −28 erg s −1 sr −1 H −1 . The polarized dust SED of Planck Collaboration XI (2018), which is normalized to unity at 353 GHz, can then be used to compute the maximum polarized dust emission per H at lower frequencies, as presented in Table 3. We have color corrected all values, including corrections both at the observed frequency and at 353 GHz, to obtain monochromatic spectral energy densities which can be compared directly to models. Planck Collaboration Int. XXI (2015) also introduced the ratio of the 353 GHz polarized intensity to the Vband extinction, i.e, p S /p V . They found a characteristic value of 5.4 ± 0.5 MJy sr −1 on translucent sightlines. Planck Collaboration XII (2018) extended this analysis to diffuse lines of sight, finding a characteristic ratio of 5.42 ± 0.05 MJy sr −1 with a systematic decrease to roughly 5 MJy sr −1 at the lowest ( 1 × 10 20 cm −2 ) column densities observed. This ratio is not independent of values we have already adopted: which is consistent with observations at low column densities. We note, however, that the highly polarized region studied by Panopoulou et al. (2019) has P 353 /p V = 4.1 ± 0.1 MJy sr −1 , significantly lower than these values. Further study of this ratio and its variability across the sky is needed to understand this apparent discrepancy. Performing a similar comparison between optical and submillimeter polarization in the Vela C molecular cloud, Santos et al. (2017) related the 500 µm polarization fraction p 500 , I band polarized extinction p I , and V band total extinction, finding a characteristic p 500 / (p I /τ V ) = 2.4 ± 0.8. For a typical Serkowski Law (see Section 4.1), p V and p I differ by only about 10%. If the FIR polarization fraction is relatively flat between 500 and 850 µm (see Figure 13), then p 500 / (p I /τ V ) should be approximately 10% larger than p S / (p V /τ V ), which has characteristic value 4.31 (Planck Collaboration XII 2018). This apparent discrepancy may be due to the very different environments sampled by these observations, but given the importance of this ratio in constraining models, further investigation is warranted. AME Polarization If the AME arises from aligned, aspherical grains, then it too will be polarized. However, searches for polarized AME have thus far yielded only upper limits at the ∼ 1% level Macellari et al. 2011;Génova-Santos et al. 2015;Planck Collaboration XXV 2016;Génova-Santos et al. 2017). See Dickinson et al. (2018) for a recent review. To the extent that the smallest interstellar grains produce AME through rotational electric dipole radiation, the amount of AME polarization depends on how well these grains are able to align with the local magnetic field. The lack of polarization in the UV extinction curve (see Section 4) despite strong total extinction in the UV (see Section 3.3) suggests that these grains are poorly aligned. However, Hoang et al. (2013) demonstrated that if aligned PAHs were responsible for the claimed detections of polarization in the 2175Å feature and also produced the AME, then the AME should be polarized at the 1% level, near current upper limits. 
On the other hand, it has been argued that quantization of the vibrational energy levels in ultrasmall grains leads to exponential suppression of their alignment, resulting in negligible AME polarization.

SUMMARY AND DISCUSSION

Based on the foregoing discussion, we argue that the following data represent the current state of observations that constrain models of interstellar dust, and so a successful model of interstellar dust in the diffuse ISM should be measured against its consistency with these data. We also present a table of constants (Table 5) based on observational data that enables the translation of these observables into constraints on the material properties of dust.

• Extinction: We synthesize the extinction curves of Gordon et al. (2009) and Cardelli et al. (1989) in the FUV, which we join to that of Fitzpatrick et al. (2019) in the UV through the optical. From 0.55 to 2.2 µm, we employ the extinction curve of Schlafly et al. (2016) assuming A_H/A_K = 1.55 (Indebetouw et al. 2005). The curve is normalized to the hydrogen column using the adopted N_H/E(B−V) = 8.8 × 10^21 cm^−2 mag^−1 (Lenz et al. 2017).

• Polarized Extinction: Between 0.12 and 4 µm, we join a Serkowski Law with parameters K = 0.87 and λ_max = 0.55 µm (Whittet 2003) smoothly to a power law with index β = 1.6 in the IR. We normalize this curve to a maximum starlight polarization of p_V/E(B−V) = 0.13 mag^−1 (Planck Collaboration XII 2018; Panopoulou et al. 2019).

• Emission: In the MIR, we adopt the AKARI and Spitzer spectrum of the star-forming galaxy NGC 5992 (Brown et al. 2014) between 3 and 12 µm and the Spitzer IRS observations of the translucent cloud DCld 300.2-16.9 (B) (Ingalls et al. 2011) between 6 and 38 µm. The composite spectrum is scaled to the hydrogen column to match observations of diffuse Galactic emission in the DIRBE bands (Dwek et al. 1997).

Figure 14. In the top panel, we plot our adopted constraints on the total (black) and polarized (red) extinction from dust in the diffuse ISM. In the bottom panel, we plot our adopted constraints on the total (black) and polarized (red) emission from interstellar dust. Note that for both polarized extinction and emission, we show the maximum level of polarization, corresponding to the interstellar magnetic field lying in the plane of the sky. We have made use of the values in Table 5 where necessary to normalize the observational data to the hydrogen column. The data underlying the FIR emission constraints, including uncertainties, are presented in Table 3. A summary of the adopted constraints is given in Section 7. These curves will be made available in tabular form upon publication. The extrapolation of the extinction curve to FIR wavelengths can be found in Hensley & Draine (2020).

These constraints are summarized visually in Figure 14, which illustrates the impressive breadth of our current knowledge, spanning a large dynamic range in wavelength, magnitudes of extinction, and intensity, and highlights the most pressing needs for augmenting the state of the art. We close by highlighting a few such future directions of key importance for dust modeling. The spectroscopic features in extinction, emission, and polarization are the "fingerprints" of the specific materials that constitute interstellar grains, enabling determination of their chemical makeup. The Near InfraRed Spectrograph (NIRSpec, 0.6-5 µm; Bagnasco et al. 2007) and Mid-Infrared Instrument (MIRI, 5-28 µm; Rieke et al. 2015) aboard the James Webb Space Telescope (JWST) will characterize the NIR and MIR spectroscopic dust features in unprecedented detail.
Observing the full sky between 0.75 and 5 µm with a resolving power of up to 130, SPHEREx (Doré et al. 2016) will enable mapping of the strength of dust absorption and emission features and thus probe their variation with location in the Galaxy. The high spectral resolution of the XRISM (XRISM Science Team 2020) and Athena (Barcons et al. 2017) X-ray observatories promises to reveal the mineralogical composition of interstellar grains in ways complementary to what can be gleaned from the infrared features. As the 3.4 µm complex has been observed on very few sightlines that might typify the diffuse ISM, a number of questions can be addressed by more sensitive observations. Is it indeed generic of the diffuse ISM that the 3.3 µm aromatic feature is substantially broader in absorption than emission? To what extent does diamondlike carbon contribute emission and absorption in the 3.47 µm feature? How does the 3.4 µm profile change systematically with interstellar environment? The 6.2 and 7.7 µm aromatic features have been observed in absorption, but on few sightlines. Detailed characterization of these features, particularly comparison of the emission and absorption profiles, will clarify which grains are the carriers of aromatic material in the ISM. The aromatic features at still longer wavelengths have not been observed in absorption, making them a compelling target for JWST and an important constraint on PAH models. While the aliphatic 6.85 µm feature appears generic to the diffuse ISM on the basis of its detection in absorption toward Cyg OB2-12, the ubiquity of the 7.25 µm methylene feature is less clear. Characterization of these aliphatic absorption features and their strengths relative to the aromatic features is a relatively unexplored win-dow into the hydrocarbon chemistry of the ISM which JWST will enable. Likewise, the deuterated counterparts of both the aliphatic and aromatic features, inaccessible from the ground, will be accessible to JWST and SPHEREx in emission and absorption. The sensitivity of MIRI will enable searches for asyet undetected spectroscopic features and will characterize in greater detail those already observed. The silicate features can be probed for trace amounts of crystallinity, and the detection of crystalline forsterite can be verified on many more sightlines. Dedicated searches can be undertaken for the 11.2 µm SiC feature and the 11.53 µm feature from polycrystalline graphite, perhaps finally confirming or ruling out graphite as a major constituent of interstellar dust. In the NIR, NIRSpec can characterize the many DIBs found longward of 600 nm and perform sensitive searches for new ones. Likewise, the presence or absence of predicted features at 1.05 and 1.26 µm from ionized PAHs can be strongly constrained. While we anticipate advances in infrared spectroscopy, it is unfortunate that this is not the case for infrared spectropolarimetry. Polarimetry is a powerful complementary constraint on the properties of interstellar dust, particularly given the dichotomy observed in polarization between carbonaceous and silicate features. Additionally, the profiles of the spectroscopic features in extinction and polarization generically differ because each depends differently on the optical constants, and so measurement of both strongly constrains grain material properties. Additional spectropolarimetric measurements of the 9.7 and 18 µm silicate features and the 3.4 µm carbonaceous feature are desperately needed. 
In addition, the continuum polarization between 4-8 µm is poorly determined. Unfortunately, we are unaware of any operational facilities, nor of any planned ones, capable of spectropolarimetry or even broadband polarimetry between 3 and 8 µm. However, new polarimetric measurements of the 9.7 µm silicate feature are possible with CanariCam (Packham et al. 2005). Stellar optical polarimetry, on the other hand, will be pushed to high latitude, diffuse sightlines in the 2020s with the PASIPHAE survey . With a many-fold expansion of stellar polarization catalogues, new insights will be gained in the variations in the polarized extinction curve throughout the Galaxy, including its connection to polarized infrared emission. Because of the role of dust polarization in mapping magnetic fields and as a contaminant for Cosmic Microwave Background (CMB) polarization science, the prospects are better for studies of polarized emission. Of critical importance from the perspective of dust model-ing is extending coverage of the polarized dust SED to higher frequencies on sightlines that might typify the diffuse ISM. Measuring the wavelength-dependence of polarization near the peak of the dust SED will allow the contributions from different dust populations to be more efficiently disentangled. At even shorter wavelengths, we expect emission to be dominated by smaller, unaligned grains. While such measurements are already possible on dense sightlines using instruments like HAWC+ aboard SOFIA (Harper et al. 2018), the greater sensitivity afforded by upcoming facilities like CCAT-prime (260 ≤ ν/GHz ≤ 860; Stacey et al. 2018) is required to access diffuse sightlines. However, we are unaware of upcoming facilities that can perform polarimetry on the Wien side of the dust SED along diffuse sightlines. Particularly given the uncertainties in the level of polarization in the AME and the abundance of material able to emit microwave magnetic dipole radiation, extension of the determination of the polarized dust SED to lower frequencies is also of great interest. Such measurements will be made by upcoming CMB experiments such as the Simons Observatory , Lite-BIRD (Matsumura et al. 2014), and CMB-S4 (Abazajian et al. 2019), all of which have the sensitivity to characterize dust emission on the diffuse, high latitude sightlines of greatest interest to this work. These directions are but a few avenues to be explored with the wealth of upcoming data and are not intended to be exhaustive. As we emphasize in this work, dust modeling should be informed by the full range of optical phenomena associated with interstellar grains, and by combining the insights gleaned from a variety of observations across the electromagnetic spectrum, we can paint the clearest picture possible of the nature of interstellar grains. ACKNOWLEDGMENTS We are grateful to many stimulating conversations that informed this work over its long completion. We thank in particular Megan Bedell, Tuhin Ghosh, Vincent Guillet, Jim Ingalls, Ed Jenkins, Eddie Schlafly, and Chris Wright for sharing their expertise. This research was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This work was supported in part by NSF grants AST-1408723 and AST-1908123 (Virtanen et al. 2020)
2020-09-02T01:01:27.702Z
2020-08-31T00:00:00.000
{ "year": 2020, "sha1": "264e4b63dacf5ca19c957739ae32603d35ae9844", "oa_license": null, "oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/abc8f1/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "264e4b63dacf5ca19c957739ae32603d35ae9844", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
54808748
pes2o/s2orc
v3-fos-license
The Laryngeal Mask Airway (LMA) as an alternative to airway management in mentally retarded patients during dental procedures

Abstract Background: To evaluate the possibility of airway management using a laryngeal mask airway (LMA) during dental procedures on mentally retarded (MR) patients and patients with genetic diseases. Design: A prospective pilot study. Setting: University Hospital. Methods: A pilot study was designed to induce general anaesthesia for dental procedures in 15 mentally retarded patients, with airway management using a laryngeal mask airway (LMA). The parameters assessed during the pilot study included ease of LMA insertion and its seal, inspiratory pressures with controlled ventilation, visibility of the operating field and surgical comfort, recovery from anaesthesia, LMA tolerability and postoperative complications. Results: LMA insertion was successful in all of the patients, operating field visibility was good in most patients, as was tolerability, and awakening was uneventful. Serious postoperative complications (bleeding, prolonged coughing and eating disorders) were not observed. Conclusions: In this pilot study, the LMA was a suitable tool for airway management during dental procedures on the mentally retarded and on patients with genetic diseases.

Outpatient dental care of the severely mentally retarded patient requires the induction of general anaesthesia, mainly because of their complete lack of cooperation. For years, the gold standard for these procedures has been orotracheal or nasotracheal intubation, with general anaesthesia, using controlled or intermittent mandatory ventilation. Use of conscious sedation, a common procedure in adult patients, is almost impossible in the mentally retarded patient because of the lack of cooperation. 1 However, tracheal intubation in these patients is associated with pitfalls. The altered anatomy of the upper airways, associated with some genetic abnormalities (e.g. Down's syndrome and "cri-du-chat" syndrome) may result in problems with tracheal intubation. 2 Nasotracheal intubation may cause bleeding and contamination of the tube with the nasal cavity content. Post-intubation pain in the throat may lead to anxiety or aggressiveness of the mentally retarded patient, and refusal to take food. 3 The aim of the present pilot study was to test laryngeal mask airway management in outpatient dental procedures in the MR patient and in individuals with upper airway abnormalities due to genetically related syndromes.

Methods

Over a period of 30 months (between 2001 and 2003), a total of 30 mentally retarded patients required dental care under general anaesthesia (out of a total number of 42 registered patients) at the Dental Care Unit for At-Risk Patients of the Prague-based Institute for Clinical and Experimental Medicine. Six patients were managed using spontaneous ventilation without any further airway management, 8 were intubated whilst the LMA was used in 15 (Table 1). The diagnoses of patients in the LMA group are summarized in Table 1. The pilot group consisted of 7 women and 8 men, with a mean age of 27 years (range, 15-42 years). This pilot study was approved by the Ethics Committee of the institution.

Those indicated for general anaesthesia using the LMA included:
• Patients in whom tracheal intubation is expected to cause problems (previous experience, known or presumed anatomical abnormalities)
• A difficult-to-examine patient scheduled for a procedure of an unspecified type and extent.

All patients were examined preoperatively, and were given oral pretreatment using dehydrobenzoperidole (2 mL) and atropine (0.5 mg). Anaesthesia was induced in ten patients intravenously using propofol, sufentanil and low-dose suxamethonium (0.3 mg.kg-1). The LMA was inserted after the induction of general anaesthesia. These patients showed a modicum of cooperation, and allowed us to insert an intravenous cannula. The remaining five patients were given inhaled sevoflurane first, followed by peripheral venous cannulation. An appropriately-sized laryngeal mask airway (sizes 3-5) was inserted, following the administration of suxamethonium at a dose of 0.3 mg.kg-1. Anaesthesia was maintained with an O2 and N2O (nitrous oxide) mixture and sevoflurane. Patients were monitored using pulse oximetry, noninvasive blood pressure measurement, and electrocardiogram (ECG). Circuit pressure (inspiratory, expiratory and plateau) levels were also monitored. The LMA was fixed in the corner of the mouth opposite to that used for the surgical procedure. A tamponade (throat pack) was also established in the hypopharynx.

The following parameters were assessed:
• Ease of laryngeal mask insertion (assessed using a four-point scale: 1 = easy insertion at first attempt; 2 = LMA inserted after repeated attempts, no leakage; 3 = LMA inserted after repeated attempts, good seal on spontaneous ventilation, leakage on controlled ventilation; 4 = the mask does not seal after repeated attempts)
• Inspiratory pressure on controlled ventilation
• Visibility of the operative field and level of comfort for the surgeon
• Recovery from general anaesthesia, LMA tolerability, airway irritation, coughing, retching
• Postoperative complications: circulatory instability, respiratory depression, neck pain, bleeding, refusal of food intake

Results

The surgical procedure was successfully performed on 15 patients using the LMA. Five patients had multiple tooth extractions, whilst four had conservative treatment of dental caries; six mentally retarded patients underwent more extensive combined procedures. In most patients LMA insertion was uneventful and successful at the first attempt. Only two patients required repeated LMA insertion and the LMA had to be replaced by another size in one patient. There was no LMA leakage in 14 patients whereas minimal leakage could be heard with one laryngeal mask during controlled ventilation. Inspiratory pressures on controlled ventilation were below 2 kPa (1-1.8 kPa) in all patients. The laryngeal mask insertion and seal were also uneventful in the 7 patients with altered upper respiratory tract anatomy due to genetically-related diseases with predicted (or previous) difficult intubation. The surgeon rated the operating field visibility as very good in 11 cases, and as somewhat poorer in 4 cases. In no patient was the scheduled surgical procedure reduced because of poor oral cavity visibility. The laryngeal mask was very well tolerated during anaesthesia. Awakening was calm in all patients, there was no aggressiveness, confused motion, coughing or retching.
Five patients removed the LMA by themselves after awakening; the other 7 were fully awake and tolerated the presence of the LMA in the hypopharynx. The LMA was removed by the anaesthesiologist. No patient experienced serious perioperative complications such as circulatory instability, hypoventilation, aspiration or apnea. One patient required short-term (30 minute) supplemental oxygenation via a face mask. In the postoperative course, one patient developed a transient bout of coughing (2 days). Hoarseness, bleeding complications, sore throat, and refusal of food intake were not observed. Discussion This pilot study shows a relatively simple and feasible method for airway management of mentally retarded patients and those with genetic diseases undergoing dental procedures. The LMA is a standard tool for airway management during short-and medium-term procedures. 4 It took a long time before it came into routine use for procedures in the oral cavity, primarily because of the risk of dislodgment and poor visibility of the operating field. In recent years, reports have been published on a series of patients undergoing oral surgery with the help of airway management using the LMA, such as tonsillectomy, mandibular surgery and dental procedures. [5][6][7][8] The advantages of the LMA are self-evident: insertion does not require other instruments, manipulation is easy and quick, there is minimal intraoperative laryngoscope-caused damage to dentition, bleeding from nasal cavities and lower airway infection, as well as injury to the vocal cords. In the postoperative period the LMA significantly reduced the risk of laryngospasm, laryngeal and tracheal bleeding, coughing, and sore throats. In addition, the LMA can be used in patients expected to cause difficulty during intubation as a result of modified anatomical upper airway circumstances. However, the LMA has several drawbacks when used in oral cavity procedures including rigidity, increased potential for dislodgment compared with the tracheal tube, and reduced oral cavity visibility, compared with nasotracheal intubation. The LMA does not completely rule out the potential risk for aspiration, and its use may be associated with compression of the nerve structures in the oral cavity and hypopharynx. [8][9] Use of the laryngeal mask in mentally retarded patients and in those with genetic anomalies has not been investigated systemically. There have only been case reports and small series of fewer than 5 patients. 8,10 A mentally retarded patient is seldom able to undergo a procedure with analgosedation or conscious sedation, cooperating with the surgeon. These patients are usually anxious, wary of aliens, oversensitive to painful stimuli and unable to communicate. 1 Often it is not even possible to perform a dental examination prior to the procedure without general anaesthesia. In these patients, general anaesthesia is usually induced using orotracheal intubation. However, it is extremely difficult to perform tracheal intubation in some mentally retarded patients with genetic abnormalities. Down's syndrome patients (chromosome 21 trisomy) often have macroglossia, laryngomalacia, congenital subglottic stenosis and tracheal stenosis. 11 These patients are often diagnosed as having lymphoid hyperplasia; and one of its forms, referred to as lingual tonsillar hypertrophy, may cause lifethreatening airway obstruction. 12 Successful airway management in these patients using the LMA has also been reported. 
13 LMA insertion has likewise been successful in our Down's syndrome patients. Another genetic syndrome associated with upper airway anomalies is the "cri-du-chat" syndrome (deletion of part of the short arm of chromosome 5). It is characterized by mental retardation, micrognathia, anatomical abnormalities of the larynx, hypotonia, and congenital heart disease. 14 Laryngeal mask insertion in the patient with the "cridu-chat" syndrome was also successful in our cohort. Our pilot study shows that anaesthesia with airway management using the LMA is well tolerated by mentally retarded patients; it was also successfully inserted in patients with upper airway pathology. Postoperative complications, including refusal of food intake, which may pose a considerable challenge in patients following tracheal intubation, were minimal in our series. Conclusion The laryngeal mask has proved to be an option of choice in airway management in short-term outpatient conservative and dental procedures under general anaesthesia in several men-tally retarded patients and in those with genetic syndromes associated with anatomical oral cavity and upper airway abnormalities. The LMA allows for quick and simple airway management, particularly in mentally retarded patients who will not even allow preoperative oral cavity examinations, making the extent of procedure unclear. Procedures to be undertaken in several quadrants and on the MR patient with macroglossia result in poor visibility of the operating field. To improve protection against aspiration, a gauze tamponade in the hypopharyx is advisable. Still, further, larger studies are warranted to define more accurate indications and algorithms for LMA use in the mentally retarded patient in dental care.
2018-12-15T14:07:42.254Z
2004-10-01T00:00:00.000
{ "year": 2004, "sha1": "c40067e151a89ee8c370a3c63972a30cac38d7de", "oa_license": null, "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/22201173.2004.10872371?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "040af26ef0714a46feeb2132ad4d113c09affe6a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10726556
pes2o/s2orc
v3-fos-license
Mind–Body, Ki (Qi) and the Skin: Commentary on Irwin's ‘Shingles Immunity and Health Functioning in the Elderly: Tai Chi Chih as a Behavioral Treatment’
TCC and Ki (Qi)

Visitors to China from the West (and also from Japan) are impressed in the early morning when they first wake up in their hotels to see many Chinese people quietly and beautifully performing Tai Chi Chuan (TCC: Tai-kyoku-ken, in Japanese, and, incidentally, Tai Chi Chih® is a registered trademark of a school of TCC) in the park just next to their hotels. Also it is now not unusual to see TCC practised by Westerners in New York or London these days. We remember that we were quite amused to see in the film Star Wars a somewhat caricatured version of martial arts probably related to TCC shown off in a 'Western' way. Although it seems difficult to 'define' what TCC is, given its broad range from health exercise to martial art, it would be pertinent to say that its essential components are meditation, breathing and slow body movement. We usually understand it as a mode of Qigong (Ki-kou, in Japanese), which may be roughly translated into 'effectuation of Qi (Ki, in Japanese)' in English. Qigong as applied to health practice can be divided into 'external' and 'internal' facets. In the former mode, Ki external to a subject is transferred into the subject mediated by an outside 'master' Ki practitioner. In the latter mode, a subject carries out sequences of practices by him/herself to enhance (or effectuate) his/her own Ki.

It has now become clichéd among people interested in complementary and alternative medicine (CAM) that the modern analytical medicine should be transformed or evolved into more 'holistic' medicine. Many people will agree that one of the keys to this 'holistic' approach is mind/body integrity. The concept of Ki would be of great value towards this direction, as it is understood as a kind of 'energy' of mind/body as a whole. If, therefore, there was some concrete method to enhance (strengthen) Ki, it would become one of the cornerstones of holistic medicine of the future. Scientific study of Qigong is warranted from this perspective, and thus Irwin's review 'Shingles Immunity and Health Functioning in the Elderly: Tai Chi Chih as a Behavioral Treatment' in eCAM (1) is a timely one.

The basic principle of Qigong as a 'health' practice, we believe, is the assumption that if Ki is 'depressed (or vacant)', illness (evil) will develop ('illness out of hollowed Ki'). This kind of understanding will be better illustrated by using the concepts of Yin/Yo (Yin-Yang, in Chinese: or maybe 'overt/covert' in this case) and Kyo/Jitsu (Xu-Shi, in Chinese: or maybe 'hollow/full' in this case) (2) as seen in Fig. 1. According to this scheme, a subject's Ki is either Kyo (hollow) or Jitsu (full), while external evil's potency is also Kyo (hollow) or Jitsu (full). When a subject's Ki is Kyo, even an external evil with Kyo potency will cause disease. A typical example of such a situation would be opportunistic infection in immunocompromised patients. On the other hand, when a subject's Ki is Jitsu but the potency of an external evil is Kyo, the subject will overcome the evil.
When a subject's Ki is Kyo but the potency of an external evil is Jitsu, in contrast, the evil will overcome the resistance of the subject and the subject will become ill. This kind of situation is the Yin condition of the disease, such as advanced cancer in old people. When both a subject's Ki and the potency of an external evil are both Jitsu, there will emerge a Yo condition of disease, such as febrile chicken pox in young children. Thus, from the medical point of view, Ki can be seen as the totality of the body's healing systems or defense mechanisms which include the immune system as their essential part. TCC Against Diseases? There have been several reports (3)(4)(5) and reviews (6) showing that TCC enhances immune functions. However, a recent systematic review by Wang et al. (7) concludes that welldesigned studies remain to be conducted in the future, as the data presented up to now have many limitations and biases. Irwin showed in his review in eCAM (1) and several previous articles (8,9) that TCC, which is supposed to support the Jitsu condition for Ki, induced proliferation of CD4 ϩ and CD45RO ϩ T-lymphocytes against varicella zoster virus (VZV) in those who practise it. This is a fine piece of data obtained by a well-designed method, directed towards one of the general ideas above, namely enhanced Ki will overcome an external Kyo evil. It should be noted, however, that their data do not show that those who practice TCC are less likely to be afflicted by shingles, or much less that TCC will cure it. His view is also very interesting for us, who have long tried to integrate Kampo and other CAM modalities into our clinical practices as dermatologists (10,11). Shingles is a very interesting disease condition, as it is caused by secondary reactivation of VZV many decades after its actual infection, well known as chicken pox. This reactivation process is intriguing, but it is already known that many factors could contribute to it, including overwork, stress, trauma, malignancy, autoimmune diseases, grave infections, use of immunosuppressive agents and radiation. These various factors may lead to depressed immune functions which will eventually allow reactivation of VZV. We dermatologists are usually more likely to be visited by patients who are already affected by shingles, even in severe states. In those cases, our first choice is antiviral agents, which would directly 'kill' the 'external evils', in this case VZV. Even steroids, which are in principle regarded as 'immunosuppressive' agents, are being revalued these days for fulminant cases. On the other hand, however, we often use Kampo medicine such as Hochu-Ekki-To (Bu-zhong-yi-qi-tang in Chinese) for patients with shingles in order to enhance their Ki, as it is one of the typical formulae in Kampo which is prescribed for people with the Sho of Ki-Kyo (Qi-Xu, in Chinese) or 'hollowed (deficient) Ki' conditions (12)(13)(14). It would be of great interest to examine if treatment of shingles by Hochu-Ekki-To is also associated with the proliferation of T cells against VZV. Irwin's study, on the other hand, focuses on the aspect of prevention. Especially interesting is his study design where subjects selected are those older ones at higher risk of shingles. Also we appreciate his study design of having control subjects on the waiting list from the ethical point of view. Since not participating in an exercise program as mild as TCC would certainly not do anyone harm. 
This kind of design should be considered in assessment of other CAM modalities as well, for example, Kampo. At the epidemiological level, however, as Irwin himself admits, it would be rather difficult to confirm if the increase of anti-VZV T cells by TCC will in fact decrease the incidence of shingles, as the incidence itself is quite low. However, if the question to be answered is not whether immunity to certain specific 'evils' is enhanced by TCC but whether its practice will enhance Ki to reduce the chance of various external evils harming the subject, say, influenza, epidemiological studies on a large cohort should give an interesting answer. I hope our colleagues in China will carry out such a prospective study with scientific rigor. Methodological Challenge One of the pitfalls in the study on Ki is obviously that there seems to be no 'scientific' objective measure to evaluate its 'quantity'. As an American psychiatrist, Irwin keeps this problem at bay by taking a 'behaviorist' approach, asking not whether Ki enhanced immune functions but just whether immune cells against VZV were increased in those who 'performed' TCC. Indeed, he tried not to make any references to Ki in his review. Instead, he presented SF-36 scores, related to quality of life (QOL), of those who did or did not practise TCC. He verified the effects of TCC in improving the scores, especially in those older adults whose baseline scores were at or below the population norm. He proposes as a hypothesis to explain these effects that either relaxation or exercise, or both, may mediate the observed changes in immunity and health outcome, suggesting the sympathetic/parasympathetic balance as a basic mechanism for their influence. We are tempted to ask at this point if it is not possible to call 'Ki' something, which is influenced by relaxation and/or exercise, related to QOL, and based at least partly on the 'sympathetic/parasympathetic balance'. This may be a nice way to introduce the 'holistic' concept of Ki into 'analytical' Western medicine. The merit of the Ki concept would be well understood if compared with the related English term such as 'spirit' or 'vitality/vigor'. In the dualistic tradition of Western culture, spirit is understood as something non-physical or intangible, so it is essentially not influenced by one's physical condition: a person can be 'high-spirited' even if he is fatally ill. On the other hand, vitality/vigor is understood as something more related to one's physical condition, so one would not be seen as 'more vigorous' if quietly meditating. There seems to be no good term to represent the 'mind-body balance' condition, like Ki, in English. One way to illustrate Ki as a 'mind-body' concept is to point out that the word Ki is probably related to 'breathing' (and therefore of course 'air'). Indeed, it should be borne in mind that the crucial component of TCC is breathing. By breathing deeply, one can feel that they are really living, and in deep breathing a practitioner of TCC is suggested to unite concentration of mind and relaxation of body. As Asian clinicians, though, we would not like to 'mystify' the concept of Ki and also the practise of TCC so much. Ki could well be roughly regarded as a shorthand sign to represent 'mind-body QOL' and we have no doubt that several modes of exercise practised even in the West have many things in common with TCC in enhancing Ki. 
In fact, we have recommended to our Japanese patients not only TCC but stretching, swimming or even household work, for enhancement of Ki. In his study on laughter comparing rheumatoid arthritis patients and healthy people, Yoshino (15,16) reported that several 'stress' measures such as serum cortisol, interleukin-6 (IL-6) and adrenaline levels are decreased in patients who laughed, and the natural killer (NK) cell activity was increased both in patients and in healthy subjects who laughed often. We suggest that laughter would be for some subjects as effective as TCC in enhancing Ki. Thus, the concept of Ki would be as important and effective, and also as difficult to quantify, as the concept of 'stress'. However, if the concept of 'stress' can have citizenship in modern medicine, as we think is the case, why not the concept of Ki, which may be understood as something which could be the target of stress? The concept of Ki is so rich as to encompass such wide characterizations of its disturbance as Ki-Kyo (hollowed Ki), Ki-Gyaku (back streamed Ki) and Ki-Utsu (stagnant-Ki) (12). The usual concept of stress is probably related to the latter two Ki states, while the first would be understood as the condition where a person has yielded to stress. We should like to point out that as the research into 'stress' has suffered from the absence of 'objective' measures which could be called 'stress indices (or stress coefficient, SQ?)', difficulty in quantifying Ki and its derangement is not related to its cultural (Asian) worldview. It is the difficulty of introducing any holistic or 'qualia'like concepts into the modern Western methodology of science. Subjective or Objective It would therefore be tempting for a Western psychiatrist such as Irwin to approach this problem from a behavioristic point of view and try to represent a subject's Ki (or stress) state in such terms as the SF-36. Especially interesting is the possibility of relating Ki to those SF-36 scores such as General Health Perception and Vitality. We are sure that SF-36 would help us obtain useful data without which no large-scale comparative trials would be possible. However, we must express serious reservations concerning this approach, not because it tries to quantify something 'subjective', or what cannot in principle be quantified. As long as its limitation is realized, such an attempt of objectively representing 'subjective' qualities has actually advanced, not hampered, the progress of scientific study of the mind. What we would like to point out is that such a questionnaire method has limitations, because it takes a subject's verbal reactions to written questions at face value. In addition to, of course, the problem of conscious lying, questionnaire methods are not well guarded against the problem of a subject's unconscious self-deception. This point is especially important in assessing many CAM modalities in which some practitioners offer 'magical' healing power. For example, nowadays we encounter a lot of 'external' Qigong 'masters' who boast they can heal many patients' diseases by transmitting their special Ki energy. I am afraid that there are not just a few people who could be deceived by such gurus into responding to questionnaires in such a way that their SF-36 scores would appear to have dramatically improved. Ki can be Approached 'Objectively' We would like to remind readers of the very merit of the concept of Ki, that it is the concept of the state of the mind/ body as a whole. 
It is thus not a 'subjective' state, which can only be known introspectively. From our clinical experience, the 'Ki-Kyo' state is diagnosed very 'objectively': those with 'Ki-Kyo' are weak in voice, have no 'strength' in their eyes and their posture is poor. We do not think anyone can pretend to be in good Ki condition, even if they can pretend to have high SF-36 scores on the paper questionnaire. In this sense, Ki is a very objective entity. It is not an abstract and subjective entity like soul or spirit. As experienced dermatologists, we would also like to point out that we can judge a patient's Ki state by just glancing at their skin condition. Those people healthy in mind-body, or with good Ki, have bright and 'full' skin. Though difficult to quantify, these are 'objective' and non-verbal properties, unlike subjective states of expressed verbally by the subject in response to a written questionnaire. There is thus a definite possibility that we can elaborate on the concept of Ki as an objective 'scientific' term. It is a basic East Asian 'philosophy' of health/disease that those with a good Ki state are highly immune to diseases. It is very nice to see that Western clinical researchers such as Irwin have undertaken the challenge to tackle this difficult problem of Ki, or mind-body unity. Now is an exciting era, when for the first time it has became possible for a western psychiatrist and an Eastern dermatologist to work together towards reconciling this fundamental difference between the medicines of the East and the West.
2016-05-04T20:20:58.661Z
2005-03-01T00:00:00.000
{ "year": 2005, "sha1": "665242c9df2d1f05e7bacf1a8b9c7ddb087f18f1", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/ecam/2005/178234.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a7c682313dcdf6b8430129ddc3cbac900b3e5c2e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
235652266
pes2o/s2orc
v3-fos-license
Domain-guided Machine Learning for Remotely Sensed In-Season Crop Growth Estimation

Advanced machine learning techniques have been used in remote sensing (RS) applications such as crop mapping and yield prediction, but remain under-utilized for tracking crop progress. In this study, we demonstrate the use of agronomic knowledge of crop growth drivers in a Long Short-Term Memory-based, domain-guided neural network (DgNN) for in-season crop progress estimation. The DgNN uses a branched structure and attention to separate independent crop growth drivers and capture their varying importance throughout the growing season. The DgNN is implemented for corn, using RS data in Iowa for the period 2003-2019, with USDA crop progress reports used as ground truth. State-wide DgNN performance shows significant improvement over sequential and dense-only NN structures, and a widely-used Hidden Markov Model method. The DgNN had a 4.0% higher Nash-Sutcliffe efficiency over all growth stages and 39% more weeks with highest cosine similarity than the next best NN during test years. The DgNN and Sequential NN were more robust during periods of abnormal crop progress, though estimating the Silking-Grainfill transition was difficult for all methods. Finally, Uniform Manifold Approximation and Projection visualizations of layer activations showed how LSTM-based NNs separate crop growth time-series differently from a dense-only structure. Results from this study exhibit both the viability of NNs in crop growth stage estimation (CGSE) and the benefits of using domain knowledge. The DgNN methodology presented here can be extended to provide near-real time CGSE of other crops.

Introduction

The increase in synchrony of global crop production and frequency of climate change-driven abnormal weather events is leading to higher variance in crop yields [1][2][3]. Most staple food crops are more vulnerable to yield loss in specific stages of growth and as such, accurate crop growth stage estimation (CGSE) is vital to track crop growth at different spatial scales - local, regional, and national - and anticipate and mitigate the effects of variable harvest. High-resolution Remote Sensing (RS) data have been successfully employed to track crop growth at regional scales, however current methods for CGSE utilize curve-fitting and simplistic Machine Learning (ML) models that cannot describe the more complex relationships between crop growth drivers and crop growth stage progress [4][5][6][7][8]. Many of these methods require full-season data and do not provide in-season CGSE information. Advanced ML models have found success in applications such as crop-cover mapping/classification and yield estimation [9][10][11][12][13], but these models have yet to be applied for in-season CGSE. Whereas methods such as Neural Networks (NNs) have been used in crop mapping, for which researchers can utilize crop cover maps [14] to retrieve millions of crop cover examples per year, field-level crop growth stage (CGS) data is not publicly available and producing field scale ground truth data via field studies is prohibitively expensive. As such, large scale CGSE research relies on local and regional level crop progress data for ground truth. Even for the longest continually running sub-weekly temporal resolution remote sensing (RS) sensors (e.g. MODIS), there are only 21 full growing seasons of crop growth data.
Constructing accurate, in-season ML approaches from such limited data is difficult, particularly with few example seasons of abnormal weather. In addition, many crop growth studies estimate events such as 'start of season', 'peak of season', 'end of season' etc. [6][7], even though these events do not really describe phenological progress and knowledge of their timing may not be actionable.

Recently, domain knowledge has been used to improve the performance of ML techniques in applied research using techniques collectively known as Theory-guided Machine Learning (TgML) [15][16]. TgML techniques include the use of physical model outputs [17], the integration of known domain limits into ML loss functions [18], and the designing of NN structures that reflect how variables interact within a real physical system [19]. These techniques have been shown to reduce the amount of data required to reach a given level of performance [15]. These Theory-guided Neural Networks have begun to significantly improve upon current state-of-the-art methods in applications such as [20][21]. Whereas NNs have shown great promise in agricultural RS studies (e.g. [11][12]), TgML methods have yet to utilize significant disciplinary advances in agriculture over the last two to three decades.

The goal of this study is to understand the impact of incorporating domain knowledge into NN design for in-season CGSE at regional scales. Specifically, the objective of the study is to develop a Domain-guided NN (DgNN) that separates independent growth drivers and compare its performance to sequential NN structures of equivalent complexity. The TgML approach in this study is demonstrated for regional CGSE in field corn, which is one of the most cultivated crops in the world [22]. The methodology here, when paired with adequate crop mapping techniques, can be extended to track in-season growth of other crops.

Study Area and Data

This study was conducted in the state of Iowa, US, from 2003 to 2019. The state consists of nine separate Agricultural Statistical Districts (ASDs) (see Figure 1) and had an average of 13.4 million acres of corn under cultivation across the study period [23]. Locations of corn fields within the study region were obtained from the Corn-Soy Data Layer (CSDL) [24] for 2003-2007 and from the USDA Crop Data Layer (CDL) from 2008-2019 [25]. In Iowa, corn is typically planted in mid April / early May (week of year (WOY) 15-24), reaches its reproductive stages around late June (WOY 27 onward), and is harvested from early September through late November (WOY 36-48). Weekly USDA-NASS Crop Progress Reports (CPRs), generated from grower and crop assessor surveys, were used as ground truth. CPR progress stages include Planted, Emerged, Silking, Dough, Dent, Mature, and Harvested. In this study, the Planted stage was replaced with Pre-Emergence, a placeholder progress stage that represents all crop/field states prior to emergence, and the Dough and Dent stages were combined as Grainfill. CGSE requires both canopy growth information and meteorological data. This study used ASD-wide means and standard deviations of field-level RS and other data shown in Table 1. Fields in each ASD were selected from the CSDL and CDL based on size and boundary criteria (see Figure 2).
Micro-meteorological observations obtained from DayMet were used to compute field-level accumulated growing degree days (AGDD), which is a measure of accumulated temperature required for crop growth. AGDD is used to model progression through different corn growth stages, both in remote sensing studies and in mechanistic models [4][8][26]. The total number of growing degree days (GDD) for a single 24 hour period is calculated using the function GDD = max((T_max + T_min)/2 - T_base, 0), where T_max is the lower value of the daily maximum temperature and 34 °C, T_min is the minimum recorded daily temperature, and T_base is the minimum temperature above which GDD is accumulated, set to 8 °C [27]. AGDD is a running total of daily GDD values, and in this study it is calculated from April 8th of a given year, which is the date prior to first planting during the study period. Solar radiation inputs were converted from W/m² to MJ/m²/week using day length taken from DayMet to incorporate photoperiod information. Saturated hydraulic […] [30].

To measure canopy growth, 4-day MODIS Fraction of absorbed Photosynthetically Active Radiation (FPAR) values for each field within an ASD were filtered to produce daily time series data following the Savitzky-Golay (SG) filter method for NDVI used in [31]. Filtered values for a given week vary slightly as the season progresses and more data is included within the long-term filtering window. Noise filter adaptations to the existing SG method included rejecting points with an absolute gradient of > 0.3 from the previous value, prior to September, the earliest harvest over the study period. These adaptations prevented noisy, phenologically unrealistic data from being included in the moving filter window. It should be noted that while this filtering system is effective for a uni-modal crop such as corn, it may not be effective for crops with more complex seasonal FPAR patterns such as winter wheat, where higher polynomial filter parameters may be required.

Since CPRs are released every Monday from data collected during the prior week, weekly meteorological data were obtained by aggregating field-scale daily data from Monday-Sunday. The dataset spanned 38 weeks, WOY 13-51, encompassing the earliest planting and latest harvest reported during the study period. To simulate in-season monitoring, one time series was produced from pre-emergence to the 'current' week per field, totaling 39 time series. Field-level time series for each input were then aggregated to ASD-level by calculating the mean and standard deviation of the values across each district (median was used for rainfall), with 12 total inputs (Table 1). These ASD-wide means and standard deviations formed the un-scaled data for the study. Solar radiation and rainfall were standardized using Z-score scaling and AGDD and FPAR were standardized using MinMax scaling. To standardize the length of each time series to 39 weekly values, all Z-scored in-season time series were zero padded, while MinMax-scaled inputs were padded with 0.5. ASD location within Iowa was represented using a one-hot location vector of length 9, with each bit representing an ASD. The complete 17-year dataset consisted of 5967 time series, each of dimension (39 × 12) with an accompanying location vector.
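Since the GDD and AGDD definitions above are fully specified by the capped daily maximum, the 8 °C base temperature, and a running sum, they translate directly into a few lines of code. The following is a minimal sketch, not the authors' implementation; the input series and example values are illustrative only.

```python
import numpy as np

T_BASE = 8.0   # C, minimum temperature above which GDD accumulates
T_CAP = 34.0   # C, cap applied to the daily maximum temperature

def daily_gdd(t_max, t_min):
    """Growing degree days for a single 24 hour period."""
    t_max = min(t_max, T_CAP)                 # take the lower of Tmax and 34 C
    gdd = (t_max + t_min) / 2.0 - T_BASE      # mean daily temperature above the base
    return max(gdd, 0.0)                      # no negative accumulation

def agdd(t_max_series, t_min_series):
    """Accumulated GDD: running total of daily GDD from the series start (e.g. April 8)."""
    daily = [daily_gdd(tx, tn) for tx, tn in zip(t_max_series, t_min_series)]
    return np.cumsum(daily)

# Example: one week of daily temperature extremes (C)
print(agdd([20, 25, 30, 36, 18, 22, 28], [8, 10, 12, 15, 6, 9, 11]))
```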
The NNs in this study were based upon Long Short-Term Memory (LSTM) layers [32], which are widely used in sequence identification / classification problems, such as speech recognition, translation, and time series prediction [33]. LSTM has also found success in RS studies, including crop classification [11] and yield prediction [34]. In this study, LSTM was used because of its ability to handle time series data with variable length gaps between key events [35], such as variable in-season crop growth data. Three NN implementations were investigated, two reference structures and a third NN that incorporated domain knowledge. The two reference structures included a dense NN, with traditional dense layers of decreasing size (Figure 3a), and a sequential NN in which LSTM layers were linearly chained (Figure 3b).

In designing the third NN, interactions between the 12 different inputs (see Table 1) and their effect on crop growth were considered. Domain knowledge was incorporated into the NN by separating inputs into a branched structure based on their relationship to crop growth. TgML studies suggest that organizing NN inputs to reflect their real world interactions may improve performance [15]. For example, Khandelwal et al. [16] […] [37], and excess solar radiation or photoperiod is used in mechanistic crop models to determine growth stage timing (e.g. [26]). In addition, soil moisture stress, due to low rainfall, in juvenile corn has been found to delay growth progression and reduce final plant size [38][39]. Typically, the effects of these two drivers on canopy growth and crop progress are modeled separately [26,40,41]. In this study, solar radiation and soil moisture-related inputs were separated from FPAR and AGDD using a branched structure with attention […].

In this study, Kullback-Leibler Divergence (D_KL) was used as the loss function for all NNs. D_KL is a measure of the difference between two probability distributions and is often used in NN regression problems with targets that are distributions. Given two distributions P(x) and Q(x), D_KL is calculated as D_KL(P || Q) = Σ_x P(x) log[P(x)/Q(x)]. Here D_KL is used as it provides a measure of the difference between the predicted and actual crop stage distributions. ASD-level CGS estimates for the NNs and HMM were aggregated to state-level estimates for comparison via weighted sum, with ASD weights calculated based on the number of corn fields in each ASD that passed the processing criteria, explained in Section 2.1 and shown in Figure 2.

Performance of the three NN structures and the HMM were evaluated against state-wide USDA CPRs using two metrics. The first, Nash-Sutcliffe efficiency (NSE), is a measure commonly used in hydrology and crop modeling of how well a model describes an observed time series versus the mean value of that time series. It is defined as NSE = 1 - Σ_t (O_t - M_t)² / Σ_t (O_t - Ō)², where O_t and M_t are the observed and modeled values at time t and Ō is the mean of the observed series, and it was computed for estimates of each given growth stage over time. The second metric, cosine similarity (CS), is a measure of the angle between two vectors in a multi-dimension space. CS between two vectors A and B is calculated as CS(A, B) = (A · B) / (||A|| ||B||).

Uniform Manifold Approximation and Projection (UMAP) embeddings of layer activations were visualized for the layers in each NN feeding into the 128 node dense and softmax layers, these being common to all three NNs (see Figure 3). Color representations of crop progress were formed by reducing the six crop stages to three RGB channels using a UMAP reduction to 3 dimensions, with 15 neighbors.
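For reference, the two comparison metrics defined above can be written compactly as follows. This is a minimal NumPy sketch, not the authors' code; per-stage bookkeeping and the weighted state-level aggregation described earlier are omitted, and the example numbers are illustrative.

```python
import numpy as np

def nse(observed, modeled):
    """Nash-Sutcliffe efficiency: 1 is a perfect match, 0 is no better than the observed mean."""
    o = np.asarray(observed, dtype=float)
    m = np.asarray(modeled, dtype=float)
    return 1.0 - np.sum((o - m) ** 2) / np.sum((o - o.mean()) ** 2)

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors, e.g. weekly stage-distribution vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example: observed vs. estimated cumulative progress for one stage,
# and two six-stage distributions for a single week (values are illustrative).
print(nse([0.05, 0.30, 0.70, 0.95], [0.10, 0.25, 0.65, 0.95]))
print(cosine_similarity([0.0, 0.2, 0.6, 0.2, 0.0, 0.0],
                        [0.0, 0.3, 0.5, 0.2, 0.0, 0.0]))
```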
[…] [18][19]. In addition, as seen in Table 6, the DgNN produced the best estimates across all growth stages for the greatest number of weeks in each of the test years, producing the highest CS value for 39% more weeks than the next best NN. […] stages. This is expected given the large deviation in crop progress timing from the study mean during that year (see Table 3). With the exception of the Emerged stage, the DgNN best described each of the growth stages, as measured by NSE. The week-to-week CS performance degraded less for the DgNN than the other NNs during the fast-moving WOY 33-43 Mature progression, as seen in Figure 6.

[…] progress to rates not seen since 1987 [48]. As shown in Table 3, growth stage time to 50% was […] (Figure 9). In 2014, Silking NSE for the NNs was low, even though crop progress that year was relatively typical of the study period. All three NN structures were late in estimating the onset of the Grainfill stage, as shown in Figure 9 and also reflected in the decline in CS between WOY 30 and 34 (see Figure 7). Cumulative progress estimates, as seen in Figure 9, exhibit […] stage (see Figure 9). In the absence of measurable canopy change, AGDD is a useful proxy for estimating progress from Silking to Grainfill. AGDD, however, is cultivar specific, and cultivars are selected based on different factors, such as planting timing and drought risk. Cultivar-specific variation in required AGDD for progress through Silking and the short duration of that stage may reduce the effectiveness of AGDD as a proxy. In addition, AGDD […]

Conflicts of Interest: The authors declare no conflict of interest.
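To make the branched, domain-guided structure described in the methods more concrete, a minimal Keras sketch of such a network is given below. It is an illustration, not the authors' implementation: the split of the 12 inputs between the two branches, the LSTM width, and the use of simple self-attention followed by average pooling are assumptions; only the 39-week input length, the 9-element one-hot district vector, the 128-node dense layer, the six-stage softmax output, and the Kullback-Leibler loss follow the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_WEEKS, N_STAGES = 39, 6

# Branch 1: canopy growth and thermal time (e.g. FPAR and AGDD means / std devs).
canopy_in = layers.Input(shape=(N_WEEKS, 4), name="canopy_agdd")
# Branch 2: weather and soil-moisture related drivers (e.g. solar radiation, rainfall).
weather_in = layers.Input(shape=(N_WEEKS, 8), name="weather_soil")
# Static one-hot vector identifying the agricultural statistical district.
district_in = layers.Input(shape=(9,), name="district")

def lstm_branch(x, units=64):
    """An LSTM over the weekly sequence with self-attention, pooled to a vector."""
    h = layers.LSTM(units, return_sequences=True)(x)
    h = layers.Attention()([h, h])            # Luong-style self-attention over weeks
    return layers.GlobalAveragePooling1D()(h)

merged = layers.Concatenate()(
    [lstm_branch(canopy_in), lstm_branch(weather_in), district_in]
)
hidden = layers.Dense(128, activation="relu")(merged)
stage_dist = layers.Dense(N_STAGES, activation="softmax")(hidden)  # weekly stage distribution

model = Model([canopy_in, weather_in, district_in], stage_dist)
model.compile(optimizer="adam", loss=tf.keras.losses.KLDivergence())
model.summary()
```

The point of the branched arrangement is that the canopy/thermal-time inputs and the weather/soil-moisture inputs are processed by separate recurrent paths, so each driver group can be weighted on its own before being merged with the district vector ahead of the shared dense and softmax layers.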
2021-06-28T01:16:06.070Z
2021-06-24T00:00:00.000
{ "year": 2021, "sha1": "3a1e37a87773d65f4b3ca39894700592154dd476", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/13/22/4605/pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "3a1e37a87773d65f4b3ca39894700592154dd476", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
233177256
pes2o/s2orc
v3-fos-license
Internet use at and outside of school in relation to low- and high-stakes mathematics test scores across 3 years The excessive use of Internet-based technologies has received a considerable attention over the past years. Despite this, there is relatively little research on how general Internet usage patterns at and outside of school as well as on weekends may be associated with mathematics achievement. Moreover, only a handful of studies have implemented a longitudinal or repeated-measures approach on this research question. The aim of the current study was to fill that gap. Specifically, we investigated the potential associations of Internet use at and outside of school as well as on weekends with mathematics test performance in both high- and low-stakes testing conditions over a period of 3 years in a representative sample of Estonian teenagers. PISA 2015 survey data in conjunction with national educational registry data were used for the current study. Specifically, Internet use at and outside of school as well as on weekends were queried during the PISA 2015 survey. In addition, the data set included PISA mathematics test results from 4113 Estonian 9th-grade students. Furthermore, 3758 of these students also had a 9th-grade national mathematics exam score from a couple of months after the PISA survey. Finally, of these students, the results of 12th-grade mathematics national exam scores were available for 1612 and 1174 students for “wide” (comprehensive) and “narrow” (less comprehensive) mathematics exams, respectively. The results showed that the rather low-stakes PISA mathematics test scores correlated well with the high-stakes national mathematics exam scores obtained from the 9th (completed a couple of months after the PISA survey) and 12th grade (completed approximately 3 years after the PISA survey), with correlation values ranging from r = .438 to .557. Furthermore, socioeconomic status index was positively correlated with all mathematics scores (ranging from r = .162 to .305). Controlled for age and gender, the results also showed that students who reported using Internet the longest tended to have, on average, the lowest mathematics scores in all tests across 3 years. Although effect sizes were generally small, they seemed to be more pronounced in Internet use at school. Based on these results, one may notice that significantly longer time spent on Internet use at and outside of school as well as on weekends may be associated with poorer mathematics performance. These results are somewhat in line with research outlining the potentially negative associations between longer time spent on digital technology use and daily life outcomes. Introduction The relationship and possible impact of information and communication technology (ICT) on academic outcomes has received considerable attention over the past years. Diffusion of various devices, such as personal computers and laptops, smartphones, and tablets have the potential of enhancing productivity and efficiency of learning. For instance, it is possible to have a library of materials in one's personal device, replacing a heavy backpack filled with textbooks. Information-seeking is easier than ever-popular search engines, such as Google, and online encyclopedias (e.g., Wikipedia) allow to browse for up-to-date knowledge, supplementing or even replacing textbooks. 
Various applications may be useful in visualizing mathematical concepts (e.g., the nature of a regression equation, geometry, etc), and simulations could help with understanding scientific concepts (Atit et al., 2020). Paper-and-pencil testing could be supplemented or substituted by computer-based assessment (Nissen, Jariwala, Close, & Dusen, 2018). There is also evidence that educational technology applications may have a positive effect on mathematics achievement (Cheung & Slavin, 2013;Miller, 2018). On the other hand, it has also been shown that ICT use may not necessarily enhance academic experience as expected (Star et al., 2014). Furthermore, several studies have reported that excessive use of Internet-based digital technologies 1 is associated with impairments in daily life. Not only has it been demonstrated to inversely correlate with psychological well-being (Twenge & Campbell, 2018), it has also been shown that educational factors have negative associations with excessive Internet-based digital device use (Kates, Wu, & Coryn, 2018;Lepp, Barkley, & Karpinski, 2015;Rozgonjuk, Saal, and Täht, 2018). From an instructor's point of view, it may be relevant to understand the association between online-and offline-teaching (Yang, 2017) as well as study habits that could be influenced by ICTs (Hora & Oleson, 2017;Rozgonjuk, Kattago, and Täht, 2018). In addition, it has been found that the potential benefits of ICTs in mathematics education context may not be fully harnessed if the educator lacks ICT-related knowledge, has minimal training as well as learning opportunities regarding ICT, and if the technical support of ICT is limited (Zakaria & Khalid, 2016). Therefore, it would be relevant to take a closer look at Internet use in relation to mathematics achievement. In the current study, we focus on students' general self-reported duration of Internet use at and outside of school, as well as on weekends, in relation to PISA 2015 mathematics test results and 9th-and 12th-grade national mathematics exam scores. The results could be helpful, as they may provide insights into the association between ICT and mathematics outcomes; this knowledge, in turn, could be potentially useful in designing cyber-hygiene practices that may be helpful in improved academic achievement. Literature overview Mathematics has been demonstrated to be a key factor in future academic success (Konvalina, Wileman, & Stephens, 1983). The results of mathematics exams could play an important role in students' future educational path: for instance, in Estonia, a students' admission to university may heavily depend on how well one performs on a mandatory national mathematics exam. Although there are many studies that have investigated the interplay between academic outcomes and Internet-based digital technology use, studies have not generally focused on mathematics or have focused on very specific educational technology applications or platforms (Cheung & Slavin, 2013;Fabian & Topping, 2019). Nevertheless, it has been suggested that these digital technologies may shape the future of mathematics education (Engelbrecht, Llinares, & Borba, 2020). Even though the usage of ICTs has the potential to improve the efficiency of a classroom experience (e.g., by substituting paper-based books with digital resources, as well as by allowing the implementation of distance learning), previous studies have demonstrated that the effects of ICT use are not necessarily learningenhancing. 
In fact, some studies have found that ICT use may even impair learning-this also in the context of mathematics (Bulut & Cutumisu, 2017;Zhang & Liu, 2016). Ravizza, Hambrick, & Fenn (2014) demonstrated that non-academic Internet use was negatively related to learning even when the students' ability was controlled for. Several factors may explain this association (Hu, Gong, Lai, & Leung, 2018). First, school-level indicators, such as the availability of Internet-based technologies as well as the size of school could play a large role (Eickelmann, Gerick, & Koop, 2017;Luu & Freeman, 2011). Second, several factors regarding individual differences in students' perception and attitudes towards ICT use may be relevant (Hu, Gong, Lai, & Leung, 2018). Third, where (e.g., at school vs outside of school) and for what purposes (e.g., purely for learning vs entertainment) the ICTs are used may also be helpful in explaining the association between ICT use and academic outcomes (Petko, Cantieni, & Prasse, 2017;Skryabin, Zhang, Liu, & Zhang, 2015). Finally, individual differences in predisposing factors, such as personality traits, emotion regulation, and tendency to procrastinate have been shown to be relevant in developing problematic ICT use patterns (Brand, Young, Laier, Wölfling, & Potenza, 2016, Brand et al., 2019Rozgonjuk, 2019) . While there are studies that have investigated the relationship between Internet and other ICT use and academic achievement, research on general ICT usage patterns in relation to mathematics outcomes is rather scarce. Moreover, most of these studies tend to rely on cross-sectional data. Of note, however, it should be mentioned that Zhang and Liu (2016), for instance, have also provided evidence over a longer period of time, finding that ICT use at school is negatively correlated to academic achievement. In the current study, we aim to present a more comprehensive set of empirical evidence on the association between Internet use and mathematics achievement. Specifically, we take into account the differences in Internet use at and outside of school and on weekends, and we include both low-and high-stakes mathematics achievement test scores. The former includes the PISA 2015 (see the Sample and procedure section) mathematics test scores, while the latter data are from two additional time points: from 9th-and 12th-grade national mathematics exams. Importantly, while PISA 2015 mathematics test could be considered as a low-stakes test-since the outcome of this test does not affect a student's future significantly (Mägi, Adov, Täht, & Must, 2013;Silm, Must, & Täht, 2013)-the national mathematics exams can be considered as high-stakes tests within the Estonian educational system, because the results of these tests could play a proportional role in a student's admission scores for the next stage of education. Additionally, students' gender and socioeconomic status (SES) may play a role in academic achievement and mathematics results. According to a meta-analysis by Voyer and Voyer (2014), female students tend to achieve better grades in school-this is a common finding, regardless of culture, school subject, or time period (based on literature from 1914 to 2011). But in addition to better academic achievement, female students also achieve better rates in higher education graduation and postsecondary school enrollments, and demonstrate better overall retention when compared with male students (Clark, Sang Min, Goodman, & Yacco, 2008). 
On the other hand, meta-analysis that included standardized test performance results indicates that male students tend to achieve better results in mathematics (Lindberg, Hyde, Petersen, & Linn, 2010) and natural sciences (Hedges & Nowell, 1995). Men also tend to have a higher propensity towards choosing a science, technology, engineering, and mathematics (STEM)-related career (Ketenci, Leroux, & Renken, 2020). On the other hand, female students have been found to have more positive mathematics attitudes compared with male students (Zuo, Ferris, & LaForce, 2020). The positive relationship between students' SES and academic achievement has been shown in many studies (e.g., Jimerson, Egeland, Sroufe, & Carlson, 2000;Liu & Schunn, 2020;Sirin, 2005;White, 1982). For instance, a meta-analysis by White (1982) demonstrated that the relationship yielded an effect size of r = .343, while a meta-analysis by Sirin (2005) found the effect size of r = .299. Therefore, one may conclude that SES generally has a medium-sized positive effect on academic achievement, including in mathematics and natural sciences. Therefore, it would be a necessary covariate in investigating links between mathematics performance and other variables. The data set of the current study is based on an Estonian student sample. This may be of interest for mathematics as well as educational scientists in general for two reasons. Firstly, according to PISA 2015 results, the achievements of Estonian students in mathematics (as well as functional reading and science) were among the highest within the countries participating in PISA 2015 survey (OECD, 2016). Second, Estonia is also wellknown for its high levels of digitalization and diffusion of e-governance (Solvak et al., 2019) that may also promote higher digital technology implementation in education. The results of this study could shed light on whether these aspects could be relevant when compared with previous findings in the field of ICT use and academic achievement (in mathematics). Conceptual framework One of the explanations to these findings is the so-called displacement hypothesis, according to which the negative effects of ICT use are directly proportional to time spent on one's device. This is because time spent on a digital device decreases the potential time one could spend on reading books, exercising, and/or socializing in nondigital settings (Neuman, 1988). According to another explanation, the digital Goldilocks hypothesis, engaging too little or too much in digital technology use could result in poorer outcomes (Przybylski & Weinstein, 2017). On one hand, not using the Internet may result in a student's inability to seek for additional information and to discuss homeworks via social networking sites. On the other hand, too much Internet use could result in a student not paying attention to learning relevant materials-especially when it is being communicated orally by a teacher. In fact, some ICT use has been shown to be related to more favorable outcomes than no or too much use in children's subjective well-being (Przybylski & Weinstein, 2017;Twenge & Campbell, 2018) and cognitive test results (Rozgonjuk & Täht, 2017). In general, the lowest test scores were associated with individuals reporting highest time spent online. The current study could add to this theoretical framework by providing empirical evidence that could further help the discussion on the association between ICT use and academic outcomes. 
Specifically, we show how groups based on self-reported Internet use at and outside of school as well as on weekends differ from each other in mathematics performance. Furthermore, the associations are controlled for students' SES and gender, variables that are typically associated with academic performance, including STEM subjects (Sirin, 2005;Voyer & Voyer, 2014). Therefore, these results could potentially show if any of these conceptual frameworks could be used as a potential explanation for the results. Aim of this work The general aim is to investigate if self-reported Internet use at and outside of school and on weekends that was queried in 9th grade is associated with mathematics test performance across both low-and high-stakes mathematics testing conditions over a period of approximately 3 years. It should be noted, however, that in the current study, Internet use at school means not only using it for schoolwork, but also for leisurely activities. Based on previous findings, there is reason to believe that the lowest mathematics achievement scores are associated with the highest self-reported Internet use. However, in addition to providing insights into Internet use at school in association with mathematics performance, we also provide evidence for how mathematics performance is correlated with Internet use outside of school as well as on weekends. The findings of this study may be informative regarding how general Internet use could be a predictive factor for mathematics achievement. For instance, large-effect associations would mean that Internet use at school could play a pivotal role either in improving or hindering students' achievement in mathematics. On the other hand, small or non-significant associations between Internet use and mathematics test scores may be helpful for recalibrating people's expectations towards technology's effects on mathematics achievement. As has been shown recently, the potential effects of digital technology use may be smaller (yet negative) than could be expected from sensationalized headlines in mass media (Orben & Przybylski, 2019) which may unnecessarily fuel technology use-related panics (Orben, 2020;Segool, Goforth, Bowman, & Pham, 2016). In other words, due to sensationalized headlines in the media, teachers, parents, and other stakeholders may be led to believe that digital technology use is more harmful than it actually may be. In the context of the current study, for instance, mathematics teachers may choose not to use digital technology-based teaching methods because of the fear that digital technology use may be detrimental for students' achievements. Similarly, small or non-significant associations could also be helpful in curbing one's enthusiasm regarding the hype typically associated with new technological advances. Accordingly, this study may have the potential to provide input into policymaking regarding ICT use in education (e.g., whether the general public should be concerned or not). In addition, if the results demonstrate the associations between Internet use (e.g., the more Internet is used, the lower the mathematics results) outside of school or on weekends, there could be a reason to believe that by targeting the Internet use on one's time outside of school could have potential to improve a student's mathematics performance. 
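To make the contrast between the two conceptual framings discussed above concrete, the sketch below compares a purely linear (displacement-style) fit with a quadratic (Goldilocks-style) fit of mathematics scores on the ordinal Internet-use category. This is not part of the study's analysis; the data, variable names and the model-comparison criterion are illustrative assumptions only.

```python
# A minimal sketch (not from the paper): distinguishing the displacement
# hypothesis (monotonic decline) from the Goldilocks hypothesis (inverted U)
# by comparing a linear and a quadratic fit of test scores on the ordinal
# Internet-use category (1-7). All data here are simulated.
import numpy as np

rng = np.random.default_rng(0)
internet_use = rng.integers(1, 8, size=500)                 # ordinal categories 1-7
math_score = 520 - 3 * (internet_use - 4) ** 2 + rng.normal(0, 40, 500)

def fit_and_aic(x, y, degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n, k = len(y), degree + 1
    rss = np.sum(resid ** 2)
    aic = n * np.log(rss / n) + 2 * k                       # Gaussian AIC up to a constant
    return coeffs, aic

_, aic_linear = fit_and_aic(internet_use, math_score, 1)
_, aic_quad = fit_and_aic(internet_use, math_score, 2)
print(f"AIC linear: {aic_linear:.1f}, AIC quadratic: {aic_quad:.1f}")
```

Under the displacement hypothesis the linear term alone should suffice, whereas a markedly better quadratic fit would point towards an optimal, intermediate amount of use.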
Sample and procedure The data that we used was combined from two sources: (1) the Economic Co-operation and Development (OECD) public PISA 2015 data repository and (2) the National Exam Data Repository managed by the Innove Foundation in Estonia. Variables regarding Internet use at school as well as sociodemographics were retrieved from the former, and national mathematics exam scores from the latter. Since the Innove Foundation was responsible for administering both PISA survey as well as national mathematics exams, the data were merged inhouse based on students' personal identification code. The data set shared with the reviewers did not include personal identifiers and was therefore fully anonymous. Merging these datasets allows to use self-report measures in combination with high-stakes exam scores from nationally standardized settings in repeated-measures study design. The PISA is a triennial international survey which attempts to evaluate education systems worldwide by testing the skills and knowledge of 15-year-old students. In the current study, we focused on the PISA 2015 Estonian sample. The PISA 2015 used a stratified systematic sample where sampling probabilities had to be proportional to the estimated number of 15-year-old students (OECD, 2016). Sampling consisted of two stages: first, schools were sampled from all schools of Estonia (which included 15-year-old students); second, students were sampled from these schools. This method should provide a high-standard sample that is representative of 15year-old students in Estonia. The PISA 2015 survey was carried out in Estonia in April 2015. Data for all of the participating countries is available on the OECD website (OECD, 2017a). The Estonian sample used in the current study included 6147 15-year-old students participating, 49.3% of whom were boys. Students were assessed in science, mathematics, and reading comprehension. These data were merged with national mathematics exam scores administered at the end of secondary school. As national exams are different across each year, and students who took the PISA test while in 8th grade took their exams a year later than students who were 9th graders at the time of the PISA survey, we excluded 8th graders from the effective sample so that we could consistently compare the results (remaining n = 4709). Then, we included only students who had responded to the Internet use, socioeconomic status, and gender variables (remaining n = 4113). The 4113 students (48.23% male, 51.76% female) had the PISA mathematics test scores. Out of them, 3758 students (48.32% male, 51.68% female) had a 9th-grade mathematics exam results from approximately 2 months after the PISA survey. Finally, 1612 students (51.24% male, 48.76% female) had a "wide" mathematics exam score, while 1174 students (32.45% male, 67.55% female) had a "narrow" mathematics exam score from 12th grade. Sample breakdown by Internet response variables and gender, along with each group's average SES, are in Supplementary Table 1. It is necessary to comment that the reason for sample attrition over time is not completely clear; it may be that some students decided not to pursue their educational path beyond 9th grade. It could also be that some of the students decided not to take the national mathematics exam in 2019 for other reasons (e.g., because of studying abroad, choosing to continue their education in a vocational school, etc.). Another important issue necessary to be mentioned is the somewhat nested nature of the data. 
While the PISA data set does include some school-level information, and it may be accounted for in analyses involving the PISA mathematics test, students in Estonia may change their school when they pursue their educational path in a secondary school. In addition, many Estonian schools only provide education until 9th grade, meaning that students who would like to pursue studying need to change their school for secondary education. Measures In the current work, we focused on Internet use and mathematics achievement variables. Nevertheless, the PISA survey data set also includes some sociodemographic variables. We used participants' gender (coded as 1 = female, 2 = male), and socioeconomic status (SES) index which consituted standardized z-scores computed from different relevant SES-related items queried in the PISA 2015 survey. SES index variable is standardized across all OECD countries that participated in the PISA study. Internet use items The study participants provided self-reported estimation for the duration of their Internet use (a) at school, (b) outside of school, and (c) on weekends. The scale for all variables was the same: 1 = No time; 2 = 1-30 min per day; 3 = 31-60 min per day; 4 = Between 1 and 2 h per day; 5 = Between 2 and 4 h per day; 6 = Between 4 and 6 h per day; 7 = More than 6 h per day. PISA mathematics test The PISA 2015 dataset (OECD, 2017a) includes mathematics achievement test results along with other educational test results. As each student responded only to a fraction of the entire assessment, other answers were imputed. The PISA student achievement dataset provides so-called "plausible values" for secondary analysis. Plausible values are multiple imputations of unobservable latent achievement for each student; details about the imputation procedure could be found in Wu (2005). Ten plausible values were given to all subjects within the framework of PISA 2015 (OECD, 2017b). Wu (2005) states that it is possible to use one of the plausible values in order to recover population parameters. However, as the current study is not modeling population parameters, it is more appropriate to use all plausible values in order to retrieve more accurate parameter estimates. Therefore, we followed the guidelines by OECD (2018) and computed the results for PISA mathematics test scores across all ten plausible values in all relevant analyses. The mathematics achievement test is meant to measure mathematics literacy (defined as students' capacity to formulate, employ, and interpret mathematics in a variety of contexts). It includes reasoning mathematically and using mathematical concepts, procedures, facts, and tools to describe, explain, and predict phenomena (OECD, 2016). Mathematics national exam scores for 9th graders (2018) The mandatory 9th-grade mathematics national exams are administered to students in Estonia in spring during the last period of the 9-grade education program. In the current case, the students took the exam in spring 2015. In Estonia, a mathematics exam is compulsory for graduating from primary school (9-grade education). This exam covers all the topics taught in primary school. The 9th grade mathematics exams are scored on the scale of 0 to 100 points. Mathematics national exam scores for 12th graders (2018) The mandatory mathematics national exams at the end of 12th grade are administered to students in Estonia in spring during the last period of the secondary education program. 
In the context of the current study, the students took the exam in spring 2018. In Estonia, a mathematics exam is compulsory for graduating from secondary school. However, students can choose whether to take the simplified ("narrow") or more comprehensive ("wide") mathematics exam. The latter may have more relevance when continuing one's education at the next, higher education stage. Both exams cover all topics taught in secondary school (e.g., geometry and trigonometry, among others), while tasks in the narrow mathematics exam are less comprehensive. Both exams are scored on a scale of 0 to 100 points.

Analysis

The data were analyzed in R version 4.0.3 (R Core Team, 2020), run in RStudio. In order to compute the results that include PISA 2015 mathematics test scores, we followed the guidelines by OECD (2018) and estimated all statistics across ten plausible values of those mathematics scores. We conducted Pearson correlation analysis (p-values adjusted with Holm's method) to investigate the relationship between 9th- and 12th-grade national mathematics exam scores and SES using the rcorr.adjust() function from the RcmdrMisc package v 2.7.1 (Fox, 2020). Welch's two sample t tests from R's base package were used to compare average 9th- and 12th-grade national mathematics exam scores across gender. Cohen's d-s as group difference effect size measures were computed using the cohensD() function from the lsr package v 0.5 (Navarro, 2015). In order to compare groups of students based on their self-reported Internet use at and outside of school and on weekends, we computed a series of analyses of covariance (ANCOVAs) using the ancova() function from the jmv package v 1.2.23 (Selker, Love, & Dropmann, 2020), controlling for gender and SES effects. These ANCOVAs were computed with 9th- and 12th-grade national mathematics exam scores as dependent variables in each model. In addition, Holm's post hoc tests were used for computing the differences between each pair of groups, complemented with effect size estimates (Cohen's d-s) for group differences.

Descriptive statistics for and correlations between SES and mathematics scores

In Table 1 below, we present the descriptive statistics and correlations for SES and mathematics scores. As can be seen from Table 1, SES is positively correlated with all mathematics variables, yielding small-to-medium effect sizes. PISA mathematics test scores have rather medium-to-large positive associations with 9th- and 12th-grade exam scores. Ninth-grade national mathematics exam scores have strong positive correlations with 12th-grade exam scores. The results from Welch's two sample t tests showed that male students (M = 534.72, SD = 80.55) had higher scores in the PISA 2015 mathematics test (estimated across all ten plausible values) than female students.

Group differences in mathematics scores based on internet use

Below, the results for ANCOVAs are presented. Figure 1 depicts group differences based on self-reported Internet use at and outside of school and on weekends across the PISA mathematics test (estimated across ten plausible values), 9th-grade national mathematics exam, and 12th-grade (both "wide" and "narrow") mathematics exam scores, controlled for potential SES and gender effects (estimated marginal means are depicted on the y-axis). Holm's post hoc comparisons for 9th- and 12th-grade exam scores are presented in Supplementary Table 2. Overall, the results indicate either curvilinear or linearly decreasing scores with increasing self-reported Internet use.
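The estimation strategy described in the Analysis subsection above can be sketched roughly as follows. The study's own analyses were carried out in R (RcmdrMisc, lsr, jmv); this Python sketch only illustrates the two core ideas, estimating each statistic across all ten plausible values and then averaging, and adjusting group comparisons for SES and gender. Column names and the data frame are hypothetical.

```python
# A rough Python sketch of the plausible-value and covariate-adjustment logic
# described in the text; it is not the paper's R pipeline.
import numpy as np
import pandas as pd

def mean_by_group_across_pvs(df, group_col, pv_cols):
    """Average of per-plausible-value group means (estimate per PV, then average),
    following the OECD (2018) recommendation described in the text."""
    per_pv = [df.groupby(group_col)[pv].mean() for pv in pv_cols]
    return pd.concat(per_pv, axis=1).mean(axis=1)

def adjusted_group_means(df, score_col, group_col, covariates):
    """ANCOVA-style adjustment: regress the score on the covariates, then average
    (residual + grand mean) within each Internet-use group. This is a
    simplification of the estimated marginal means reported in the paper."""
    X = np.column_stack([np.ones(len(df))] +
                        [df[c].to_numpy(dtype=float) for c in covariates])
    y = df[score_col].to_numpy(dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return pd.Series(resid + y.mean(), index=df.index).groupby(df[group_col]).mean()

pv_cols = [f"PV{i}MATH" for i in range(1, 11)]   # hypothetical plausible-value columns
# usage (hypothetical data frame 'pisa'):
# mean_by_group_across_pvs(pisa, "internet_use_at_school", pv_cols)
# adjusted_group_means(pisa, "grade9_exam", "internet_use_at_school", ["ses", "gender"])
```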
As can be observed below in Tables 2, 3, 4, and 5, SES was a significant covariate in all models predicting mathematics results. In addition, significant group differences due to Internet use yielded small (based on Cohen, 1988, and Sawilowsky, 2009) or medium (based on Kraft, 2020) effects. Below, the results for each model are presented. Table 2 shows that Internet use variables as well as SES and gender were all significant predictors of PISA mathematics test scores in all models. It could be observed that the effects of SES are consistent throughout all models, yielding medium size. While gender has consistently small effects, Internet use at school has medium-sized, and Internet use outside of school and on weekends have small-sized associations with PISA mathematics test scores.

Fig. 1 Estimated marginal means of mathematics scores based on Internet use at and outside of school and on weekends, controlled for SES and gender. Notes: On x-axis (Self-reported Internet Use in 2015): 1 = "No time"; 2 = "1-30 min per day"; 3 = "31-60 min per day"; 4 = "Between 1 and 2 h per day"; 5 = "Between 2 and 4 h per day"; 6 = "Between 4 and 6 h per day"; 7 = "More than 6 h per day". G12 (Narrow) = Grade 12 national mathematics exam (narrow) score; G12 (Wide) = Grade 12 national mathematics exam (wide) score; G9 = Grade 9 national mathematics exam score; PISA PV = PISA 2015 mathematics test scores (estimated across ten plausible values as suggested in OECD (2018)). The 95% CIs are depicted.

Post hoc test results (in Supplementary Table 2) show that, in general, some Internet use at school has positive associations with PISA mathematics test scores, whereas no Internet use or 6+ h per day of Internet use at school tends to be negatively associated with PISA mathematics test scores. In general, one may also notice that there may be a sweet spot of Internet use at school after which there seems to be a linear decline in PISA mathematics scores. Within Internet use outside of school, somewhat similar patterns could be observed: students who reported no or 6+ h per day of Internet use outside of school had lower PISA mathematics test scores than students who reported some Internet use outside of school. Finally, the similar patterns described above tended to be also present in relation to Internet use on weekends. Here, too, the significant group differences tended to emerge either between students who reported not using the Internet on weekends and students who reported using the Internet on weekends between 2 and 4 h a day, or between students who reported using the Internet for 6+ h per day on weekends and those who tended to use the Internet on weekends for a couple of hours per day. In general, therefore, students who either reported that they did not use the Internet or used it for 6+ h per day tended to score lower than other students.

Ninth-grade mathematics national exam scores and internet use

The results in Table 3 show that, as with PISA mathematics scores, 9th-grade mathematics national exam scores were predicted by Internet use variables as well as SES and gender in all models. While potential gender effects were small, SES yielded small-to-medium effects. Internet use variables, too, had small associations with 9th-grade mathematics national exam scores. The highest partial eta-squared for Internet use was in the model regarding Internet use at school.
Holm's post hoc test results (in Supplementary Table 2) indicated similar patterns in 9th-grade national exam scores as observed in the PISA mathematics test: in general, moderate Internet use was associated with better outcomes than no or 6+ h per day of Internet use at school. Somewhat similar patterns could be observed for Internet use outside of school. Interestingly, when looking into Internet use on weekends, only some of the differences that were significant included 6+ h per day of Internet use, indicating poorer outcomes than when the Internet was used for a couple of hours per day on weekends.

Twelfth-grade "wide" mathematics national exam scores and internet use

The results in Table 4 are somewhat different than in the case of PISA and 9th-grade mathematics national exam scores. While SES remains a significant covariate predicting 12th-grade "wide" mathematics exam scores (with small effect sizes), gender is not a significant covariate in any of the models. While Internet use variables were significant predictors, the effect sizes were small, with the highest partial eta-squared in the model for Internet use at school as predictor. Results of Holm's post hoc tests (in Supplementary Table 2) show that across Internet use at school, there seems to be a relatively linear decline in mathematics scores with the increase of Internet use. Students who reported using the Internet at school for 6+ h per day scored lower on the "wide" mathematics national exam than students who reported not using the Internet at school. In general, students reporting 6+ h per day of Internet use at school tended to have the lowest scores in mathematics exams. While there were no group differences in "wide" mathematics scores between students who reported not using or moderately using the Internet outside of school, 6+ h per day of Internet use outside of school was associated with poorer mathematics exam scores. Finally, there were no group differences in the "wide" mathematics exam scores across Internet use on weekends.

Twelfth-grade "narrow" mathematics national exam scores and internet use

According to Table 5, SES and gender were significant covariates predicting 12th-grade national "narrow" mathematics exam scores (with small-sized effects). While Internet use at and outside of school had small yet significant associations, Internet use on weekends was not a significant predictor of these mathematics exam scores. Across these exam scores, the Holm's post hoc test results (in Supplementary Table 2) showed that using the Internet at school for up to 30 min tended to be associated with better "narrow" mathematics exam outcomes than using the Internet for more than 2 h per day at school. Students reporting 6+ h per day of Internet use outside of school scored lower than students who reported using the Internet for between 1 and 2 h per day. There were no group differences in "narrow" mathematics scores based on Internet use on weekends.

Discussion

The main aim of the present study was to investigate how students' Internet use at and outside of school and on weekends predicts mathematics performance across several years in low- and high-stakes testing conditions. Below are some of the insights into the findings as well as the contribution and limitations of this study. We expected students who report using the Internet the most (at and outside of school, and on weekends) to have the lowest average scores in mathematics tests.
This was generally the case in all exams across all Internet use conditions-with the exception of Internet use on weekends as a predictor in 12th-grade mathematics exam scores. The results of this study are mostly in line with some previous, more general findings that have demonstrated the small negative relationships between Internet-based technology use and academic achievement (Kates, Wu, & Coryn, 2018;Lepp, Barkley, & Karpinski, 2015;Rozgonjuk, Saal, et al., 2018), as well as in the domain of mathematics education (Bulut & Cutumisu, 2017;Eickelmann, Gerick, & Koop, 2017;Hu, Gong, Lai, & Leung, 2018;Skryabin, Zhang, Liu, & Zhang, 2015). Yet-and especially in the case of Internet use at school-it also seemed that both the students who reported using the Internet for 6+ h per day at school as well as students who reported not using the Internet at all had lower mathematics test scores than students who reported some Internet use. This finding is in line with the digital Goldilocks hypothesis which states that there may be an optimal time of ICT use which may be associated with favorable outcomes (Przybylski & Weinstein, 2017). Moreover, these patterns seemed to be highly similar across different mathematics tests over a 3-year period. What do these findings tell us? From one hand, these results support the notion that perhaps general Internet use, especially for a very long duration, may not be unidirectionally beneficial to mathematics performance. Perhaps too much Internet use may lead to activities which are irrelevant to learning objectives and outcomes, therefore resulting in poorer performance on mathematics tests. Using Internet during classes could potentially contribute to multitasking behavior while studying, which in turn could lead to distraction and be detrimental for understanding class context (Sana, Weston, & Cepeda, 2013). In fact, a recent study demonstrated that interruptions due to pop-up notifications are associated with more surface approach to learning (Rozgonjuk, Elhai, Ryan, and Scott, 2019). On the other hand, it should also be stressed that these associations, although negative, are relatively small. Of course, it also depends on what exactly constitute small effects. Recently, Kraft (2020) argued that perhaps research in the field of education should not rely on common benchmarks proposed by Cohen (1988), since effect sizes in educational research may be small yet still pragmatically relevant. By benchmarks proposed in Kraft (2020), several group differences yielded medium-sized effects. This said, while the negative associations of Internet use seem to be present in some conditions, there are probably other more important aspects of education that drive the mathematics performance of a student. Mapping these variables should be in focus of subsequent studies. Although it may be hardly plausible to assume that a teenager's Internet use may affect their mathematics performance after 3 years, the results seem to suggest that there may be some substance to it that deserves further attention. On one hand, it could be that Internet use patterns do not change dramatically over time, potentially explaining these findings. Of course, although the current study has a repeated-measures approach, this causal hypothesis cannot be answered with a high degree of validity based on the data used in this study. On the other hand, there may be an alternative explanation for these findings. 
Importantly, it should be stressed that a student's socioeconomic status (SES) was positively associated with all mathematics scores both in bi-and multivariate analyses. Furthermore, with the exception of "wide" mathematics exam scores, gender was also a significant covariate in models predicting mathematics performance. While boys scored higher in PISA mathematics test, girls had higher scores in 9th-and 12th-grade "narrow" mathematics exam scores. These findings are in line with common results in educational research, including mathematics, where SES and gender are associated with academic performance (Lindberg, Hyde, Petersen, & Linn, 2010;Sirin, 2005;Voyer & Voyer, 2014;White, 1982). It is nevertheless curious that even when mathematics results were controlled for these covariates in the current study, some of the relationships between Internet use variables and mathematics test scores remained significant, albeit small. In general, no Internet use or using the Internet for a very long time (e.g., 6+ h per day) was associated with poorer outcomes. The main contributions of the current study regard the nature of the data, and some novel findings. First, the PISA 2015 sampling standards are very high and resemble the whole population of 15-year-old students in Estonia. Therefore, this sampling method provides more generalizability with regards to results, because one of the more common limitations in survey-based studiesconvenience sampling and self-selection bias-are mitigated. Second, in addition to self-reported Internet use, the data included actual (that is, not self-reported) results for mathematics performance. Furthermore, these results were obtained for both low-stakes (PISA 2015 mathematics test) as well as high-stakes tests (9th-and 12th-grade national mathematics exams). This should support the validity of findings. While PISA 2015 mathematics test scores correlated strongly with national mathematics exam scores, the patterns in the relationships between Internet use variables and mathematics scores were similar, regardless of the low-or high-stakes testing conditions, and even when the possible effects of students' SES and gender were taken into account. Third, mathematics results from three time points were used, further increasing validity and reliability of findings. While repeated-measures design provides a stronger case for causality in the reported relationships (Cole & Maxwell, 2003;Gollob & Reichardt, 1987), in order to establish a causal link, chronological order of measurements may not be sufficient, and a (quasi-)experimental study should be conducted to replicate these findings with more robust confidence. Therefore, we also want to explicitly state that our results do not support strong causal interpretation. Of more theoretical contributions, the results show that the association between Internet use and mathematics achievement could be predicted over a period of three years, both across low-and high-stakes testing conditions. Furthermore, the results also seem to indicate that it may be necessary to distinguish Internet use at and outside of school (also Internet use on weekends) in this line of research, since the relationship patterns with mathematics achievement may vary due to that. This is also in line with some previous findings (Petko, Cantieni, & Prasse, 2017;Skryabin et al., 2015). The limitations of the study ought to be addressed as well. 
Firstly, although the mathematics scores were not self-reported, Internet use variables did rely on selfreports. Studies have shown that people are not very accurate in estimating their digital technology use duration and frequency in relation to objectively measured device use (Boase & Ling, 2013;Kobayashi & Boase, 2012;Loid, Täht, & Rozgonjuk, 2020;Rozgonjuk, Levine, Hall, & Elhai, 2018). Therefore, future studies should aim towards documenting, or tracking, the actual digital device use in classroom, e.g., as done in learning analytics (Schneider, Reilly, & Radu, 2020) and digital phenotyping in other fields (Baumeister & Montag, 2019;Rozgonjuk, Elhai, & Hall, 2019). This could provide a more valid picture regarding the potential effects of digital technology use in relation to mathematics achievement. Secondly, as mentioned earlier, the causal interpretation regarding the effects of Internet use at school is largely based on chronology of measurements-this, of course, does not necessarily mean that there is a direct causal effect. It could also be that other factors influence both Internet use (or how the student recalls their typical Internet use) and mathematics performance. For instance, individual differences could influence both (self-reported) technology use and academic outcomes. For example, more conscientious students could engage in less non-purposeful digital technology use and are also more disciplined to attain to learning tasks. In addition, one may also hypothesize that mathematics anxiety hinders learning mathematics and may motivate using ICTs instead of learning. It has recently been demonstrated that mathematics anxiety is associated with more surface approach to learning , a factor also associated with problematic ICT use (Alt & Boniel-Nissim, 2018). A third limitation is using rather general Internet use measures and correlating these self-reported estimates with mathematics scores. In order to gain insights into the potential effects of digital technology on mathematics, it would be highly informative to include data about digital technology use in mathematics learning, specifically. It could be questionable whether students really spend 6+ h per school day online. The question asking about the duration of Internet use at school does not specify whether the activities spent on the Internet are school-related or not. Therefore, it could be that students who reported using 6+ h of Internet per school day also included Internet use outside of coursework (e.g., during recess, etc) in their estimation. Clearly, this is a limitation of the study, and should be taken into account in future investigations. Finally, it should be noted that it was not possible to account for the nested nature of the data for the national exam scores, and the reason for sample attrition over time was not completely clear. Further research should address these limitations. In conclusion, this study demonstrates that the relationship between Internet use and mathematics achievement is rather curvilinear, but students who reported using the Internet for 6+ h per day tended to have, on average, lower mathematics test scores than students reporting less Internet use. This said, it is also interesting that these patterns may have variations depending on where and when Internet is used-either at or outside of school, or on weekends.
Verification of Space Weather Forecasts issued by the Met Office Space Weather Operations Centre The Met Office Space Weather Operations Centre was founded in 2014 and part of its remit is a daily Space Weather Technical Forecast to help the UK build resilience to space weather impacts; guidance includes four day geo-magnetic storm forecasts (GMSF) and X-ray flare forecasts (XRFF). It is crucial for forecasters, users, modelers and stakeholders to understand the strengths and weaknesses of these forecasts; therefore, it is important to verify against the most reliable truth data source available. The present study contains verification results for XRFFs using GOES-15 satellite data and GMSF using planetary K-index (Kp) values from the GFZ Helmholtz Centre. To assess the value of the verification results it is helpful to compare them against a reference forecast and the frequency of occurrence during a rolling prediction period is used for this purpose. Analysis of the rolling 12-month performance over a 19-month period suggests that both the XRFF and GMSF struggle to provide a better prediction than the reference. However, a relative operating characteristic and reliability analysis of the full 19-month period reveals that although the GMSF and XRFF possess discriminatory skill, events tend to be over-forecast. Introduction In recent decades there have been significant technological advances upon which governments, industries and organizations have become increasingly dependent. Many of these advances are vulnerable to space weather to the extent that security and/or safety could be severely compromised when significant events occur. After severe space weather was added to the UK's National Risk Register of Civil Emergencies in 2011, the UK government sought to establish a 24/7 space weather forecasting centre and the Met Office Space Weather Operations Centre (MOSWOC) was officially opened on 8 th October 2014. Part of MOSWOCs remit is to issue a daily Space Weather Technical Forecast (SWTF) to help affected UK industries and infrastructure build resilience to space weather events; issued at midnight with a midday update, it contains a: • space weather activity analysis; • four-day solar activity summary; • geo-magnetic storm forecast (GMSF); • coronal mass ejection (CME) warning service; • X-ray flare forecast (XRFF); • solar radiation storm forecast; and • high energy electron event forecast. Verification of these products is crucially important for forecasters, users, modelers and stakeholders because it facilitates an understanding of the strengths and weaknesses of each forecast product. Ideally verification should be performed in near-real time to enable instant forecaster feedback because this enables: a) necessary corrections to be made in a timely fashion; and b) operational forecasters to use the results to further develop their forecasting skills. NASA Community Coordinated Modelling Center on the implementation of the Flare Scoreboard (https://ccmc.gsfc.nasa.gov/challenges/flare.php); a system to enable the automatic upload of flare predictions and provide immediate verification to intercompare the forecasts from participating organisations; 2. the EU on the development project Flare Likelihood And Region Eruption foreCASTing (FLARECAST; http://www.flarecast.eu) to automatically forecast and verify X-ray flares. 
Some initial MOSWOC flare forecast verification has been undertaken as part of the FLARECAST project (Murray et al., 2017); however, the real-time operational verification system that has now been developed for use by the MOSWOC forecasters was not fully explored in this work. Operational verification of most SWTF products is planned and progress to date includes investigations into the skill of GMSFs, XRFFs and Earthbound-CME warnings, with near-real-time verification of the former two products already operational using the Warnings Verification System (WVS) (Sharpe, 2015) and the Area Forecast Verification System (AFVS) (Sharpe, 2013). The methodologies used to verify GMSFs and XRFFs are outlined in Section 2 and results for the period between April 2015 and October 2016 are presented in Section 3. Section 4 contains brief conclusions and an outline of further work.

Geo-Magnetic Storm Forecasts

The GMSF is both probabilistic and multi-category, each category referring to a different geomagnetic activity level, measured using the K index at 13 observing sites stationed across the globe, from which a planetary K value (Kp) is evaluated. Forecasters issue GMSFs by first analyzing images to identify CMEs and coronal holes and then using the Wang-Sheeley-Arge Enlil model (Edmonds, 2013) to identify high-speed solar wind streams and CMEs. However, associated forecasts of GMSs are limited because values of the z-component of the sun's magnetic field are unknown (except as measured by the ACE/DSCOVR satellite). One further source of information is from an Autoregressive Integrated Moving Average model of Kp values; however, there is no model which accurately predicts Kp fluctuations. Consequently, forecasting is essentially a subjective process which continues to rely heavily upon the experience of operational forecasters. The columns of Table 1 from left to right display: 1. a single word description of the GMS type; 2. the G-scale level associated with this type of GMS; 3. whether each type of GMS has been observed during the previous 24 h period; 4-7. forecast probabilities for the likelihood that each type of GMS will occur during four consecutive days into the future. The probabilities in Table 1 refer to the chance that the GMS level will be reached or exceeded at least once during the 24 h period. Therefore, column 4 forecasts that the probability associated with G0 is 100% - 55% = 45%; with G1/G2 is 55% - 5% = 50%; with G3 is 5% - 0% = 5%; and with G4 and G5 is 0%. The verification metric most commonly associated with multi-category probabilistic forecasts is the Ranked Probability Score (RPS) (Epstein, 1969, and Murphy, 1971). The RPS is defined by

$$\mathrm{RPS} = \frac{1}{K-1}\sum_{i}\left[F(G_{i}) - O(G_{i})\right]^{2}, \qquad (1)$$

where in the present case K = 5 is the number of forecast categories, $F(G_{i})$ is the forecast probability that the maximum GMS level to be observed during the 24 h period is ≤ $G_{i}$ (where i = 0, 1/2, 3, 4 or 5) and $O(G_{i})$ is 1 if the maximum observed level is ≤ $G_{i}$ and 0 otherwise. The RPS (which ranges from 1 to a perfect score of 0) is calculated separately for every day of each forecast and a mean value ($\overline{\mathrm{RPS}}$) is obtained by simply averaging the values calculated for a large number of forecasts; 90% confidence intervals are produced using simple bootstrapping with replacement. The RPS provides a very valuable approach to the problem of verifying multi-category probabilistic forecasts; however, a reference is required against which to benchmark the performance. Three common reference forecast choices are: random chance, persistence and climatology.
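As a concrete illustration of Equation (1), the sketch below converts the exceedance probabilities of the Table 1 example into the five category probabilities and evaluates the RPS for a single day. The observed level used here and the helper names are assumptions made for the example rather than part of the operational system.

```python
# A minimal sketch of the RPS of Equation (1) for one forecast day, assuming
# the five GMSF categories G0, G1/2, G3, G4 and G5.
import numpy as np

# Exceedance probabilities from the Table 1 example (day 1 column):
# P(level reached or exceeded at least once in the 24 h period).
p_exceed = {"G1": 0.55, "G3": 0.05, "G4": 0.00, "G5": 0.00}

# Convert to category probabilities P(max level = G0, G1/2, G3, G4, G5),
# mirroring the 45% / 50% / 5% / 0% / 0% breakdown given in the text.
p_cat = np.array([
    1.00 - p_exceed["G1"],             # G0
    p_exceed["G1"] - p_exceed["G3"],   # G1/2
    p_exceed["G3"] - p_exceed["G4"],   # G3
    p_exceed["G4"] - p_exceed["G5"],   # G4
    p_exceed["G5"],                    # G5
])

def rps(category_probs, observed_index):
    """Equation (1): squared differences between the cumulative forecast and
    cumulative observation distributions, averaged over K - 1 categories."""
    f_cum = np.cumsum(category_probs)
    o_cum = (np.arange(len(category_probs)) >= observed_index).astype(float)
    return np.sum((f_cum - o_cum) ** 2) / (len(category_probs) - 1)

# If the observed maximum level on this day was G1/2 (index 1):
print(rps(p_cat, observed_index=1))    # ~0.051
```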
Short-term climatology (subsequently referred to as a prediction-period) has been chosen for the reference in the present study and $\overline{\mathrm{RPS}}_{\mathrm{ref}}$ has been evaluated by replacing $F(G_{i})$ in Equation (1) with the frequency of occurrence of GMSs over a prediction period encompassing the most recent 180 days. 180 days was chosen following an investigation (outlined in Section 3) which revealed it to be an accurate predictor period for GMSs. The use of a reference forecast enables the Ranked Probability Skill Score (RPSS), defined by

$$\mathrm{RPSS} = 1 - \frac{\overline{\mathrm{RPS}}}{\overline{\mathrm{RPS}}_{\mathrm{ref}}}, \qquad (2)$$

to be evaluated; this score ranges from -∞ to a perfect score of 1, with RPSS > 0 implying that the forecast is more skilful than the reference. Confidence intervals for this statistic (calculated using bootstrapping with replacement) indicate whether there is any statistically significant evidence to suggest that the forecast is more skillful than the reference. Verification of the GMSF has been performed for each forecast range by the AFVS (Sharpe, 2013) using daily maximum values of Kp. The AFVS was originally designed to verify a range of forecast categories against a truth data distribution (representing the conditions throughout an area); however, when presented with a truth data source containing only a daily maximum it can also be used to verify a daily maximum forecast like the GMSF. An alternative verification approach is to treat the GMSF as a probabilistic warning service, verifying the forecast probabilities associated with each GMS level separately. In practice, however, only categories ≥ G1 may be evaluated, because the more severe levels occur so rarely that robust statistics cannot be obtained over the available time frame. In the present study the Warnings Verification System (WVS) (Sharpe, 2015) has been used to verify the GMSF as a service, using Relative Operating Characteristic (ROC) plots and reliability diagrams (Jolliffe and Stephenson, 2012). The WVS is a flexible system originally developed to verify terrestrial weather; this system allows the analysis of near-hits by way of flexing thresholds in terms of space, time, intensity and confidence. However, in the present study flexing has only been applied in terms of intensity, so that Kp values of 4-, 4o and 4+ are each categorized as a 'low-miss' except when they occur during a warning, when they are categorized as a 'low-hit'. The only other flex possibly appropriate to this analysis is time, because confidence flexing cannot be applied (there is only one definitive Kp value) and spatial flexing cannot be applied (no near-Earth Kp values are available against which to assess the forecast).

Very Active | X Class | N | 2 | 2 | 2 | 2

Table 2. XRFF contained within the 00Z SWTF on 21st July 2016.
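Before moving on to the X-ray flare forecasts, the skill score of Equation (2) and its bootstrap confidence intervals can be sketched in the same style as the RPS example above. The daily RPS arrays below are placeholders, with the reference values standing in for the 180-day climatological prediction period.

```python
# A rough sketch of the RPSS of Equation (2) with bootstrap confidence
# intervals over forecast days; all values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
rps_forecast = rng.uniform(0.0, 0.20, size=365)    # placeholder daily RPS values
rps_reference = rng.uniform(0.0, 0.25, size=365)   # placeholder climatology RPS values

def rpss(rps_f, rps_ref):
    """Equation (2): skill relative to the reference; > 0 beats the reference."""
    return 1.0 - rps_f.mean() / rps_ref.mean()

def bootstrap_ci(rps_f, rps_ref, n_boot=2000, level=0.90):
    """Simple bootstrapping with replacement over forecast days, as described above."""
    n = len(rps_f)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        stats[b] = rpss(rps_f[idx], rps_ref[idx])
    return np.quantile(stats, [(1 - level) / 2, 1 - (1 - level) / 2])

print(rpss(rps_forecast, rps_reference), bootstrap_ci(rps_forecast, rps_reference))
```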
These active region probabilities are combined to give a fulldisk forecast, i.e., the chance of a flare occurring somewhere on the solar disk in the next 24 hours. The resulting model probability can be edited by the MOSWOC forecaster using their expertise before being issued as the Day 1 forecast as shown in Table 2. The Day 2-4 forecasts are purely based on forecaster expertise. More details about the forecasting method can be found in Murray et al, 2017. The XRF classes below M-class are A-class, B-class and C-class; however, these types of flare are not included in the forecast. In the soft X-ray range, flares are classified as A-, B-, C-, M-, or X-class according to the peak flux measured near Earth by the GOES spacecraft over 1-8 Å (in Wm -2 ). Each class has a peak flux ten times greater than the preceding one, with X-class flares having a peak flux of order 10 -4 Wm -2 ." During each 24 h period of Table 2 an M-class flare is predicted to occur with a probability of 28% and an X-class flare with a probability of 2%. There is a subtle, yet important difference between the values contained within Table 1 and Table 2; in the former the probabilities denote the chance of exceeding each level, whereas in the latter the probabilities indicate the chance that each class will be observed at least once during the 24 h period. Consequently, using ( ) and ( ) to denote the probabilities associated with X-class and M-class flares (as they appear in Table 2), it is theoretically possible for ( ) to be greater than ( ); whereas, ( 5) > ( 4) is impossible. Although the values in Table 2 denote the probabilities of observation (rather than exceedance), user interest will lie mainly in the maximum flux class to occur during each 24 h period; therefore, some manipulation of the values displayed in Table 2 is required. The following paragraph derives expressions for (maximum flux is A,B or C-class), (maximum flux is M-class) and (maximum flux is X-class) from the probabilities that appear in Table 2 (denoted by (M) and (X)). Evaluating the probability that the maximum flux is X-class is simple; because X is the maximum possible flux class; (maximum flux is X-class) = ( ). (1) To calculate the probability that the maximum flux is M-class it is first necessary to observe that where ( J ) = 1 − P(M) is the probability that M-class will not occur. The XRF truth data source is long wave radiation observations reported by the Geo-Orbiting Earth Satellite (GOES-15) which takes measurements every 60-seconds. X-class flares are very rare, for example Wheatland et al (2005) noted that out of 10,226 days from 1975 to 2003, M-class flares occurred on ~25% of those days whereas X-class events occurred on only ~4% of those days. Analysis for the present study reveals that X-class flares occurred on just over 2% of days between 2010 and 2015. Therefore, it is relatively safe to assume that the first term on the right hand side of Equation (2) is zero since it is virtually impossible for the minimum 60-second observation during a 24 h period to be X-class and consequently, The final expression to obtain is (maximum flux is M-class); since Equations (1), (3) and (4) are used to calculate maximum XRFF probabilities which are used by the AFVS to verify the skill using the RPS and the RPSS. For the latter, a reference forecast is necessary and (as with the GMSF) the frequency of XRF class activity during a rolling prediction period is used for this purpose. 
The analysis outlined in Section 3 suggest that a prediction period of 120 days is a suitable choice. The XRFF is also assessed by the WVS, verifying separately the forecast probabilities associated with M-class and X-class flares; in practice however, only M-class flares can be evaluated because during the trial period X-class flares occurred too rarely to facilitate the calculation of robust statistics. As was the case for GMS verification, only intensity flexing is applied, for which a low-hit threshold of 10 -6 Wm -2 (C-class) is used. Results Two 4-day SWTFs are issued each day; a main forecast is issued at 00Z and an update at 12Z. It is inappropriate to analyze the performance on day 1 by amalgamating the update with the main forecast because the day 1 component to this update is a 12 h (rather than a 24 h) forecast; therefore, only 00Z forecasts are considered in the present study. Geo-Magnetic Storms The RPS is used to assess the skill of MOSWOC forecasts; however, as discussed in Section 2, a reference forecast is required against which to benchmark the score by calculating the RPSS. Arguably the simplest (and most basic) choice is random chance since its production requires no prior knowledge or information. The skill associated with a forecast generated by random chance is usually low and consequently it does not predict events well. However, despite this random chance is used (implicitly) in a number of popular verification statistics such as the Equitable Threat Score and the Peirce and Heidke Skill Scores (Jolliffe and Stephenson, 2012). Persistence is another popular reference choice, again (no doubt) because it requires little prior knowledge or information; persistence forecasts usually predict that tomorrow will be the same as today. When events are rare or conditions are benign persistence can produce a very favorable score; however, it is a completely ineffective predictor of the onset of severe events. The third most popular choice for a reference forecast is climatology. This option is less common because it requires prior knowledge of the conditions over a long time frame; indeed, in meteorology it is common to calculate climatology over a 30 years period. This reference sets the probability of each forecast category to its climatological frequency of occurrence. Usually the climatological period is fixed in advance; for example, in meteorology it is common to compare the latest season against the distribution formed by accumulating each corresponding season over the 30year period between 1981 and 2010 (National Climate Information Centre, 2017). The climatology of solar activity is dissimilar to meteorological climatology, so it is not valid to follow this methodology exactly; however, it is unclear which prediction period is most appropriate. Consequently, different period lengths (from 30 to 360 days) have been analyzed to obtain an accurate predictor. For each prediction period definitive GFZ data was used to calculate the frequency of occurrence of each GMS-level; however, GFZ Kp values are only available following a one-month (approximately) latency period (although SWPC produce real time estimates). The near-real time nature of GMS forecast verification precludes the use of truth data which is unavailable to the forecaster; therefore, a one-month latency period has been built into this analysis. Extensive checking confirmed that each prediction period length appeared to produce a similar 12-month rolling mean RPS value. 
Therefore, the minimum RPS value was calculated on only the first day of every month throughout the 16-year trial period. This method identified 180 days as the best performing prediction period length. Over these rolling 180-day prediction periods, the maximum daily GMS level was:
• G0 on between 77.8% and 88.9% of days;
• G1-2 on between 11.1% and 19.4% of days;
• G3 on between 0% and 2.8% of days;
• G4 on between 0% and 1.7% of days; and
• G5 was not observed.
Although it is inappropriate to calculate the RPSS for individual forecasts, monthly or annual values are available via Equation (2) following an evaluation of $\overline{\mathrm{RPS}}$ and $\overline{\mathrm{RPS}}_{\mathrm{ref}}$, the latter being calculated by substitution of the PDFs in Figure 1 for the forecast. The lower tail of the confidence intervals for October 2016 does not intersect the green no-skill (RPSS = 0) line, implying evidence at the 90% level to indicate that the skill during this 12-month period is greater than that associated with a rolling 180-day prediction period of GMS activity. However, all remaining confidence intervals intersect the green line, indicating that there is currently no evidence to suggest that these forecast days are more skillful than a daily 180-day prediction period at identifying the correct maximum daily GMS level. Figure 3 displays Relative Operating Characteristic (ROC) plots calculated using GMSFs issued between April 2015 and October 2016; these plots describe the skill associated with each day of the forecast at correctly discriminating the days on which Kp reached or exceeded 5- (G1). A ROC curve is simply a plot of the Hit Rate (the proportion of events that were forecast) against the False Alarm Rate (the proportion of non-events that were incorrectly forecast), both of which range between 0 and 1. The Hit Rate is positively orientated whereas the False Alarm Rate is negatively orientated. Each point on a ROC curve represents the value of these two statistics at a different probability level. If action is taken when the forecast probability of an event is low, the Hit Rate will be relatively large because events are forecast more frequently; however, the False Alarm Rate will also be relatively large because many of these forecasts will be false alarms. As the action/no-action forecast probability threshold is increased the values of both statistics reduce (tending to zero when action is never taken) and a ROC curve is formed by drawing a line through all these points. The further this curve resides above and to the left of the leading diagonal, the more skill the forecast has at correctly distinguishing events from non-events; however, a curve that resides close to the diagonal indicates that the forecast cannot distinguish events from non-events. Although the WVS assesses the performance of all levels identified in the GMSF, only G1/2 is considered in the present study because the performance statistics associated with more severe levels are insufficiently robust for detailed analysis due to their low base rates. There are three ROC-curves in each sub-plot: the green curve represents standard (un-flexed) verification methodology, whereas the black lines apply flexing using low-hit and low-miss categories. All the points within each sub-plot of Figure 3 reside above the grey diagonal no-skill line, indicating that each day of the GMSF has skill at discriminating events of G1 or above.
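The sketch below illustrates how ROC points of this kind could be assembled from daily forecast probabilities of G1+ and observed maximum Kp, including one plausible reading of the un-flexed, inclusive-flexed and exclusive-flexed treatments of the Kp 4-, 4o, 4+ band that are discussed next. It is not the WVS implementation; the numerical Kp encoding and the threshold grid are assumptions made for the example.

```python
# A simplified, illustrative construction of (False Alarm Rate, Hit Rate)
# points over probability thresholds, with optional intensity flexing.
import numpy as np

KP_4_MINUS = 4.0 - 1.0 / 3.0   # Kp "4-" expressed on the numeric Kp scale
KP_5_MINUS = 5.0 - 1.0 / 3.0   # Kp "5-" (the G1 threshold)

def roc_points(prob, max_kp, mode="unflexed", thresholds=np.linspace(0.05, 0.95, 10)):
    event = max_kp >= KP_5_MINUS                     # days reaching G1 or above
    low_band = (max_kp >= KP_4_MINUS) & ~event       # days with max Kp of 4-, 4o or 4+
    points = []
    for t in thresholds:
        warned = prob >= t
        if mode == "inclusive":                      # threshold effectively lowered to 4-
            ev = event | low_band
            hits, misses = (warned & ev).sum(), (~warned & ev).sum()
            fas, crs = (warned & ~ev).sum(), (~warned & ~ev).sum()
        elif mode == "exclusive":                    # low-hits count; low-misses set aside
            hits = (warned & (event | low_band)).sum()
            misses = (~warned & event).sum()
            fas = (warned & ~(event | low_band)).sum()
            crs = (~warned & ~(event | low_band)).sum()
        else:                                        # standard, un-flexed verification
            hits, misses = (warned & event).sum(), (~warned & event).sum()
            fas, crs = (warned & ~event).sum(), (~warned & ~event).sum()
        hit_rate = hits / (hits + misses) if (hits + misses) else np.nan
        false_alarm_rate = fas / (fas + crs) if (fas + crs) else np.nan
        points.append((false_alarm_rate, hit_rate))
    return points
```

Plotting these pairs for each forecast day and drawing a line through them reproduces the kind of curves shown in Figure 3.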
The black line formed by + points awards a hit to any warning during which the maximum Kp value is at least 4-, but only registers a missed event when G1 is not forecast and the maximum Kp reaches at least 5-. Comparing these points with their un-flexed counterparts reveals that the Hit Rate significantly increases whereas the False Alarm Rate remains virtually unchanged; this is a consequence of the low base rate. The curve formed by the + points indicates that during a significant proportion of the days on which G1 was predicted the maximum Kp value was either 4-, 4o or 4+. In each plot the exclusive-flexed curve (+) shows better discrimination than the green un-flexed curve. This clearly indicates that maximum daily Kp values of 4-, 4o or 4+ often occur when G1 is forecast with a non-zero probability. The inclusive-flexed curve (□) amounts to simply reducing the Kp event threshold to 4- (from 5-); it is interesting to observe that the resulting ROC-curves virtually coincide with the green un-flexed curves. A comparison of each point in sub-figure (a) reveals that inclusive-flexed values of the Hit Rate and False Alarm Rate are smaller than their green un-flexed counterparts. The fact that the curves are virtually coincident is an indication that the decrease in the proportion of correctly warned-for events is matched by an increase in the proportion of forecasts that were false alarms; consequently, the ability with which events are correctly identified is almost unchanged. In other words, the GMSF is equally skilled at identifying days on which Kp ≥ 4- as it is at identifying days on which Kp ≥ 5-. The same conclusion also applies to sub-figures (c) and (d) (forecast days 3 and 4); however, sub-figure (b) appears to indicate that day 2 (identified as the worst performer in Figure 4) has (slightly) more skill at identifying Kp = 4-. Figure 4 displays the corresponding reliability diagrams. The horizontal dot-dashed lines reveal that the maximum daily Kp value was ≥ 5- on 18% of occasions and ≥ 4- on 39% of occasions. The histograms (which are identical in both sub-figures because the forecast is identical) reveal that G1 was rarely forecast with a high probability, especially at longer range. On days 1, 2, 3 and 4 probabilities ≥ 50% were issued on 17%, 13%, 9% and 7% of occasions and probabilities ≥ 90% on only 15, 3, 1 and 1 occasions respectively. Consequently, there is low confidence associated with the points in these figures that represent forecasts of higher confidence; nevertheless (with the exception of the 0-10% probability bin) almost all remaining points in sub-figure (a) lie below the no-skill region, a clear indication that G1 was over-predicted. The equivalent lines in sub-figure (b) lie above the dotted-diagonal (perfect skill) line because the event threshold used in this plot is a Kp value of 4- (rather than 5-); however, although the majority of these points indicate under-forecasting, many of them lie in the region between the two grey off-diagonal dashed lines (the forecast-skill region). This appears to indicate that the forecast is more reliable at correctly identifying Kp values ≥ 4- than those ≥ 5-. The Brier score can be decomposed into three components (Jolliffe and Stephenson, 2012); one of these is a negatively orientated measure (between 0 and 1) of the reliability (REL) given by

$$\mathrm{REL} = \frac{1}{N}\sum_{k} n_{k}\left(p_{k} - \frac{o_{k}}{n_{k}}\right)^{2}.$$

In this expression k denotes a probability bin, N is the total number of forecast days, $n_{k}$ is the number of times a geo-magnetic storm was forecast with a probability $p_{k}$ and $o_{k}$ is the total number of times a geo-magnetic storm was observed given that it was forecast with a probability $p_{k}$.
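A minimal sketch of the reliability term just defined: forecast probabilities are grouped into bins and each bin contributes its count times the squared gap between the bin's mean forecast probability and the observed frequency. The bin edges and input arrays are illustrative assumptions.

```python
# REL = (1/N) * sum_k n_k * (p_k - o_k/n_k)^2 over probability bins k.
import numpy as np

def reliability(prob, occurred, bin_edges=np.linspace(0.0, 1.0, 11)):
    n_total = len(prob)
    rel = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        last = hi == bin_edges[-1]
        in_bin = (prob >= lo) & ((prob <= hi) if last else (prob < hi))
        n_k = int(in_bin.sum())
        if n_k == 0:
            continue
        p_k = prob[in_bin].mean()           # representative forecast probability
        obs_freq = occurred[in_bin].mean()  # o_k / n_k
        rel += n_k * (p_k - obs_freq) ** 2
    return rel / n_total

# Example with four illustrative forecast days (1 = storm observed):
print(reliability(np.array([0.1, 0.4, 0.8, 0.2]), np.array([0, 1, 1, 0])))
```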
These scores (being negatively orientated) confirm that for forecast days 1 and 2 the GMSF appears to provide a slightly more reliable forecast for lower Kp (4- to 4+) events.
X-ray Flares
A similar analysis to that described in Section 3.1 was also undertaken to evaluate the most suitable rolling prediction period for the evaluation of a reference for XRF forecasts. Rolling mean RPS values were again used to identify 120 days as the best prediction period. Examination of flare occurrence over the solar cycle (see e.g. the histograms of Figure 5 in Wheatland, 2005) confirms that a relatively short prediction period is a sensible choice, since periods longer than 12 months would prove problematic during the sharp rising and declining phases; therefore, daily prediction period lengths between 30 and 360 days were considered. When undertaking this analysis for GMSFs the truth data were only available following an (approximately) one-month latency period; however, the truth data source for XRFFs is GOES longwave radiation flux, and minute-by-minute values for these are available instantly. Therefore, no such latency period is appropriate because the truth data are immediately available to the forecaster. As was the case for GMSFs, extensive checking of every considered prediction period length appeared to produce similar 12-month rolling mean RPS values; however, in the present case, an examination of minimum mean RPS values on the first day of each month throughout the 16-year trial period did not reveal any optimal prediction period length. Therefore, again taking into consideration solar cycle variation as highlighted in Figure 5 of Wheatland, 2005, a 120-day prediction period was chosen. Within these rolling 120-day periods, the maximum daily flare class was:
• ABC on between 70.0% and 99.2% of days;
• M on between 0.8% and 28.3% of days;
• X on between 0% and 1.7% of days.
Rolling 12-monthly RPSSs for each day of the XRFF (together with 90% confidence intervals, calculated using bootstrapping with replacement) are displayed in Figure 6. These scores have been evaluated via Equation (2), using the PDFs in Figure 5 to calculate the mean RPS of the reference forecast; forecast days 1 to 4 are shown as solid, long-dashed, short-dashed and dotted lines respectively. All point estimates of the RPSSs on days 1 and 2 lie above the green no-skill line, as do the majority of estimates on days 3 and 4; however, all their accompanying confidence intervals cross this line. Therefore, there is little evidence to suggest that the skill of the forecast at correctly identifying the maximum daily XRF class exceeds that obtained by using a rolling 120-day prediction period. Similarly, the confidence intervals provide little evidence to suggest that any one forecast day is more skilful than another; however, the estimates alone suggest that day 1 tends to be more accurate than subsequent forecast days. What is obvious from this figure is the increasing
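For reference, the sketch below illustrates the standard construction of the ranked probability score and of the RPSS used throughout this section: the RPS compares the cumulative forecast and observation distributions over the ordered categories, and Equation (2), which is not reproduced here, is assumed to take the usual form 1 - mean(RPS)/mean(RPS_ref), with the reference RPS computed from the rolling-period PDFs. The category probabilities in the example are invented.

```python
import numpy as np

def rps(forecast_pdf, observed_category):
    """Ranked Probability Score for one forecast over ordered categories.

    forecast_pdf      : probabilities assigned to each ordered category (sums to 1)
    observed_category : index of the category that occurred
    """
    f_cdf = np.cumsum(forecast_pdf)
    o_cdf = np.zeros_like(f_cdf)
    o_cdf[observed_category:] = 1.0
    return float(np.sum((f_cdf - o_cdf) ** 2))

def rpss(forecast_pdfs, reference_pdfs, observations):
    """RPSS = 1 - mean(RPS_forecast) / mean(RPS_reference)."""
    rps_f = np.mean([rps(f, o) for f, o in zip(forecast_pdfs, observations)])
    rps_ref = np.mean([rps(r, o) for r, o in zip(reference_pdfs, observations)])
    return 1.0 - rps_f / rps_ref

# Illustrative three-category example (e.g. ABC / M / X for the daily maximum flare class)
fcst = [[0.7, 0.25, 0.05], [0.9, 0.1, 0.0]]
ref = [[0.85, 0.14, 0.01]] * 2          # a climatological-style reference PDF
obs = [1, 0]                            # an M-class day followed by an ABC day
print(rpss(fcst, ref, obs))
```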
Figure 7 displays ROC plots calculated from XRFFs issued between April 2015 and October 2016; these plots describe the skill associated with each forecast day at correctly discriminating when the maximum daily flux is at least M-class. Although the WVS assesses the performance of both M and X class flares, only M is considered in the present study because the performance statistics associated with X are insufficiently robust for detailed analysis, due to their low base rate. The three curves that are displayed in each sub-figure are as described in relation to Figure 3, except that in the present case the low-hit threshold corresponds to a C-class flare. The points on each curve of each sub-figure lie above the grey diagonal no-skill line, indicating that each day of the XRFF has skill at discriminating fluxes corresponding to M-class (or C-class) flares or above. In each plot the exclusive-flexed curve (+) displays better discrimination than the un-flexed curve (×), clearly indicating that a C-class flare often occurs when an M-class flare is forecast with a non-zero probability. The inclusive-flexed curve (□) amounts to simply reducing the event threshold to a C-class flare, and the resulting curves indicate less discriminatory skill compared with the un-flexed curves. It is likely that a reduction in the event threshold will increase the base rate, the number of hits and the number of missed events; it is also likely to decrease the number of false alarms and correct rejections. In Figure 7 the Hit Rates on each inclusive-flexed curve are smaller than their un-flexed counterparts, indicating that, as a proportion, the number of missed events has increased more than the number of hits. Inclusive-flexed False Alarm Rates have also reduced compared with their un-flexed counterparts, indicating that (as a proportion) the number of false alarms has reduced more than the number of correct rejections; however, this decrease is not large enough to offset the reduction in Hit Rate, and consequently the area underneath the inclusive-flexed ROC curve (a summary indicator of discriminatory skill) has reduced. The areas under the un-flexed, inclusive-flexed and exclusive-flexed ROC curves on day:
For each type of flexing the area under the ROC curve decreases monotonically with increasing forecast range. This trend was also found in the Murray et al (2017) work, with the day 1 forecast generally being more skilful than subsequent days. The fact that the area beneath each inclusive-flexed ROC curve is smaller than the corresponding area beneath each un-flexed ROC curve indicates that either the C-class flare threshold (1.0E-6 Wm^-2) provides a low-hit threshold that is too small, or that the discriminatory skill of the XRFF service is optimized by the M-class flare threshold (1.0E-5 Wm^-2). The + points only register missed events when an M-class flare is not forecast and an M-class flare occurs, whereas a hit is awarded to any forecast during which the maximum XRF class is at least C; the purpose being to give the forecaster the benefit of the doubt when XRFs occur which are almost classified as M-class, whilst not penalizing flare events that were nearly M-class. Comparing each pair of + and □ points at every probability threshold reveals that the change in Hit Rates is significantly greater than the change in False Alarm Rates - this is a consequence of the low base rate associated with XRFs. The histograms reveal that M-class was rarely forecast with high probability; however, there is very little difference between the different shaded bars, indicating that the frequency with which M-class flares are forecast with n% probability (where n is a decile) is similar on each day of the forecast. In sub-figure (a) the majority of points lie below the no-skill region - a clear indication that M-class flares are over-forecast, as also found in the Murray et al (2017) study. However, the curve in sub-figure (b) is significantly above the diagonal, indicating under-forecasting of C-class flares. Therefore, it appears that instead of using a long-wave flux threshold of 1.0E-5 Wm^-2 (M-class), the actual (unintentional) threshold was between 1.0E-6 Wm^-2 and 1.0E-5 Wm^-2.
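The area underneath a ROC curve, used above as a summary indicator of discriminatory skill, can be approximated from the (False Alarm Rate, Hit Rate) points by trapezoidal integration. The sketch below is illustrative only; closing the curve at (0, 0) and (1, 1) is an assumption, and this is not the calculation used to produce the values discussed in the text.

```python
import numpy as np

def roc_area(points):
    """Approximate the area under a ROC curve by trapezoidal integration.

    points : iterable of (false_alarm_rate, hit_rate) pairs,
             e.g. as produced by the roc_points sketch given earlier
    """
    far, hr = zip(*points)
    # Sort by False Alarm Rate and close the curve at (0, 0) and (1, 1)
    order = np.argsort(far)
    x = np.concatenate(([0.0], np.asarray(far)[order], [1.0]))
    y = np.concatenate(([0.0], np.asarray(hr)[order], [1.0]))
    return float(np.trapz(y, x))
```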
The reliability component to the Brier Score for forecast day:
Conclusions
The present study contains the results of analysing GMSFs and XRFFs contained within daily 00Z SWTFs issued by MOSWOC over the 19-month period between April 2015 and October 2016. Two approaches have been adopted:
1. a ROC and reliability analysis is used to assess the ability with which G1 GMSs and M-class XRFs are predicted; and
2. a RPSS analysis is performed to analyse the skill of the GMSFs and XRFFs against the skill demonstrated by simply forecasting the frequency of occurrence over the most recent 180-day and 120-day prediction periods respectively (chosen to optimise the mean RPS of the reference forecast).
For the GMSF:
• The ROC analysis revealed that each day of the forecast had skill at discriminating days on which the maximum Kp value was greater than or equal to 5- (G1); however, the forecast displayed a virtually identical level of skill at identifying days on which the maximum Kp value was greater than or equal to 4-.
• The reliability analysis revealed that G1 storms were over-forecast, whereas Kp values ≥ 4- were slightly under-forecast; consequently, the GMSF was found to more reliably predict maximum Kp values ≥ 4- than maximum Kp values ≥ 5-.
• The RPSS analysis presented little statistically significant evidence that day 1 of the GMSF was a better predictor of maximum GMS level than the frequency of occurrence over the preceding 180 days.
For the XRFF:
• The ROC analysis revealed that each day of the forecast had more skill at correctly identifying M-class flares than C-class flares.
• The reliability analysis confirmed that although M-class flares are over-forecast, C-class flares are greatly under-forecast; therefore, it is likely that the most appropriate event threshold was between 1.0E-6 Wm^-2 (C-class) and 1.0E-5 Wm^-2 (M-class).
• The RPSS analysis indicated that the XRFF struggled to outperform a forecast comprised of only the frequency of occurrence over the preceding 120 days, with the confidence intervals associated with these estimates providing no statistically significant evidence.
In the future our goals are to continue the analysis of the GMSF and XRFF components of the SWTF, as this provides valuable feedback and guidance to MOSWOC forecasters. Plans also exist to compare the performance of these services against equivalent services provided by other space weather centres and to expand the verification to include other SWTF components (the next area of study being coronal mass ejection forecasts).
2018-12-21T22:04:08.517Z
2017-10-01T00:00:00.000
{ "year": 2018, "sha1": "ba23bdd067b235e62244bd1b33c555c718a67ef0", "oa_license": null, "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/2017SW001683", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "ba23bdd067b235e62244bd1b33c555c718a67ef0", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
257299942
pes2o/s2orc
v3-fos-license
Human Capital, Social Capital, And Innovation Capability In Performance Of Village-Owned Enterprises
This study aims to analyze the effect of human capital and social capital on the performance of Village-owned Enterprises, as mediated by innovation capability. The population in this study is the Village-owned Enterprises located in Kampar Regency. Respondents in this study were the directors of the BUMDes from the 228 Village-owned Enterprises. Data were collected by sending an internet questionnaire designed using Google Forms. A total of 120 BUMDes directors participated in this research. The results of data analysis with PLS show that human capital does not affect the performance of Village-owned Enterprises. Social capital affects the performance of Village-owned Enterprises. Social capital and human capital have also been shown to affect innovation capability. Social capital has also been shown to affect human capital, and innovation capability affects performance. Innovation capability can also partially mediate the relationship of social capital with the performance of Village-owned Enterprises and fully mediate the effect of human capital on the performance of Village-owned Enterprises. However, human capital is not a mediating variable for social capital's impact on Village-owned Enterprises' performance. This research contributes to improving the performance of Village-owned Enterprises.
INTRODUCTION
Village-owned Enterprises are village business institutions managed by the community and the Village Government. According to the regulation of the Ministry of Villages, Development of Disadvantaged Regions and Transmigration of the Republic of Indonesia No. 4 of 2015, the objectives of establishing Village-owned Enterprises include improving the village economy, creating market opportunities and networks supporting the public service needs of citizens, and improving community welfare through the improvement of public services, growth and equitable distribution of the village economy, and increasing village community income. This regulation has encouraged the growth of Village-owned Enterprises, which is relatively high. According to data from the Ministry of Villages, Development of Disadvantaged Regions, and Transmigration of the Republic of Indonesia in 2018, the total number of Village-owned Enterprises throughout Indonesia reached 35 thousand across 74,910 villages. This amount is five times the target of the Ministry of Villages, Development of Disadvantaged Regions, and Transmigration of the Republic of Indonesia, which had set a target of only 5,000 Village-owned Enterprises. However, various data have indicated that most Village-owned Enterprises are merely established and do not have productive activities; most of them are stagnant. Village heads demonstrate a lack of understanding regarding the knowledge concerning Village-owned Enterprises (Berdesa, 2018).
In the Province of Riau, the growth of Village-owned Enterprises has been quite significant in the last five years. Every village is competing to establish Village-owned Enterprises. However, the percentage of the established Village-owned Enterprises which are successful is still far from expectations. There are 849 Village-owned Enterprises spread over 1,592 villages. Of this number, 131 Village-owned Enterprises went bankrupt (Gagasan.riau, 2019). In the Regency of Kampar, there are 242 active Village-owned Enterprises and 26 inactive Village-owned Enterprises. It
shows that the management of Village-owned Enterprises is far from optimal, indicating the lack of performance of Villageowned Enterprises. One of the factors that affect the performance of Village-owned Enterprises is human capital.Unger et al. (2011) define Human Capital as individual skills and knowledge gained through investment in school education and life experience.Previous research has shown that human capital affects organizational performance, for example, the study by Gupta & Rahman (2021) on companies in India.Research by (Nugraha, 2018) also shows that individual capacity and individual motivation affect the non-financial performance of advertising companies.However, Oktaviany & Raharjo (2019) research shows that human capital does not influence organizational performance. In addition to human capital, research by Gandhiadi & Kencana (2020) and Kim & Aldrich (2005) has found that social capital is one of the factors causing high and low business performance.Kim & Aldrich (2005) describes social capital, broadly, as a resource available to people through social relationships.Social capital can affect success because the information obtained from business acquaintances can sharpen the entrepreneur's perception of the business being managed.The social capital possessed by the owner will affect the relationship between the external environment both with other entrepreneurs, related agencies and institutions, suppliers, consumers, and the surrounding community.Managers of Villageowned Enterprises should have the ability to establish relationships with parties whose relationships can be utilized to support the business.However, the Social Capital owned by business actors is still optimally unutilized. The effect of social capital on Villageowned Enterprises can transform things in terms of corporate governance, improve the quality of human resources, and develop organizations with community welfare.Village-owned Enterprises based on social capital can bring change in a promising direction.Social capital affects the survival of Village-owned Enterprises in the village.This description is also supported by research conducted by Walenta (2019) and Yohanes et al. (2017) on SMEs, showing that social capital affects these SMEs.However, Akintimehin et al., (2019) research show that social capital does not affect organizational performance. Previous research has shown inconsistent results.Other factors may indirectly influence social and human capital's influence.In contrast to previous studies Walenta (2019) and Gandhiadi & Kencana, (2020), this study uses innovation capability as a mediating variable.Varadarajan & Jayachandran (1999) Explain that the concept of organizational innovation capability refers to a set of beliefs and ways of working that influence an organization's view of how innovation capability and change should be handled. Innovating is needed for organizations to face various changes in a dynamic environment.Jamshidi & Kenarsari, (2015) argues that social capital can increase the innovative behavior of employees.Wellformed social capital will impact employee performance or performance, one of which is innovative behavior.Research Widjajanti et al (2017) shows that human capital can also increase innovative behavior.Therefore, organizations are required to create new ideas and offer innovative products.Therefore, innovation capability can improve business performance (Widjajanti et al., 2017). 
In addition to using innovation capability as a mediation, this study also examines the effect of social capital on human capital, which was not carried out in previous studies.Research by Kimbal (2020) on MSMEs shows the component of human capital, which is identified with individual competence, high work motivation, work environment value system, teamwork, and leadership in running a small business.These components contribute to establishing and strengthening the required social capital in small companies like relatively strong networks, mutual trust and cooperation, adherence to norms, exchanging kindness, and the value of a meaningful life.Social capital sourced from substantial human capital is vital energy for the survival of small businesses during intense competition in the era of the industrial revolution 4.0. This study aims at examining the effect of human capital and social capital on the performance of Village-owned Enterprises.This study also tested innovation capability as a mediating variable.In addition, this study also examines the effect of human capital on social capital.It looks the mediation of human capital on the relationship between social capital and the performance of Village-owned Enterprises.This study on human capital and social capital in Village-owned Enterprises is rarely done.Previous research was mainly conducted on MSMEs and large companies, such as Kimbal (2020) and Widjajanti et al (2017).Therefore, this study is interesting to do. Resource-Based View (RBV) Theory According to RBV theory (Wernerfelt, 1984), its resources and capabilities are essential for the company because it is the basis of its competitiveness and performance. RBV is a method for analyzing and identifying a company's strategic advantages based on a review of the combination of assets, expertise, capabilities, and intangible assets that are specific to an organization.When resources are managed so that what is produced is difficult for competitors to imitate or make, which in turn creates barriers to competition (Mahoney & Pandian, 1992) The effect of social capital on the performance of Village-owned Enterprises Social capital is a person's ability to get benefits or profit through investment strategies in social networks.The higher the social capital owned by the manager of Villageowned Enterprises, the more resources, and benefits that can be obtained from the successful management of Village-owned Enterprises.Managers of Village-owned Enterprises who are able to take advantage of relationships with other parties, customers, and related agencies or institutions can work together sharing useful information until distributing resources that can support the success.An excellent business is a business that benefits itself and the community around the company.The participation of business actors with the surrounding community is a form of relationship establishing mutual trust that the company will not harm the community and the environment around the company.The confidence arising in the community can increase the success of Village-owned Enterprises.Hence, it can be concluded that social capital has a positive effect on the performance of Village-owned Enterprises.Research conducted by Walenta (2019) and Yohanes et al (2017) shows that social capital affects the performance of MSMEs.Based on the description above, the hypothesis is: H2: Social capital affects the performance of Village-owned Enterprises The effect of social capital on innovation capability Jamshidi & Kenarsari (2015) 
argues that social capital can increase employees' innovative behavior.Well-formed social capital will impact employee performance, one of which is creative behavior.(Landry et al., 2002) argues that an increase in social capital in the form of participation assets and relational assets contributes to increasing company innovation.In Village-owned Enterprises, cooperation can be established with various business circles.This collaboration will encourage creativity to generate new ideas following business demands to increase the ability to innovate.The result of research by (Pertiwi, 2020) found that social capital affects innovation.Based on the explanation above, it is hypothesized: H3: Social capital affects the innovation capability The impact of human capital on innovation capability Human capital must involve competencies of Human Resource (e.g., skills, knowledge, and capabilities) and their commitment (e.g., willingness to dedicate their lives and work for the company).According to (Collins & Clark, 2016), human capital is the characteristics of human resources determined by the knowledge used to create value for the organization.This explanation shows that someone with high abilities can establish and utilize company resources in ways that make company innovation.Research conducted by Widjajanti et al (2017) proves that human capital can increase innovative behavior.Innovative behavior is all individual behavior that is directed to produce, introduce and apply new things that are useful at various levels of the organization (De Jong & Hartog, 2003).Based on the description above, it is hypothesized: The Effect of Social Capital on Human Capital According to Kim & Aldrich (2005), social capital is a resource available through social relationships.This ability to establish social networks is called social capital.The wider one's association and the more comprehensive the network of social relations, the higher one's value.Recent developments in social capital conclude that the value of human capital can be increased through the will and goodwill established with a series of social relationships that can be done to facilitate the action (Widodo, 2010).According to (Suriatna, 2013), Social factors or social support will make it easier for individuals and a source of strength when facing problems.From the description above, it can be concluded that the better the social capital owned, the better human capital that can be formed.It is supported by research conducted by (Widjajanti et al., 2017), showing that social capital positively affects human capital.Based on this description, the hypothesis in this study is: H5: Social capital affects the human capital The effect of innovation capability on the performance of Village-owned Enterprises According to OECD (2005), four product/service innovation types are process innovation, organizational innovation, and marketing innovation.Product/service innovation introduces a product or service with significantly improved performance.Product/service innovation is an important performance factor providing the capability for expansion into new markets and industries (Damanpour & Gopalakrishnan, 2001) and allowing to explore opportunities for abnormal profits and providing a route for companies to earn profits (Savitz et al., 2000)).Process innovation is the implementation of new or significantly improved production or delivery methods.It may consider changes in tools, human capital, and working techniques or a combination of these, such as new 
installations or upgraded software to speed up the claims settlement process and policy issuance (OECD, 2005).Process innovation introduces new tactics for a product or service or new ways to commercialize a product or service.Process innovation may affect productivity, productivity growth, or profitability. The process required to produce a product or service is unpaid directly by the customer.Therefore, process innovation must be a new change to the act of making or delivering a product allowing significantly increased value delivered to stakeholders (Savitz et al., 2000).Marketing innovation introduces new marketing methods involving significant changes in product design, product placement, and product promotion or pricing (OECD, 2005).The main goal of marketing innovation is to address customer needs better, penetrate new markets, or position new company products to increase company sales.(Savitz et al., 2000) has investigated the impact of marketing innovations in private commercial banks in Jordan.Their findings prove that marketing innovation has a positive effect on creating long-term competitive advantage and company growth.In addition, it is crucial managers align with the company's strategy and perceptions of marketing innovation to create sustainable development. Thus, the higher the innovation capability owned by the manager of Villageowned Enterprises, the better the performance of the Village-owned Enterprises by producing innovative products or businesses needed by the community.Previous research has shown that innovation capability encourages increased company performance (Rajapathirana & Hui, 2018).The result of research Widjajanti et al (2017) shows the ability of innovation capability has a positive effect on the marketing performance of MSMEs.Based on the description above the hypothesis: H6: Innovation capability has an effect on the performance of village-owned enterprises Mediation of innovation capability on the relationship between social capital and the performance of Village-owned Enterprises The description above shows that social capital can create creative ideas with social networks (Jamshidi & Kenarsari, 2015).Furthermore, innovation capability can improve performance (Rajapathirana & Hui, 2018).Innovative behavior is the successful implementation of the creative ideas of Village-owned Enterprises' managers and is a significant factor in improving the performance of Village-owned Enterprises.Thus, the higher the company's innovation capability, it will enhance the company's performance by increasing purchasing decisions.The results of previous research show that social capital affects innovation capability (Pertiwi, 2020), and the ability to innovate can improve business performance (Widjajanti et al., 2017).Based on the description above, it is hypothesized: H7: Innovation capability mediates the relationship between social capital and the performance of Village-owned Enterprises Mediation of innovation capability on the relationship between human capital and the performance of Village-owned Enterprises Human capital, like a person's skill and knowledge, can create company innovation (Collins & Clark, 2016), and innovation capability will improve performance through innovative behavior that can generate, introduce and apply new things that are useful in various organizations so that it has a positive effect on the improvement of organizational performance (Savitz et al., 2000).The result of research by (Widjajanti et al., 2017) proves that human capital can 
increase innovative behavior. The results of research by Pertiwi (2020) and Widjajanti et al (2017) also confirm that innovation capability can improve performance. Based on the description above, it is hypothesized:
H8: Innovation capability mediates the relationship between human capital and the performance of Village-owned Enterprises
Mediation of human capital on the relationship between social capital and the performance of Village-owned Enterprises
The wider one's association and the more comprehensive the network of social relations, the higher one's value. Recent developments in social capital conclude that the value of human capital can be increased through the will and goodwill established with a series of social relationships that can be used to facilitate collective action (Widodo, 2010). According to (Suriatna, 2013), social factors or social support will make things easier for individuals and be a source of strength when facing problems. From the previous description, it can be concluded that the better the social capital owned, the better the human capital that can be formed. Human capital must involve the competencies of human resources (e.g., skills, knowledge, and capabilities) and their commitment (e.g., willingness to dedicate their lives and work for the company). (Widjajanti et al., 2017) prove that human capital can increase innovative behavior. Innovative behavior is all individual behavior directed to produce, introduce and apply new things that are useful at various levels of the organization (Jong et al., 2003). Based on the description above, it is hypothesized:
H9: Human capital mediates the relationship between social capital and the performance of Village-owned Enterprises
RESEARCH METHODS
This research is quantitative research using a population of 228 Village-owned Enterprises spread across various sub-districts in the Kampar Regency. The sampling technique employed in this study is probability sampling with an area sampling technique or cluster sampling, namely grouping Village-owned Enterprises per district area. There are 21 sub-districts in Kampar Regency with 250 villages. However, the number of Village-owned Enterprises is only 228, and the number of Village-owned Enterprises that are still active is only 216. Determination of the number of samples employed the Slovin formula, which yielded a sample of 140. Data collection in this study used a questionnaire. Some were delivered directly to the managers of Village-owned Enterprises, and some were sent via digital media, namely Google Forms. Google Forms is an online form application to efficiently collect information through surveys, namely by inputting questionnaire questions distributed digitally to every manager of Village-owned Enterprises acting as a respondent in this study. Information on managers of Village-owned Enterprises was obtained from the Community and Village Empowerment Service of Kampar Regency.
Performance of Village-owned Enterprises
Performance results from optimal work performance carried out by a person, group, or business entity. The indicators employed in this study were adopted from the research of (Savitz et al., 2000), consisting of 8 indicators: Return on Assets (ROA), Return on Equity (ROE), revenue growth, sales returns, loyalty, competitiveness, stability, and customer satisfaction.
Social Capital
Social Capital is a person's ability to get benefits or profits through investment strategies in social networks. The indicators employed are the ability to establish cooperation, establish trust, and participate in local communities (Ferdinand, 2003), together with social resources, with indicators of social capability, social networks, trust, and cohesion.
Human Capital
Human Capital is the knowledge and expertise of individuals, owned and acquired through investment in education and experience, helpful in improving performance and success. Human Capital can be measured by changes in skills, creativity, and intelligence, and by the development of new ideas and knowledge. The human capital indicators used are from (Mayo, 2000), as developed by (Widodo, 2010).
Innovation Capability
Innovation Capability is the ability to generate new ideas leading to higher performance, create new opportunities, increase future capacity, technology leadership, and increase the knowledge base through managing technological change (Malaysia Productivity Corporation, 2009). Indicators employed to measure innovation capability are developed by (Widjajanti et al., 2017), with indicators of new product development, application of appropriate technology, process development and adaptation, and response to competitors. All variables were measured using a 5-point Likert scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly Agree.
Data Analysis Technique
Data analysis was carried out using SmartPLS software. PLS is an alternative approach shifting from a covariance-based SEM approach to a variance-based system that can simultaneously test the measurement model and the structural model. The measurement model is utilized to test validity and reliability, while the structural model tests causality (testing hypotheses with predictive models). In addition, SmartPLS is used for intervening testing with path analysis models. The path analysis model systematically becomes a standardized regression model (without constants) because it is intended to compare various paths or identify the direct and indirect effects between variables (Ghozali, 2014).
RESULTS AND DISCUSSION
Of the 140 questionnaires distributed to respondents, 120 questionnaires were returned (85%). The characteristics of the respondents are presented in table 1.
Test Results of Validity
Test results of convergent and discriminant validity after eliminating several indicators can be seen in table 2. Factor loading shows a factor loading value > 0.7, and the AVE value in table 4 also shows a value > 0.5. Cross loading also indicates that the factor loading value within each variable block is higher than in the other variable blocks (Hair et al., 2010). It shows that convergent validity and discriminant validity have been met. Figure 1 shows the full structural equation model, and table 2 shows the value of the loading factor.
Test Results of Reliability
The results of the reliability test can be identified in Table 3. The results of the reliability test show values of Cronbach's alpha and composite reliability > 0.7 (Hair et al., 2010), indicating that all variables meet the reliability criteria.
Evaluation Results of Structural Model
The value of R-Square can be identified in Table 3.
Test Results of Hypotheses
Hypotheses testing is based on the p-value. The hypothesis is accepted if it has a p-value < 0.05. The results of hypotheses testing can be seen in the path coefficient table of direct effects in table 4.
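As an illustration of the reliability and convergent-validity criteria cited above (loadings > 0.7, AVE > 0.5, Cronbach's alpha and composite reliability > 0.7), the sketch below computes these quantities using the conventional formulas. The loadings and item scores are invented for illustration; this is not output from SmartPLS.

```python
import numpy as np

def ave(loadings):
    """Average Variance Extracted: mean of the squared standardized loadings."""
    l = np.asarray(loadings, float)
    return float(np.mean(l ** 2))

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    l = np.asarray(loadings, float)
    errors = 1.0 - l ** 2
    return float(l.sum() ** 2 / (l.sum() ** 2 + errors.sum()))

def cronbach_alpha(item_scores):
    """Cronbach's alpha from an (observations x items) matrix of item scores."""
    x = np.asarray(item_scores, float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return float(k / (k - 1) * (1 - item_var / total_var))

# Invented standardized loadings for one construct (e.g. "innovation capability")
loadings = [0.78, 0.81, 0.74, 0.86]
print(ave(loadings), composite_reliability(loadings))

# Random Likert-type responses (120 respondents, 4 items); alpha will be low because the data are random
rng = np.random.default_rng(1)
print(round(cronbach_alpha(rng.integers(1, 6, size=(120, 4))), 2))
```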
DISCUSSIONS The effect of human capital on the performance of Village-owned Enterprises The test results can be seen in table 4, showing P-value of the effect of Human capital on the performance of Village-owned Enterprises was 0.882 or ≥ 0.05.It can be concluded that hypothesis 1 is rejected: Human capital does not affect the performance of Village-owned Enterprises. The results of this study indicate that the competence of Village-owned Enterprises managers cannot directly affect the improvement of Village-owned Enterprise's performance.The expertise and knowledge of Village-owned Enterprises managers will not directly improve the performance of Villageowned Enterprises.Still, competence will have an impact on the ability of human resources to create creativity.If human resources with high competence do not have high creativity, they will necessarily be unable to improve organizational performance.It means that even though a person's level of education is high, it does not guarantee that they can improve the performance of Village-owned Enterprises.The results of this study are also in line with research by (Oktaviany & Raharjo, 2019), showing human capital does not affect organizational performance. The Effect of Social Capital on the performance of Village-owned Enterprises Test results of Hypothesis 2 in the structural model image show that the p-value is 0.002 or < 0.05.The original sample estimate value of 0.453 indicates that social capital has an effect on the performance of Village-owned Enterprises.It can be concluded that social capital has an impact on the performance of Village-owned Enterprises in the Kampar Regency.Thus, Hypothesis 2 is accepted. According to (Kim & Aldrich, 2005), social capital is a resource available through social relationships.Social capital can affect success because the information obtained from business acquaintances can sharpen the entrepreneur's perception of the business being managed.Hence, Managers of Village-owned Enterprises who can take advantage of relationships with other parties, customers, and related agencies or institutions can work together sharing useful information until distributing resources that can support the success.The trust arising in the community can increase the success of Village-owned Enterprises. According to Heider's attribution theory, situational attribution is an external cause referring to the environment affecting behavior, such as social conditions, social values, societal views.Social capital, a social relationship owned by managers of Villageowned Enterprises, can affect the success of Village-owned Enterprises with the trust gained through social connections.Social relations can increase the sales of Villageowned Enterprises. The results of this study support research by (Pertiwi, 2020) showing social capital can improve performance.The test results show the P-value of social capital's effect on innovation capability was < 0.000 or ≤ 0.05.The original sample estimate value of 0.755 indicates hypothesis 3 is accepted: social capital affects innovation capability in Village-owned Enterprises in Kampar Regency. 
The results of this study support the attribution theory: external attribution, which is a social-environmental factor that can affect the way a person behaves.A motivating and supportive social environment can affect managers of Village-owned Enterprises to be more innovative.It is in line with research conducted by (Jamshidi & Kenarsari, 2015), stating that social capital can increase creative behavior.Social capital can affect the quality of human resources in an organization.The collaboration will encourage creativity to generate new ideas following business demands to increase innovation capability.This study is also in line with the statement of (Pertiwi, 2020) arguing that social capital will increase the innovative behavior of managers. The Effect of Human capital on the innovation capability The test results show the P-value of human capital's effect on innovation capability was < 0.013 or ≤ 0.05.The original sample estimate value is 0.169.It can be concluded that hypothesis 4 is acceptable, showing that human capital affects innovation capability in Village-owned Enterprises in Kampar Regency. Human capital is the skills, knowledge, capabilities, and commitments possessed by individuals in the organization.(Collins & Clark, 2016) stating that human capital creates value for the organization.Managers of Village-owned Enterprises own human capital in Village-owned Enterprises.Therefore, managers of Village-owned Enterprises with have high abilities can build and utilize Village-owned enterprises' resources by creating creativity.For instance, it is creativity in producing products according to the village's potential.Furthermore, it is creativity in marketing the product.It supports the statement of (Jong et al., 2003), stating that innovative behavior can generate, introduce and apply new things that are useful at various levels of the organization. This study supports the research (Widjajanti et al., 2017), showing that human capital can increase innovative behavior. The effect of social capital on human capital The test results of hypothesis 5 showing the P-value of the effect of social capital on human capital was < 0.000 or ≤ 0.05.The original sample estimate value is 0.763.It can be concluded that hypothesis 5 is acceptable, showing that social capital affects human capital in Village-owned Enterprises in Kampar Regency. 
The study results indicate that an increase in human capital, in this case, is an increase in one's skills and competencies caused by the establishment of a network of social relationships with other people.Someone with a broader network will have higher competencies and abilities.It means that the more comprehensive a person's association and network of social relationships, the higher a person's value.These results also support attribution theory, especially external attribution, namely external effect like the social environment affecting one's ability.The results of this study also support the research of Kesi et.al., (2016), showing social capital has a positive effect on human capital.Hence, the first hypothesis in this study is: The effect of innovation capability on the performance of Village-owned Enterprises The test results showing the P-value of the effect of innovation capability on the performance of Village-owned Enterprises was 0.003 or < 0.05.The original sample estimate value is 0.398.It can be concluded that hypothesis 6 is acceptable, showing that innovation capability affects the performance of Village-owned Enterprises in the Kampar Regency. In line with research by Trott (2005), they stated that innovation capability is the ability to generate new ideas, products, or processes.Innovation capability for individuals is the achievement of ideas that can encourage organizational progress.Thus, the higher the innovation capability owned by the manager of Village-owned Enterprises, the better the performance of the Village-owned Enterprises by producing innovative products or businesses needed by the community.The results of this study also support the attribution theory, namely internal attribution.A person's abilities, including the ability to innovate, can affect the way in which goals are achieved.The results of this study support the research of Kesi et al. (2016), which shows that the ability of innovation capability has a positive effect on the marketing performance of MSMEs. 
Mediation of innovation capability on the relationship between social capital and the performance of Village-owned Enterprises
Mediation is assessed from the indirect effect. To determine whether the mediation is partial or full, it is necessary to compare the relationship before and after the mediation is introduced. The path coefficients of the direct effects can be seen in table 5. Baron and Kenny (1986) state that a mediating variable provides partial mediation if the influence of the independent variable on the dependent variable is significant both before and after the mediating variable is included. Social capital, the social relationships owned by managers of Village-owned Enterprises with various stakeholders, can generate creative ideas. Managers of Village-owned Enterprises can increase assets and discover opportunities to market the products of Village-owned Enterprises through social networks. This has an impact on improving the performance of Village-owned Enterprises. Innovative behavior is the successful implementation of the creative ideas of Village-owned Enterprises' managers and is a significant factor in improving the performance of Village-owned Enterprises. Thus, the higher the company's innovation capability, the better its performance. It supports stewardship theory, which states that internal and situational factors originating from environmental influences can affect goal achievement. The results of previous research show that social capital affects innovation capability (Pertiwi, 2020), and the ability to innovate can improve business performance (Widjajanti et al., 2017). This shows that innovation capability can mediate the relationship between social capital and the performance of Village-owned Enterprises.
Mediation of innovation capability on the relationship between human capital and the performance of Village-owned Enterprises
In the structural model, it can be identified that the indirect effect of human capital-innovation capability-performance of Village-owned Enterprises shows a P value of 0.03 < 0.05, meaning that innovation capability mediates the impact of human capital on the performance of Village-owned Enterprises. Thus, hypothesis 8 is accepted. The two relationships are significant, meaning that innovation capability can mediate the relationship between human capital and the performance of Village-owned Enterprises. The direct effect of human capital on the performance of Village-owned Enterprises is shown in table 8 with a p-value of 0.746 > 0.05, meaning that innovation capability provides full mediation. Following Baron & Kenny (1986), a mediating variable provides full mediation if the independent variable has no significant direct effect on the dependent variable but does have an effect through the mediating variable. This supports stewardship theory, which states that individual factors such as competence can affect a person's behavior in achieving their goals. It is in line with (Unger et al., 2011), who state that human capital, such as skills and knowledge, is the key to success in business. The skills possessed by managers of Village-owned Enterprises can create innovation, creativity, and business opportunities to increase the income of Village-owned Enterprises. This result supports research conducted by (Widjajanti et al., 2017), stating that innovation capability can improve performance.
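The partial/full mediation logic applied above can be summarised as a simple decision rule over the p-values of the direct and indirect paths. The sketch below is schematic: it follows the Baron and Kenny style reasoning used in the text and reuses the p-values quoted in this section, but it is not the SmartPLS bootstrapping procedure itself.

```python
def classify_mediation(p_direct, p_indirect, alpha=0.05):
    """Classify a mediation result following the decision rule used in the text.

    p_direct   : p-value of the direct effect of the independent variable on the outcome
    p_indirect : p-value of the indirect (mediated) path
    """
    if p_indirect >= alpha:
        return "no mediation"
    return "partial mediation" if p_direct < alpha else "full mediation"

# Values quoted in this study:
print(classify_mediation(p_direct=0.746, p_indirect=0.030))  # human capital via innovation capability (H8): full mediation
print(classify_mediation(p_direct=0.002, p_indirect=0.822))  # social capital via human capital (H9): no mediation
```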
Mediation of human capital on the relationship between social capital and the performance of Village-owned Enterprises
Test results of hypothesis 9, concerning the mediation of human capital on the relationship between social capital and the performance of Village-owned Enterprises, are demonstrated by the indirect effect of social capital-human capital-performance of Village-owned Enterprises with a P-value of 0.822 > 0.05, meaning that human capital does not mediate the impact of social capital on the performance of Village-owned Enterprises. Thus, hypothesis 9 is rejected. This is because human capital cannot directly improve the performance of Village-owned Enterprises. Even though managers of Village-owned
Table 3: Construct Reliability and Validity.
The R-Square value of the effect of human capital, social capital, and innovation capability on the performance of Village-owned Enterprises shows a value of 0.658 or 65.8%, indicating the performance of Village-
2023-03-03T16:11:33.381Z
2021-12-30T00:00:00.000
{ "year": 2021, "sha1": "7d6104277d67f604669f595dd583baf58c62bee0", "oa_license": "CCBYSA", "oa_url": "https://doi.org/10.17509/jaset.v13i2.37763", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "fd0d8a3e21361351accee6303107b3f161b58ff3", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [] }
253525359
pes2o/s2orc
v3-fos-license
Predictors of malaria vaccine uptake among children 6–24 months in the Kassena Nankana Municipality in the Upper East Region of Ghana
Background
The Malaria Vaccine Implementation Programme (MVIP) coordinates the routine implementation of the RTS,S vaccine pilot in strategically selected locations in Malawi, Kenya, and Ghana. The pilot programme thoroughly assesses the programmatic feasibility of administering the four doses of the RTS,S vaccine. It will also assess the impact on malaria morbidity and mortality, as well as monitor and detect the vaccine's safety for routine usage. The malaria vaccine was introduced into Ghana's routine vaccination programme in May 2019 in seven regions, comprising 42 districts, including Kassena Nankana Municipal in the Upper East region of Ghana. Therefore, this study seeks to assess the predictors of malaria vaccine uptake in children 6 to 24 months in the Kassena Nankana Municipal in Ghana.
Methods
The survey used a cross-sectional study design and included 422 mothers/caregivers with children aged 6 to 24 months from the Kassena Nankana Municipality. The WHO cluster survey questionnaire was adapted for data gathering, with caregivers as respondents. The Statistical Package for the Social Sciences (SPSS) version 25.0 (for descriptive statistics) and Stata version 13 (for calculating odds ratios) were used to analyse the data.
Results
The findings show that the mean age of respondents was 27 ± 5 years and the average age of the children was 15 ± 8 months. The study found that uptake coverage was high (94%). Chi-square and odds ratio testing revealed statistically significant associations between health service factors and vaccine uptake: education on the malaria vaccine, cOR (CI): 9.69 (3.496–25.425), P < 0.001, and giving caregivers the option to accept the malaria vaccine, cOR (CI): 7.04 (2.759–17.476), P < 0.001. Confidence in the efficiency of the vaccination was found to have a statistically significant association with malaria vaccine uptake (P < 0.005) and (p < 0.001) for 'somewhat confident' and 'not confident at all', respectively. The attitude of health workers was found to be a significant predictor of malaria vaccine uptake (P < 0.003).
Conclusion
Malaria vaccine uptake was high among the study population in the municipality; however, dose four uptake coverage by age two was low. This indicates that mothers/caregivers did not understand the notion of immunization throughout the second year of life. As a result, it is recommended that the municipality raise awareness about immunization services among mothers/caregivers beyond year one in order to improve performance and reduce the risk of disease outbreaks in the municipality.
Keywords: RTS,S vaccine, Predictors, Vaccine uptake, Malaria, Ghana
Background
Malaria is a vector-borne protozoan disease caused by Plasmodium species, which is spread when a female Anopheles mosquito bites a susceptible host. The disease remains a problem of major public health importance in developing countries [1]. Although the disease is preventable and curable, about 212,000,000 new cases and 429,000 deaths were documented globally in 2015, of which ninety percent of the cases and ninety-two percent of the deaths occurred in Africa. Additionally, 219,000,000 new cases of malaria occurred in 2017, of which 435,000 deaths were recorded globally and, as usual, the most affected were children and pregnant women [2].
In Ghana malaria accounts for about 17.6 percent of the general out-patient department attendance, 13.7 percent of ward admissions, and 3.4 percent of total maternal deaths [3]. There is demonstration of re-emergence of malaria in most places that successfully reduce the disease burden and were free from malaria. This development characterizes main risk for control and prevention of malaria, signifying the importance for new approaches to implement the control and preventive strategies and interventions [4]. From the year 2000, efforts are being intensified to fight against the prevalence of malaria through varied concerted campaigns such as the "Roll Back Malaria" campaign, which help to drastically reduce the impact of the disease. These programmes led to first-time high intervention performances and scaling up of effective managements and treatments across Africa. The new goals for World Health Organization (WHO) for the global lessening of malaria incidence and mortality rates by a minimum of ninety percent by the year 2030 and the elimination of malaria in not less than 35 endemic countries by 2030 [5]. Momentous effort has been made globally with much progress with the aim of reducing the malaria prevalence and burden. This accomplishment is basically due to the interventions to protect the susceptible hosts by providing and promoting the use of long-lasting insecticidal nets (LLINs), seasonal in-door residual spraying in communities and households, and artemisinin-based combination therapy for the management and treatment of malaria infections [6]. The blend of these interventions has contributed to about 40% reduction in malaria incidence and a 50% decrease in infections due to Plasmodium falciparum parasites. Even though the level of reduction of the malaria infection appeared convincing, the figures were still high and fall short of the 75% target set by the WHO to reduce malaria burden by 2015. To be able to achieve the set target in reducing the prevalence, more efforts, strategies and interventions are paramount, especially effective vaccines in high prevalent places [6]. Vaccines have proved to be most cost effective and efficient health interventions for the general public with the achievement of morbidities and mortalities prevention in most developing countries [7]. Nonetheless, in malaria prevention, vaccine is not the sole significant matter, but the effectiveness of the vaccine, the burden of the disease on the population, and the cost implications arising as a result of the introduction of the vaccines are among the common concerns [7]. Expanded Programme on Immunization (EPI) is responsible for vaccines and vaccination to control, eliminate and eradicate vaccine preventable diseases (VPDs). Having strong immunization systems to deliver vaccines to those who need them most play a significant role in achieving the health, equity and economic objectives of several global development goals. These include the 2030 Sustainable Development Goals (SDGs), the 2011-2020 Decade of Vaccines, the 2030 Universal Health Coverage (UHC) agenda, the 2011-2020 Global Vaccine Action Plan (GVAP), the Global Routine Immunization Strategy and Plan (GRISP), and the Regional Strategic Plan for Immunization 2014-2020 [8]. Malaria vaccine development is predicted to provide a low-cost intervention to contribute to the reduction of malaria episodes. 
Progress towards a malaria vaccine has accelerated in recent times, with increased research motivating the discovery of new vaccines and vaccine expertise, and many vaccine candidates are being moved through the vaccine development pipeline [9]. A statement from the WHO indicates that the RTS,S vaccine is so far the most advanced vaccine candidate in the vaccine development trail, and it is to serve as a complementary malaria control strategy that might possibly be added to the already existing interventions, not to replace the main preventive and treatment interventions [10]. According to the WHO, a cost-effective malaria vaccine with high efficacy will suppress morbidities and mortalities and contribute to the malaria control strategies that are needed, especially in most endemic places where health services and control strategies might be difficult to sustain. The RTS,S vaccine is the first malaria vaccine licensed for use, which is an indication of an important step towards malaria control and prevention. On the other hand, if the RTS,S is not as effective as expected, it will create challenges, especially in evaluating the efficacy of malaria vaccines [1]. The WHO has indicated that a whole sporozoite vaccination method has shown hopeful outcomes, encouraging immunity in small trials in adults, but may not attain strong protection in malaria endemic population densities [1]. "Vaccines targeting both the pre-erythrocytic and the erythrocyte-invasive form of the parasite (merozoites) may repel breakthrough infections by neutralizing merozoites developing from infected hepatocytes, while vaccines targeting the sexual stages strive to interrupt the transmission cycle." Moving forward, multiple vaccines could be the next step toward malaria prevention [1].
The WHO recommends that all countries and agencies scale up the supply and distribution of mosquito nets, especially for target populations in high-risk areas [11]. Many countries' malaria prevention programmes have adopted universal coverage of LLIN supply and distribution, with mass distribution campaigns conducted at intervals of two to four years depending on endemicity levels. Integration of multiple interventions and strategies is the bedrock of global malaria control campaigns and has greatly contributed to the reduction of the malaria burden. The most successful countries used global malaria prevention campaign strategies, which resulted in the elimination of malaria in those countries and significantly lessened the burden in others [11]. Additionally, chemoprophylaxis is used for women during pregnancy after quickening, delivered through directly observed treatment doses in the form of intermittent preventive treatment in pregnancy, to prevent malaria infection in pregnancy and thereby reduce the risk of anaemia and other negative birth outcomes [11]. There is also an intervention for children in which antimalarial drugs are administered seasonally in the form of seasonal chemoprevention. This integration and combination of interventions is being used to prevent malaria, and the malaria vaccine has now been introduced to add to the already existing multiple strategies for malaria prevention [11]. The malaria vaccine in Ghana is one of the effective malaria prevention interventions and targets 95% coverage in the pilot implementing districts. Kassena Nankana Municipality is one of only two selected implementing areas in the Upper East Region of Ghana. Since the introduction of the vaccine into the routine immunization programme in May 2019, the municipality has not been able to meet its annualized target consistently in 2019 and 2020. In the first year of implementation, the municipality covered 48.5% for dose one, 49% for dose two and 40.4% for dose three. During 2020, the municipality covered 43.4% for dose one, 40.5% for dose two and 40.2% for dose three [12]. Due to a poor community entry process and weak engagement with community leaders, opinion leaders, identifiable groups and all other stakeholders, most people did not accept the vaccine for their children. Additionally, demand generation, communication, publicity and awareness creation on the importance of the vaccine were poorly done in the communities [12]. In terms of training health staff to administer the vaccine, only a few health staff were trained, and as a result the few trained staff could not cover all the communities to vaccinate the eligible children. As a result of this abysmal performance by the municipality, eligible children are left unreached and unvaccinated. This challenge could have been generated by the health service delivery system, for example health staff knowledge of vaccination schedules, eligibility criteria, staff attitude, data capture and logistics supply. There could also be community factors such as the activities of anti-vaccination groups, vaccine hesitancy, inadequate knowledge of the malaria vaccine and other social, cultural and religious practices that influence vaccine acceptance.
The implications of low coverage include the accumulation of susceptible, unimmunized children, which impedes the planned evaluation of the feasibility of administering the scheduled four doses, of the impact on malaria morbidity and mortality, and of the vaccine's safety when used in the routine vaccination programme, all of which are intended to inform WHO policy direction on scaling up the vaccine. This study therefore sought to assess the predictors of malaria vaccine (RTS,S) uptake among mothers/caregivers with children aged 6 to 24 months in the Kassena Nankana Municipality in the Upper East Region of Ghana. Study design and population The study adopted a cross-sectional design, in which both the exposure and outcome variables are referenced to a single point in time. The rationale for selecting a cross-sectional design was that data collection was done at a particular point in time. The study population was children aged 6 to 24 months whose mothers/caregivers resided in the municipality. The schedule for the malaria vaccine starts with infants at age 6 months and ends at twenty-four months, hence the selection of this age group to determine the factors associated with uptake. Sampling and sample size A probability sampling method was adopted to select the study participants. At the first stage, simple random sampling was used to pick 30 clusters (communities) from the 110 communities in the municipality. Using the balloting method, community names were written on pieces of paper and kept in a container, and after vigorous shaking all 30 communities were selected. In the second stage, allocation proportional to population size was used to determine the number of children studied in each selected cluster. The cumulative population of the selected 30 clusters was determined, and each cluster's population was divided by the cumulative population and multiplied by the sample size to determine the number of study participants in that cluster. Within a cluster, sampling of participants started from the centre of the selected community and followed the direction of a spun pen or pointer; houses in that direction were selected for the survey using the principle of the next nearest household. Children were taken on sequentially until the planned cluster sample size was attained. House-to-house visits and face-to-face interviews were done with mothers/caregivers who had eligible children. In a household where there was more than one mother with an eligible child, simple random sampling, that is balloting, was done to pick one for the study. The sample size was determined using Yamane's formula [13]: n = N / (1 + N α²), where n = sample size, N = study population (8311 targeted children aged 0-24 months), and α = margin of error, set at 0.05 with a significance level of 95%. Hence, a sample of 422 children aged 6-24 months was selected and studied. Data collection and analysis The study adapted the World Health Organization (WHO) cluster survey questionnaire for data collection. Seven data collectors were trained on the study protocols and the questionnaires. In addition, a pretest of the tool was done after the training to ensure understanding of the tools. Data were collected electronically by field data collectors using the KoboCollect application on Android mobile devices (https://www.kobotoolbox.org).
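For illustration, the sample-size calculation with Yamane's formula and the proportional-to-population-size allocation described above can be sketched in Python as follows; the cluster populations shown are hypothetical placeholders, and the study's reported total of 422 is simply taken as the number to be allocated.

import math

def yamane_sample_size(population, margin_of_error):
    # Yamane's formula: n = N / (1 + N * e^2).
    return math.ceil(population / (1 + population * margin_of_error ** 2))

def pps_allocation(cluster_populations, total_sample):
    # Allocate the total sample to clusters in proportion to their population size.
    cumulative = sum(cluster_populations.values())
    return {name: round(total_sample * pop / cumulative)
            for name, pop in cluster_populations.items()}

print(yamane_sample_size(8311, 0.05))          # minimum size for N = 8311 and a 5% margin of error

# Hypothetical populations for three of the 30 selected communities.
clusters = {"community_A": 310, "community_B": 145, "community_C": 220}
print(pps_allocation(clusters, total_sample=422))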
After informed consent and child assent were sought and received from the mothers/caregivers, the selected children's mothers responded to interviewer-administered questionnaires. Information on malaria vaccine immunization was obtained through review of the children's vaccination record books and the mothers' recall and verbal reports. The mothers were asked to show the interviewer the child health record booklet with immunization dates to authenticate the uptake of the malaria vaccine. After the data collection in the field, the administered questionnaire data were downloaded, checked for completeness and cleaned. The data extracted from the KoboCollect server were analysed using the Statistical Package for the Social Sciences (SPSS) version 25.0 [14]. In addition, Stata version 13 [15] was used to calculate the odds ratios. The odds ratios were obtained from a binary logistic regression model of the dependent variable, RTS,S uptake, against the independent variables to determine the predictors of malaria vaccine uptake, with p-values and 95% confidence intervals. Descriptive and inferential statistics were computed and presented as frequencies and percentages in tabular form. Probability values less than or equal to 0.05 were considered statistically significant. Results Socio-demographic characteristics of respondents were captured during data collection and analysed in Table 1 as independent variables which can directly or indirectly influence malaria vaccine uptake by eligible children. The findings show that the mean age of respondents was 27 ± 5 years and the average age of the children was 15 ± 8 months. Regarding the uptake of RTS,S, 94% of the children received the full doses of the vaccine, and one major reason for not receiving all the doses was sickness of the child (Table 2). There were strong associations between RTS,S uptake and health education on the malaria vaccine, being given the option to accept the vaccine, and the attitude of health staff during immunization sessions (Table 3). Community factors that hinder RTS,S vaccine utilization among the study group are presented in Table 4. There was a statistically significant association (p < 0.005) for respondents who were 'somewhat confident' in the effectiveness of the RTS,S vaccine. There was also a strong association for those who were 'not confident at all' and still took the RTS,S vaccine (p < 0.001) (Table 4). The attitude of health staff during immunization sessions being reported by respondents as "disappointing" was a significant predictor of malaria vaccine uptake (P = 0.003). Education on the malaria vaccine and giving respondents the option to accept the malaria vaccine were also statistically significant predictors of malaria vaccine uptake (P < 0.001) (Table 5). Caregivers who had not received education on the malaria vaccine were 9.69 times more likely to receive the vaccine compared to those who had received education [cOR = 9.69 (CI 3.496-25.425), p < 0.001]. Again, those who were not given the option to accept the malaria vaccine were 7.04 times more likely to receive the vaccine compared to those who were given the option [cOR = 7.04 (CI 2.759-17.476), p < 0.001]. Surprisingly, caregivers who rated the attitude of health staff during immunization as "disappointing" were 20.91 times more likely to take the vaccine compared to those who rated it excellent [cOR = 20.91 (CI 0.244-1647.49), p = 0.003].
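The crude odds ratios reported above were obtained from the binary logistic regression described in the analysis section; for a single binary predictor, an equivalent crude odds ratio with a Wald-type 95% confidence interval can be computed directly from a 2 × 2 table, as in the Python sketch below. The counts are invented for illustration and are not taken from Tables 3-5.

import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    # 2x2 table: a = exposed & vaccinated, b = exposed & not vaccinated,
    #            c = unexposed & vaccinated, d = unexposed & not vaccinated.
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical counts: education on the malaria vaccine versus full RTS,S uptake.
odds_ratio, ci = crude_odds_ratio(a=350, b=20, c=40, d=12)
print(f"cOR = {odds_ratio:.2f}, 95% CI = ({ci[0]:.2f} - {ci[1]:.2f})")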
Level of uptake of RTS,S This study assessed the level of uptake of the RTS,S vaccine and the factors related to it in children 6 months to 24 months old in the Kassena Nankana Municipality of the Upper East Region. When the children's immunization statuses were confirmed using vaccination cards and mothers' recall, the findings showed that fully immunized coverage was high (94%) among the study participants. This figure is in line with WHO's Global Vaccine Action Plan, which proposes that countries attain about 90 percent and districts about 80 percent fully immunized children by the year 2020 [16]. The findings from this study are comparable to those of a study in Sunyani [16] in the Bono Region of Ghana, which indicated that uptake of the first RTS,S dose was 94.1 percent; that figure, however, declined to 90.6 percent for the second dose and 78.1 percent for the third dose. The high immunization coverage in this study therefore implies that there was herd immunity among children in the district, and the incidence and severity of vaccine-preventable diseases like malaria are consequently expected to be low. Although this finding clearly indicates that the municipality may not have as high a number of unimmunized children as the administrative coverage suggests, there may be unimmunized children whom the municipality needs to strategically trace and immunize in order to reach every child in the catchment area and achieve optimal immunization coverage. The municipality is largely rural, with 80 percent of its facilities being Community-based Health Planning and Services (CHPS) facilities; therefore, through the activities of CHPS (home visits, defaulter tracing and vaccination, among others), unimmunized children can easily be traced and immunized. Immunization performance for the RTS,S vaccine based on children's vaccination cards and mothers' memory recall was high, as presented in the results section of this write-up. These findings contrast with the municipal vaccine administration rate, which had been recording low vaccination coverage relative to its targets, as presented in the Upper East Regional Health Directorate Annual Report, 2020. In the first year of implementation (2019), the municipality covered 48.5%, 49% and 40.4% for the first, second and third dose respectively, and in the second year (2020), the municipality covered 43.4%, 40.5% and 40.2% for the first, second and third dose, respectively [12]. The discrepancies between the municipality's low administrative coverage and the high coverage found in this study are unexplained, because the study did not cover immunization service providers' perspectives on the low administrative coverage. However, the findings suggest that staff with inadequate knowledge being in charge of handling immunization data, together with inadequate supervision and monitoring, may contribute to low administrative coverage. Health staff may also have failed to appropriately screen vaccination status when mothers/caretakers came to health facilities with their children for preventive and curative services, which may further contribute to low administrative coverage. Another possible explanation is poor data management, including erroneous population indicators.
Further statistical analysis, using odds ratio calculation and chi-square tests to determine which variables were significantly related to the level of malaria vaccine uptake, showed that 'education on the malaria vaccine' (P < 0.001) and 'caregivers being given the option to accept the malaria vaccine' (P < 0.001) were the health service factors significantly associated with uptake of the vaccine. The community factor that showed a statistically significant association with malaria vaccine uptake was confidence in the effectiveness of the vaccine (P < 0.005). In addition, the attitude of health staff during immunization sessions being reported by respondents as "disappointing" was a significant predictor of malaria vaccine uptake (P = 0.003). These associations provide cues and are a wake-up call for public health workers in the municipality to strengthen education on vaccines in communities, especially among caregivers/mothers. A good working relationship and engagement with clients, as well as improved interpersonal communication, should be the hallmark for staff in achieving the set goal of reaching every eligible child in the municipality with the required vaccines, including the malaria vaccine. Importantly, educating caregivers to keep immunization cards as a source of documentation and records for tracking children's immunization to completion should be prioritized. Education on second-year-of-life immunization should also be prioritized to lessen the risk of malaria episodes in the municipality. Health service delivery factors associated with RTS,S uptake Factors affecting immunization services are often obstacles within the health service and caregivers' factors. Among the study participants there was a high acceptance rate of the malaria vaccine because most of the respondents had been educated by the health staff on the vaccine. The majority of the respondents said their children did not experience any adverse events following immunization, which could otherwise have affected vaccine uptake. Additionally, most of the respondents were satisfied with the attitude of the health staff. Most of the respondents who missed full uptake did so because their children were sick or were not around the RTS,S implementation catchment areas. These findings contrast with the assertion of Van Den Berg et al. [6] that the reasons given by caregivers for not completing their wards' immunization range from obstacles such as inconvenient timing of immunization services, mothers being too busy and long waiting times, to inadequate information, such as the place of vaccination being unknown, and lack of motivation, such as postponement of immunization services. Additionally, no respondent mentioned poor health staff attitude, inadequate vaccination skills causing adverse events following immunization, unapproved charges by health staff, unplanned or poorly communicated changes in schedules, or migration as contributing to non-completion of immunization. The study did not replicate the finding of Ballou's study [17] that caregivers paid for immunization services; nevertheless, lack of information on immunization and time constraints were supported by these findings. The findings once again showed that people were reachable with health services, even though a few nomads and seasonal migrants were encountered, and a strategic approach is needed to reach all eligible children for immunization. This buttresses the assertion by Dimala et al.
[18] that immunization coverage is affected by several factors and that, to achieve the optimal coverage level, systems thinking and systems strengthening are imperative. As stated by Abdulkadir et al. [19], factors affecting immunization are frequently due to perceived and real deficits within the health sector, such as inadequate information on immunization services. This is often the main hindrance to attaining full immunization of children and women: caregivers may not know the place, date or time of vaccination. This study found otherwise, as the respondents were well aware of and educated on the immunization services. The reasons that hindered caregivers from completing their children's immunization and created gaps in immunization services can be categorized under health system factors and caregivers' factors; therefore, for successful implementation and sustainability of EPI services, identifying and addressing these gaps is paramount [20]. The findings support the above assertion, and if steps are taken to address the identified gaps, immunization services will be improved. Community factors influencing RTS,S uptake Access to immunization services was good, as most of the respondents indicated that they spent little time accessing immunization services. Factors affecting immunization services are often obstacles within the health service and caregivers' factors. However, community factors such as negative rumours, vaccine hesitancy, and religious and cultural disbelief in orthodox medicines, including vaccines, more often than not affect vaccine uptake. In this study, the vaccine acceptance rate was high among the study participants. The findings once again buttress Mukungwa's pronouncement [21] that mothers' and caregivers' awareness of the immunization schedule and consistency of immunization schedule delivery increase the probability of children being fully immunized at the appropriate age. However, the education level of the respondents did not have a significant impact on the level of RTS,S uptake, even though it has been noted [22] that people well placed in the social class in terms of education and jobs are likely to use health services more than those perceived to be in a lower social class. Moreover, individuals' cultural beliefs also affect the level of use of services such as immunization. The findings of this study are similar to those of Dimala et al. [18], who indicated that the acceptance and uptake of the RTS,S vaccine may be improved if caregivers' perceptions about vaccines and their importance are adequately informed and supported through engagement and education. A study on the uptake of the RTS,S vaccine conducted in Sunyani by Tabiri et al. [16] found that uptake of the first and second doses met the WHO target, but uptake of the subsequent doses was low as a result of an increasing negative perception in communities that children are receiving too many injectable vaccines, which negatively affects uptake. The present study found similarly high uptake, but with drop-out as a result of missed opportunities due to vaccine shortages and competing health programmes and interventions. Health workers play very important roles in working closely with their communities, as the respondents consistently cited health staff as their dependable sources of health information on vaccination; it is therefore imperative for health staff to educate communities, mobilize their support for vaccination and encourage the use of immunization services.
This requires health staff and other people who have tried and tested the system to keep caregivers informed of the places and times at which they need to bring children for vaccination [23]. According to Van Den Berg et al. [6], integrating community values into vaccination exercises will help address real challenges in most trials and pilot implementations of interventions, leading to positive health outcomes. As communities begin to show concerns about the introduction of more vaccines, health authorities should make concerted efforts to systematically engage all stakeholders, including community leadership, opinion leaders and all who matter, to build consensus; this will help dispel most of the negative rumours about the safety of vaccines. Similarly, Meñaca et al. [24] identified a number of challenges, such as hesitancy, rumours and misinformation about the RTS,S vaccine among some people, which could be addressed through a planned communication strategy and simple messages. The recommendations of Dimala et al. [18] should also be adopted: operational implementation of the RTS,S vaccine requires careful consideration of the social, religious and cultural perspective of each community through community engagement and involvement, and the establishment of an adequate health information system, in an acceptable form, through consistent communication channels. In essence, the findings of this study and the existing literature point to the fact that, for a pilot district to achieve high uptake of the malaria vaccine, there is a need to harmonize health service goals and actively engage communities to address their issues and concerns about the growing number of injections among childhood vaccinations. A limitation of this study was the possibility of recall bias in mothers'/caregivers' verbal reports, as some might not have recalled past events correctly. This is inherent in the study design used, but it is unlikely to have substantially affected the outcome of the study. Conclusion Generally, malaria vaccine uptake among children was high in the Kassena Nankana Municipality, which can contribute to protection against malaria episodes among children and against preventable deaths. However, full uptake of the vaccine by age two, especially of the fourth dose, declined as the children aged, indicating a high drop-out rate. The findings of this study have health policy implications for the health system in Ghana. The gaps identified among caregivers regarding routine immunization services may lead to low use of health information for planning and decision-making. The low administrative immunization coverage data may also have implications for funding and for the supply of vaccines, which can be negatively affected during outbreaks or emergencies. The study also buttresses and confirms most of the assertions in the literature on the Expanded Programme on Immunization; these findings provide an opportunity to address the existing gaps and improve the overall health system.
2022-11-16T14:54:09.696Z
2022-11-16T00:00:00.000
{ "year": 2022, "sha1": "e2b693236cc897c9dd2dd4833d5422a3819392ee", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "e2b693236cc897c9dd2dd4833d5422a3819392ee", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
117848477
pes2o/s2orc
v3-fos-license
Random matrix theory for systems with an approximate symmetry and widths of acoustic resonances in thin chaotic plates We discuss a random matrix model of systems with an approximate symmetry and present the spectral fluctuation statistics and eigenvector characteristics for the model. An acoustic resonator like, e.g., an aluminium plate may have an approximate symmetry. We have measured the frequency spectrum and the widths for acoustic resonances in thin aluminium plates, cut in the shape of the so-called three-leaf clover. Due to the mirror symmetry through the middle plane of the plate, each resonance of the plate belongs to one of two mode classes and we show how to separate the modes into these two classes using their measured widths. We compare the spectral statistics of each mode class with results for the Gaussian orthogonal ensemble. By cutting a slit of increasing depth on one face of the plate, we gradually break the mirror symmetry and study the transition that takes place as the two classes are mixed. Presenting the spectral fluctuation statistics and the distribution of widths for the resonances, we find that this transition is well described by the random matrix model. I. INTRODUCTION Random matrix theory has been used with success in a variety of physical systems for the description of certain generic features of spectral correlators which are determined by the underlying symmetries of the Hamiltonian [1]. In Sec. II of this paper we discuss a random matrix model of systems with an approximate symmetry. A problem like this is found, e.g., in nuclear physics where isospin symmetry, characteristic of the strong interactions, is only approximate due to Coulomb effects [2]. Isospin mixing was analysed by Guhr and Weidenmüller in 1990 using a random matrix approach [3]. They used a random matrix model to describe experimental data and to estimate the average symmetry-breaking matrix element, i.e., the average Coulomb matrix element. The random matrix model discussed here differs from the one considered in [3], and we comment on this difference. In addition to the spectral fluctuation statistics for the model we consider a measure of the asymmetry of the eigenvectors and describe it using simple analytical arguments. In Sec. III we present two experimental studies of acoustic resonances in thin aluminium plates. The plates have the shape of the so-called three-leaf clover, see Sec. III B. Frequency spectra of acoustic resonators were first compared with random matrix results by Weaver in 1989 [4]. Further experimental studies of the fluctuation properties of acoustic resonance spectra in blocks of aluminium and quartz were made by Ellegaard and coworkers [5,6]. In Ref. [7] the level spacing distribution measured in [6] was compared with the random matrix model of [3]. In this paper we focus on acoustic plates which in many respects are simpler than the three-dimensional resonators mentioned before. Acoustic resonances in plates were investigated by Bertelsen et al. [8]. We present a short review of the theory of acoustic waves in thin isotropic plates and discuss the characteristics of the different types of resonances. We find experimentally that modes can be separated into two different classes which each have a characteristic dependence of their widths on the damping by the air surrounding the plate. One class of modes has widths which are almost independent of the air-pressure, and the other class has widths with a strong dependence on the air-pressure. 
We argue that these modes are in-plane and flexural modes, respectively. In the first experiment, we measure the spectral fluctuation statistics for both mode types individually and compare with well-known results for the Gaussian orthogonal ensemble (GOE). Then, in a second experiment, we mix the two mode classes by gradually cutting a slit on one face of the plate. We thus observe the transition from two separate classes of modes to one class of modes. This transition is studied by comparing the data to the random matrix model for systems with an approximate symmetry for both the spectral fluctuation statistics and the width distribution. The latter is described using eigenvector information from the model. II. RANDOM MATRICES AND APPROXIMATE SYMMETRIES A. The random matrix model Let H be a random real symmetric N × N matrix with the following block-structure:

H = ( D_A + A      gC      )
    ( gC^T         D_B + B ),    (1)

where D_A and A are random N_1 × N_1 matrices, and D_B and B are random N_2 × N_2 matrices. The random matrix C is N_1 × N_2, and the coupling strength, g, is a real parameter. Note that N ≡ N_1 + N_2. The elements of the diagonal matrices D_A and D_B are drawn uniformly on the interval [−0.5, 0.5] and ordered in increasing order for each block. This choice of probability distribution leads to level spectra for D_A and D_B which, except for small end-point corrections, are described by the Poisson statistics appropriate for a sequence of uncorrelated energy levels. The elements of A, B, and C are Gaussian distributed with zero mean. The variance, σ², of the distribution of the diagonal elements of A and B scales as 1/N². The variance of the distribution of the off-diagonal elements of the matrices A and B and the elements of C is set to half the value of σ². The diagonal contributions D_A and D_B in H are intended to mimic the effects of the kinetic energy operator, and the Gaussian distributed elements of A and B simulate "interactions" due to boundary conditions. Since the elements of D_A and D_B are "sufficiently" small compared with the Gaussian distributed elements, the short-range spectral fluctuation statistics are identical to the statistics obtained for two superimposed GOE spectra (2 GOE) when g = 0 and to GOE statistics when g = 1. (See Sec. II B for a more detailed discussion of the spectral fluctuation statistics.) The average distance between neighbouring levels scales as 1/N because of the presence of the diagonal matrix elements. With a finite value of g both the root mean square (RMS) symmetry-preserving matrix element and the RMS symmetry-breaking matrix element also scale like 1/N, and the transition from 2 GOE statistics to GOE statistics takes place as a function of g independent of the value of N. This is not the case if the kinetic energy terms are not present, as in a random matrix model, like the one used in Refs. [3,7], with two GOE-like diagonal blocks coupled by Gaussian-distributed matrix elements. For such a model the ratio between the RMS symmetry-breaking matrix element and the average distance between neighbouring levels for the unperturbed problem scales like g√N. The transition from 2 GOE to GOE spectral fluctuation statistics takes place as a function of this ratio. If g is independent of N, it follows that the ratio scales like √N, and in particular that it goes to infinity in the large-N limit for any finite value of g.
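A minimal numerical sketch of one realization of this matrix model, written in Python with numpy, is given below; the matrix sizes and the value of σ² are taken from the simulations described in the next subsection, and the helper names are our own.

import numpy as np

def symmetric_gaussian(m, sigma2, rng):
    # Real symmetric matrix with diagonal variance sigma2 and off-diagonal variance sigma2/2.
    a = np.zeros((m, m))
    iu = np.triu_indices(m, k=1)
    a[iu] = rng.normal(0.0, np.sqrt(sigma2 / 2.0), iu[0].size)
    a = a + a.T
    a[np.diag_indices(m)] = rng.normal(0.0, np.sqrt(sigma2), m)
    return a

def sample_H(n1, n2, g, sigma2, rng):
    # H = [[D_A + A, g C], [g C^T, D_B + B]] with ordered uniform diagonals D_A and D_B.
    d_a = np.sort(rng.uniform(-0.5, 0.5, n1))
    d_b = np.sort(rng.uniform(-0.5, 0.5, n2))
    A = symmetric_gaussian(n1, sigma2, rng)
    B = symmetric_gaussian(n2, sigma2, rng)
    C = rng.normal(0.0, np.sqrt(sigma2 / 2.0), (n1, n2))
    H = np.zeros((n1 + n2, n1 + n2))
    H[:n1, :n1] = np.diag(d_a) + A
    H[n1:, n1:] = np.diag(d_b) + B
    H[:n1, n1:] = g * C
    H[n1:, :n1] = g * C.T
    return H

rng = np.random.default_rng(seed=0)
n1, n2 = 100, 200                        # N_2 = 2 N_1 = 200, as in the simulations below
sigma2 = 16.0 / (n1 + n2) ** 2           # sigma^2 = 16 / N^2
eigenvalues = np.linalg.eigvalsh(sample_H(n1, n2, g=0.1, sigma2=sigma2, rng=rng))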
To observe a smooth transition from 2 GOE statistics to GOE statistics independent of N, it is thus necessary that the ratio between the RMS symmetry-breaking matrix element and the RMS symmetry-preserving matrix element scales like 1/√N in a random matrix model without kinetic energy terms. B. Spectral fluctuation statistics To describe the short-range spectral fluctuation statistics we consider the standard level spacing distribution, and as a measure of the long-range spectral fluctuation statistics we choose to look at the ∆_3-statistic [9]. Numerical calculations of the level spacing distribution and the ∆_3-statistic for N_2 = 2N_1 = 200 with σ² = 16/N² are shown in Figs. 1 and 2 together with the exact results for the GOE and two superimposed GOE spectra with fractional densities 1/3 and 2/3, respectively. The ensembles in the simulations consisted of 500 matrices, and 150 eigenvalues from the "middle" of the spectrum for each matrix were considered. Figures 1(a) and 1(d) show that the level spacing distribution for the random matrix model is identical to the 2 GOE result when g = 0 and to the GOE result when g = 1. Similarly Fig. 2 shows that this is also the case for ∆_3(L) for L ≤ 20. It is clear from Fig. 1 that the level spacing distribution looks very much like the level spacing distribution for the GOE even when g = 0.1. It is well known that ∆_3(L) for a model with diagonal terms like D_A and D_B deviates from the corresponding GOE result for large values of L [10]. The value of L where this transition to more Poisson-like behaviour sets in is referred to as the Thouless energy. For g = 1 and σ² = 16/N² we find a Thouless energy of about 35. A different choice of the variance, σ², leads to a picture for the spectral fluctuation statistics similar to the one shown in Figs. 1 and 2 as long as L is less than the Thouless energy. C. Eigenvector information As a measure of the asymmetry of the eigenvectors of H we define a quantity, a, which we denote the asymmetry number. Consider an N = N_1 + N_2 dimensional vector (v_1, v_2, ..., v_N) of unit length, and let a be defined by

a = Σ_{i=1}^{N_1} v_i² − Σ_{i=N_1+1}^{N} v_i².    (2)

For two decoupled systems described by the subspaces spanned by the first N_1 and the last N_2 basis vectors, respectively, the distribution P_A(a) has a δ-function peak at a = −1 and one at a = 1. For the GOE it has a single peak at a = (N_1 − N_2)/N. These features are obvious in Fig. 3, which shows P_A calculated numerically for the ensembles considered in Sec. II B. Notice that the smaller of the two peaks present when g = 0 has almost vanished when g = 0.1, whereas the strength of the largest peak is reduced to half its original value. Imagine that two uncoupled classes of resonances have width distributions P_F and P_I, respectively. The width distribution of all the resonances, P(Γ), is the sum of P_F and P_I when the two classes are uncoupled. The width distribution, P(Γ), changes if the two classes are coupled, and in our random matrix approach we model P(Γ) using the asymmetry distribution:

P(Γ) = ∫ da ∫ dΓ_F ∫ dΓ_I P_A(a) P_F(Γ_F) P_I(Γ_I) δ(Γ − (1+a)/2 Γ_F − (1−a)/2 Γ_I).    (3)

Notice that the integral reduces to the weighted sum of P_F and P_I if P_A is the sum of two δ-functions as in the case g = 0 shown in Fig. 3(a), and that P is expressed in terms of P_A if P_F and P_I are δ-functions. We now consider the case N_1 = N_2 and describe the characteristic properties of the asymmetry distribution using a simple analytical model and arguments from perturbation theory.
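The asymmetry number and the nearest-neighbour spacing distribution just introduced can be evaluated directly from the eigen-decomposition, as in the Python sketch below. To keep the example self-contained it diagonalizes a simplified stand-in for H (the Gaussian blocks and coupling of Sec. II A, but without the uniform diagonal terms), and the crude unfolding by the mean spacing of the central levels is an illustrative shortcut.

import numpy as np

def asymmetry_numbers(eigvecs, n1):
    # a = (squared weight in the first block) - (squared weight in the second block), cf. Eq. (2).
    return (eigvecs[:n1] ** 2).sum(axis=0) - (eigvecs[n1:] ** 2).sum(axis=0)

def spacing_distribution(eigvals, keep=150, bins=30):
    # Nearest-neighbour spacings of the central 'keep' levels, rescaled to unit mean spacing.
    mid = len(eigvals) // 2
    central = np.sort(eigvals)[mid - keep // 2: mid + keep // 2]
    s = np.diff(central)
    s = s / s.mean()                              # crude local unfolding
    return np.histogram(s, bins=bins, range=(0.0, 4.0), density=True)

# Simplified stand-in for H: two symmetric Gaussian blocks coupled by g*C, no D_A or D_B.
rng = np.random.default_rng(seed=1)
n1 = n2 = 150
g = 0.1
sigma = 4.0 / (n1 + n2)                           # sigma = 4/300, as in the text
X = rng.normal(0.0, sigma, (n1, n1))
Y = rng.normal(0.0, sigma, (n2, n2))
blockA = (X + X.T) / 2.0
blockB = (Y + Y.T) / 2.0
C = rng.normal(0.0, sigma / np.sqrt(2.0), (n1, n2))
H = np.block([[blockA, g * C], [g * C.T, blockB]])

eigvals, eigvecs = np.linalg.eigh(H)              # columns of eigvecs are orthonormal eigenvectors
a = asymmetry_numbers(eigvecs, n1)
P_A, a_edges = np.histogram(a, bins=40, range=(-1.0, 1.0), density=True)
P_s, s_edges = spacing_distribution(eigvals)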
Figure 4 shows numerical calculations of the distribution of the asymmetry numbers for N_1 = N_2 = 150 for three values of g. We considered ensembles of 500 random matrices, and, as in the numerical simulations described in Sec. II B, we focused on the 150 eigenvalues in the "middle" of the spectrum. The distribution P_A(a) increases linearly as a function of g close to a = 0, as shown in Fig. 4(d). The small fraction of the eigenvectors for which the asymmetry number is close to zero are most likely superpositions of an eigenvector of D_A + A and an eigenvector of D_B + B with eigenvalues lying close in the unperturbed spectrum where g = 0. Let the unperturbed spacing between the two eigenvalues be denoted ∆ and consider the matrix which connects the two states when the symmetry-breaking perturbation is introduced:

( 0   c )
( c   ∆ ).    (4)

The distribution, P_C, of the matrix elements, c, is Gaussian with zero mean and variance (gσ)²/2. In the numerical simulation shown in Fig. 4 the eigenvectors come from the "middle" of the eigenvalue spectrum where the level density is almost constant, and we thus assume that the level density for each diagonal block is equal to a constant which we denote R_1. In this case the spacing, ∆, comes from the distribution

P(∆) = R_1 exp(−π R_1² ∆²),    (5)

where the variance ∆_0² = 1/(2πR_1²). In the two-dimensional approximation the eigenvectors have the asymmetry numbers

a = ±∆/√(∆² + 4c²),    (6)

and thus the distribution of the asymmetry numbers becomes

P_A(a) = (√2 gσ/(π∆_0)) / [ (1 − (1 − 2g²σ²/∆_0²) a²) √(1 − a²) ].    (7)

When a = 0 the expression reduces to

P_A(0) = √2 gσ/(π∆_0) = 2gσR_1/√π,    (8)

which is in perfect agreement with the numerical simulation shown in Fig. 4, for which σ = 4/300 and R_1 = 140. The majority of the eigenvectors, which have values of |a| ≈ 1, can be described using perturbation theory. The shift in the position of the peak of the distribution away from a = 1 is to first order proportional to g², since the average correction to a given state from a state from the other block is proportional to g. A. Acoustic resonances in thin plates In a homogeneous and isotropic three-dimensional medium, sound waves obey the elastomechanical wave equation for the vectorial displacement field u:

ρ ∂²u/∂t² = μ ∇²u + (λ + μ) ∇(∇ · u),    (9)

where λ and μ are the Lamé coefficients, ρ is the density, and we have assumed no external forces. Equation (9) allows for two types of wave motion: longitudinal and transverse. (In the literature the transverse modes have names like shear or secondary, and the longitudinal modes are often called pressure or primary.) Longitudinal waves always travel faster than transverse waves. For aluminium, which we consider in this paper, the difference is approximately a factor of 2. In the bulk, the two types of waves propagate independently. However, upon reflection at a boundary, mode conversion takes place: an incident wave that is purely longitudinal or transverse will, in general, give rise to two reflected waves, one longitudinal and one transverse. Moreover, their angles of reflection will be different due to their different velocities, as dictated by Snell's law. We now briefly present some facts about elastic waves in thin infinite plates, see, e.g., [11] and the recent studies in Refs. [8,12]. Three types of modes exist in an infinite isotropic plate, when considered at frequencies below the first critical frequency, i.e., when one half of a transverse wavelength is larger than the thickness of the plate. The flexural modes have displacement mainly normal to the plane of the plate, but they also have a small in-plane component. These modes are anti-symmetric with respect to reflection through the middle plane of the plate.
(In the literature the flexural modes are sometimes called bending modes.) The in-plane modes are symmetric with respect to reflection through the middle plane of the plate and consist of two mode types. The in-plane transverse modes have displacement exactly in the plane of the plate, and the in-plane longitudinal modes have displacement mainly in the plane of the plate, but they also have a small out-of-plane component. Now consider a finite plate. As mentioned above, the boundaries introduce mode conversion. For a finite plate there is thus the possibility of a coupling between the different mode classes. In Ref. [8] it was concluded, first, that the flexural modes are uncoupled from the in-plane modes and, second, that the in-plane longitudinal modes couple to the in-plane transverse modes. The densities of flexural modes and in-plane modes were calculated theoretically and found to be of the same order [8]. These results explain the spectral fluctuation statistics measured in Ref. [8] where resonances, i.e., both flexural and in-plane modes, of a quarter of a thin Sinai stadium plate were investigated. In Sec. III C we explain how to separate the flexural and in-plane modes experimentally using their measured widths. This technique allows us to measure the number of modes of the two types separately and to compare these numbers to the theoretical predictions of Ref. [8]. It also enables us to study the spectral statistics and the width distributions for the two classes of modes independently and to find out if the flexural modes are in fact uncoupled from the other modes. B. Acoustic systems and experimental technique For the experiments, we use two aluminium plates of different thickness cut in the shape of the three-leaf clover shown in Fig. 5. This billiard, which was first considered in Ref. [13], was chosen because it is known to be classically chaotic and, when R ≥ r, it has no continuous families of periodic orbits [14]. Thus, we have chosen r = 70 mm and R = 80 mm. The area of the plates was 8250 ± 100 mm 2 , and the circumference was 390 ± 3 mm. The plates were 1.5 mm and 2 mm thick. The choices of r and R and the thickness are important for the experiment in so far as they determine the relative densities of the two mode classes and also the total number of modes. In our case, these parameters were chosen to give many modes for the purpose of producing significant statistics while keeping the density of in-plane modes approximately equal to the density of flexural modes in the frequency range (300 kHz -600 kHz) where our transducers are most effective. Aluminium was chosen for the plates because it is isotropic and very easy to machine, while maintaining a high Q value; at 500 kHz the Q value measured in vacuum is around 10 4 . There are isotropic materials with much higher Q values, such as fused quartz. However, fused quartz is more difficult to machine and thus not suitable for the symmetry-breaking experiment, where one must remove material from the plate many times in a controlled way. The elastic constants for the two plates cannot be found in standard tables of material properties, since they are not pure aluminium but a special alloy. However, the elastic constants for this alloy were determined by experiment in Ref. [15]. We shall use the values from Ref. [15] for Young's modulus E =70 ± 1 GPa and Poisson's ratio ν =0.330 ± 0.005. The density is 2.698 g/cm 3 [16]. 
The corresponding bulk sound velocities are 3123 m/s for transverse waves and 6200 m/s for longitudinal waves. The experimental setup is in many ways the same as that used for previous experiments as reported in [17]. We use an HP 3589A spectrum/network analyser to measure transmission spectra of acoustic resonators via piezoelectric transducers. The plate rests horizontally, supported by three gramophone diamond styli. This ensures a very small contact area between the plate and the rest of the world, thus making the vibrations of the plate as close to free as possible. The diamond styli are glued to cylindrical piezo ceramics that are polarised along the symmetry axis (z-axis). One such combination functions as transmitter, the two others as receivers. One may wonder if our experimental technique can really measure all modes. In particular, one could question if the in-plane modes, for which the displacement is mainly (or exactly) in the plane of the plate, are detected by our transducers. This question was answered in Ref. [8], where the same experimental technique is used. The authors find that all modes are detected. To understand this, one can imagine what happens microscopically when strain is passed from the plate to the piezoelectric component through the diamond stylus. Obviously, there can be no slip between the tip of the stylus and the plate. If there were indeed slip, there would also be friction. The diamond would then quickly drill a hole in the plate, and this is not observed in the experiments. In fact, after many days of oscillations at frequencies of several hundred kHz, the plate is completely intact. Since the base of the diamond stylus is fixed to the piezo electric component, the diamond stylus undergoes a wiggling motion which deforms the piezo electric component in a complicated way, including compression along the z-axis. In both of the experiments the temperature was room temperature, i.e., it was not kept constant but could fluctuate by a few degrees. Obviously, the temperature is important in these measurements, since both the size of the plate and the elastic constants depend on the temperature, and changes in these parameters affect the eigenfrequencies. However, for aluminium thermal expansion is the dominant effect, and to first order eigenfrequencies shift locally by the same amount. Since we are not interested in single eigenfrequencies but only in differences between them, this shift has no influence on our results. The plate is placed in a vacuum chamber, which allows control of the pressure of the air surrounding the plate. At pressures lower than 10 −2 Torr air damping is insignificant compared to intrinsic losses and losses to the supports. Therefore, we shall refer to such low pressure as "vacuum". When the pressure is increased, the flexural modes, that have large out-of-plane oscillations, are strongly affected, since the plate then functions like a loudspeaker generating sound waves in the air. As a result, the amplitudes of the flexural modes decrease with increasing pressure, and the widths of the resonance peaks increase. This is demonstrated in Fig. 6, which shows a section of the transmission spectrum measured for the three-leaf clover in vacuum, at a pressure of 0.5 atm, and at atmospheric pressure. Note that one can label most of the modes into flexural and in-plane by eye. C. 
The separation experiment The first experiment was designed to separate the modes into flexural and in-plane types so that the spectral statistics could be studied separately for each class. To get a statistically significant result, many eigenfrequencies are needed, and it is crucial to find all the levels so that the results are free from missing level effects. For this reason we performed the following measurement sequence. The acoustic transmission spectrum for the plate of thickness 2 mm was measured in the range 300 kHz - 540 kHz. The measurement was carried out first in vacuum, then at a pressure of 0.5 atm, and finally at 1 atm, see Fig. 6. In each case, the measurement was performed twice, using two independent receivers. This procedure gave 6 resonance spectra. Then, the system was subjected to a perturbation: a mass of 14 mg, corresponding to 314 ppm, was removed from one face of the plate using a piece of fine sandpaper. After this, the above procedure was repeated, giving another 6 resonance spectra. Then, in the same way, another perturbation was made, this time removing 43 mg of material, corresponding to 965 ppm. Again, the measurement sequence was carried out, giving a total of 18 resonance spectra. The perturbations applied to the system are small enough that it is possible to follow every resonance peak through all 18 spectra, but large enough that near-degeneracies in one set of spectra are destroyed by the perturbations, giving well-resolved peaks in the next set of spectra. This technique allows us to find all resonances. There are no missing levels. We would like to establish a simple and reliable criterion that permits us to separate the spectrum into flexural and in-plane modes. To this end, each resonance peak is fitted using the so-called "skew Lorentzian" approach [18]. This fit yields a number of parameters of which only the resonance frequency and the width, Γ, are of interest. In Fig. 7 we show the distribution of widths obtained from this fitting procedure for increasing values of the air pressure. It is evident that the widths of one group of modes increase with increasing pressure while the widths of the remaining modes are largely unaffected. We interpret these groups as flexural and in-plane modes, respectively. However, even at atmospheric pressure, it is not possible to separate the modes on the basis of resonance width alone. Since the width distribution does not allow us to separate the flexural modes from the in-plane modes with certainty, we must find a more reliable criterion. Therefore, we consider the individual resonance widths as a function of pressure, see Fig. 8. The curves for the two resonances in Fig. 8 are typical for the measured modes and show that the curves are well approximated by straight lines. Consequently, it makes sense to label them by the slope of the best straight line fit. We then consider the distribution of these slopes, see Fig. 9. The distribution has two well-separated peaks. Large slopes correspond to flexural modes; small slopes correspond to in-plane modes. Based on this information, we choose the "separation" slope to be 11 Hz/atm. In the range 300 kHz to 540 kHz we find 1537 levels for the 2 mm plate, of which 781 are flexural and 756 are in-plane, judging from the separation criterion discussed above.
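The slope-based labelling just described amounts to a straight-line fit of each resonance width against pressure followed by a threshold at the separation slope; the Python sketch below uses the 11 Hz/atm value quoted above, while the example widths are invented for illustration.

import numpy as np

def classify_modes(pressures_atm, widths_hz, threshold_hz_per_atm=11.0):
    # Fit Gamma(p) with a straight line for each resonance; large slopes are taken
    # as flexural modes, small slopes as in-plane modes.
    slopes, labels = [], []
    for gamma in widths_hz:                        # one row of widths per resonance
        slope, _ = np.polyfit(pressures_atm, gamma, deg=1)
        slopes.append(slope)
        labels.append("flexural" if slope > threshold_hz_per_atm else "in-plane")
    return np.array(slopes), labels

pressures = np.array([0.0, 0.5, 1.0])              # vacuum, 0.5 atm, 1 atm
widths = np.array([[10.5, 26.0, 41.2],             # strongly pressure dependent
                   [12.1, 12.8, 13.0]])            # nearly pressure independent
slopes, labels = classify_modes(pressures, widths)
print(slopes, labels)                               # first resonance -> flexural, second -> in-plane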
Reference [8] presents an expansion of the exact dispersion relations for an infinite isotropic plate and also gives the corresponding expansion for the number of modes, i.e., the staircase function, for a finite, thin plate. Using this theoretical expansion, we expect 782 flexural modes and 753 in-plane modes. This is in perfect agreement with the measured numbers, given the uncertainty in the elastic constants of the aluminium alloy and in the dimensions of the plate. Since we can identify the character of individual modes, it is possible to consider the level spacing distribution and the ∆ 3 -statistic separately for each of the two classes. Figures 10 and 11 show the level spacing distribution and the ∆ 3 -statistic for each of the two mode classes compared with the GOE statistics. We find that both the level spacing distribution and the ∆ 3 -statistic for the flexural modes agree with the GOE statistics. This result confirms numerical calculations by Bogomolny and Hugues showing that the flexural modes of a chaotic billiard have GOE fluctuation statistics, see Ref. [12]. The ∆ 3 -statistic for the in-plane modes lies above the GOE curve. This is a bit surprising, because mode conversion is expected to be a strong effect, see, e.g. Ref. [19], which should guarantee that all in-plane modes are strongly coupled and obey GOE statistics. We note that the deviation from the GOE curve seen in the ∆ 3 -statistic does not appear in the spacing distribution; the spacing distribution for the in-plane modes looks much like the spacing distribution for the GOE. The same feature is seen for the random matrix model for systems with an approximate symmetry, see the results for g = 0.2 on Figs. 1(c) and 2. If we think of mode conversion as a mechanism that breaks the longitudinal-transverse "symmetry" for in-plane modes, our results could indicate that this symmetry is not completely broken. An issue to consider in this context is the value of the wavelength, λ, compared to the size, l, of the system. The ratio l/λ is a measure of how "semiclassical" our system is. Roughly, l = 100 mm. Random matrix results are only expected to apply when l/λ ≫ 1. For flexural modes, the typical wavelength is 5 mm, so l/λ = 20. For travelling in-plane waves, the typical transverse wavelength is 7 mm and the typical longitudinal wavelength is 13 mm. Roughly, this leads to l/λ = 10. Thus, in our experiments we have the two length scales separated by at least an order of magnitude. Nevertheless, the factor of 2 between l/λ for flexural and in-plane modes shows that the flexural modes are more "semiclassical" than the in-plane modes, which is another possible explanation for the slight difference observed in the fluctuation properties. We emphasise that the main results of this section are, first, that the flexural and the in-plane modes can be separated and, second, that each of the two mode classes behave as one class of strongly-coupled modes. The fact that the ∆ 3 -statistic lies slightly above the GOE curve for the in-plane modes is a small correction to this picture. In the following section, we regard the in-plane modes as one class of strongly-coupled modes. D. The symmetry-breaking experiment The second experiment was designed for a detailed study of the transition from two independent mode classes to one mode class. The transition takes place as the mirror-symmetry through the middle plane of the plate is broken. 
For this experiment, we used the three-leaf clover plate of thickness 1.5 mm and gradually cut a slit on one side of the plate, as shown in Fig. 12. For the cutting of the slit in the plate we used a computer-controlled milling machine and chose steps in the thickness of 1/40 mm. In our case, this amounted to about 18 mg of material for each increment of the depth of the slit. The mass of the intact plate was 32.8870 g. First, the frequency spectrum was measured for the intact plate in vacuum and at atmospheric pressure. The procedure of cutting and measuring the frequency spectrum at atmospheric pressure was then repeated 9 times. In all measurements the frequency range was 456 kHz -533 kHz and only one receiver was used. The justification for using just one receiver for this experiment is as follows: Removal of material from the plate corresponds to a small perturbation. One can therefore easily follow each resonance peak through the entire scenario, and although a resonance peak can sometimes disappear in one spectrum because the receiver is accidentally placed on a nodal line, it always reappears in subsequent spectra. Thus, the results of this symmetry-breaking experiment are protected against missing level effects. As in the previous experiment, the resonance peaks are fitted and we calculate the distribution of widths, focusing first on the intact plate. In the plot for atmospheric pressure, the modes are separated into two classes: those that have widths smaller than 22 Hz and those that have widths larger than 22 Hz, see Fig. 15. This sets the criterion for separation of the flexural modes from the in-plane modes. We note that for the 1.5 mm plate it is possible to perform the separation purely on the basis of the widths measured at atmospheric pressure. This was not the case for the 2 mm plate. In general, we expect that the widths of the flexural modes at some value of the pressure will depend on many parameters. Among these, the thickness of the plate and the typical wavelength play important roles. However, comparing our two experiments, all of the parameters are the same except for the thickness. The average width for the in-plane modes is about the same in the two cases. At a pressure of 1 atm, the mean width for the flexural modes for the 2 mm plate is around 35 Hz and for the 1.5 mm plate the mean width is 42 Hz. This indicates that damping from the air is larger for thinner plates. We consider first the plate before any material has been removed and find 600 levels in the frequency range 456 kHz -533 kHz. According to our separation rule, this time based solely on the width distribution measured at atmospheric pressure, 310 modes are flexural and 290 are in-plane. Using again the expansion for the number of modes given in Ref. [8], there should be 311 flexural modes and 285 in-plane modes, in perfect agreement with our results. As in Sec. III C we have obtained the level spacing distribution and the ∆ 3 -statistic for the two mode classes separately. We find the same spectral statistics for the 1.5 mm plate as for the 2 mm plate. Figures 13 and 14 show the level spacing distributions and the ∆ 3 -statistics for all the modes for increasing depth of the symmetry-breaking slit. The experimental data are fitted with results for the random matrix model of Sec. II. We have used N 1 = N 2 = 150 and σ 2 = 64/N 2 . Table I summarises the results for the theoretical fits to the spectral statistics for the symmetry breaking experiment. 
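The width distribution of Eq. (3) can be sampled numerically once an asymmetry distribution is available. The Python sketch below follows our reading of that model — each resonance width interpolates between the two class widths with weights (1 + a)/2 and (1 − a)/2 — and uses the Gaussian class-width parameters fitted for the intact plate (quoted below); the asymmetry values are a crude two-peaked placeholder rather than the output of a full matrix ensemble.

import numpy as np

def sample_widths(a_values, n_samples, rng,
                  mean_f=42.0, sd_f=5.8, mean_i=12.2, sd_i=2.8):
    # Gamma = (1 + a)/2 * Gamma_F + (1 - a)/2 * Gamma_I with Gaussian P_F and P_I.
    a = rng.choice(np.asarray(a_values), size=n_samples)
    gamma_f = rng.normal(mean_f, sd_f, n_samples)   # flexural-like class widths (Hz)
    gamma_i = rng.normal(mean_i, sd_i, n_samples)   # in-plane-like class widths (Hz)
    return 0.5 * (1.0 + a) * gamma_f + 0.5 * (1.0 - a) * gamma_i

rng = np.random.default_rng(seed=2)
# Placeholder P_A(a): mostly near +/-1 with a small mixed fraction, mimicking a finite g.
a_vals = np.concatenate([rng.uniform(0.9, 1.0, 450),
                         rng.uniform(-1.0, -0.9, 450),
                         rng.uniform(-0.9, 0.9, 100)])
widths = sample_widths(a_vals, n_samples=20000, rng=rng)
P_Gamma, edges = np.histogram(widths, bins=60, density=True)   # compare with the measured P(Gamma)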
The spectral statistics are well described by the model, and the best fits to the level spacing distribution and to the ∆_3-statistic yield consistent values for the coupling strength g. In Fig. 15 the measured width distributions are compared with the distributions calculated numerically using Eq. (3). To model the width distribution P(Γ) for all modes, we use the asymmetry distribution for the eigenvectors and assume that the in-plane modes and the flexural modes have Gaussian width distributions P_I and P_F. For each of the different cases we fix the value of the coupling strength, g, to the value obtained from the spectral statistics. For the intact plate, see Fig. 15(a), we fitted the width distribution by minimising χ², and found the mean values Γ_I^0 = 12.2 Hz and Γ_F^0 = 42.0 Hz, and the standard deviations σ_I = 2.8 Hz and σ_F = 5.8 Hz. The mean values for the fit agree with the measured average width of 27.1 Hz, see Tab. I. The average width depends on the slit depth as shown in Tab. I, and increases, e.g., by 2.1 Hz when the cut increases from 0 mg to 37.4 mg. To take effects like this into account we have fitted the width distributions by varying the four parameters Γ_I^0, Γ_F^0, σ_I, and σ_F. The only parameter which changed significantly from case to case was the average width of the flexural resonances, Γ_F^0. This seems reasonable since we expect that the modes with large out-of-plane components are damped most by surface perturbations like the cut. For the width distributions shown in Fig. 15(b), (c), and (d) we therefore held Γ_I^0, σ_I, and σ_F fixed, whereas Γ_F^0 was varied so that the average width equalled the measured average shown in Tab. I. The overall features of the width distribution as a function of slit depth are described by the random matrix model. As the slit depth increases, the strength of the width distribution between the two peaks increases while the strength of the peaks decreases. Notice that the value of P(Γ) around Γ = 27.5 Hz increases linearly with g in agreement with Eq. (8). IV. DISCUSSION AND CONCLUSIONS We have presented experimental results for acoustic resonances in two thin aluminium plates of three-leaf clover shape. For both plates we found that the measured numbers of flexural and in-plane resonances were in very good agreement with the theoretical Weyl formula. The two classes of modes were separated using their width or the dependence of the width on the pressure of the air surrounding the plate. The spectral statistics for the flexural modes were in perfect agreement with the GOE result in both cases, whereas the spectra of the in-plane modes seemed to be slightly less rigid than the GOE. The random matrix model of systems with an approximate symmetry modelled the experimental data on the spectral statistics and wave function information from the mixing experiment well. The level spacing distribution, the ∆_3-statistic, and the distribution of widths were all fitted consistently by the numerical random matrix results. The qualitative changes in the width distribution as the depth of the cut was increased could thus be ascribed to the complex mixed nature of the acoustic wave functions. The successful description of the statistics of the frequency spectrum and the widths of the thin acoustic plates may be extended to include other features.
The presence of both a kinetic energy term and an interaction term in the random matrix model is natural not only in the modelling of the mixing process but also to describe the Thouless energy of acoustic resonators due to the localisation of wave functions. In this way the model represents an extension of the simplest random matrix models, like the GOE, to include several important features present in real physical systems.
FIG. 9. The distribution of slopes dΓ/dp has two well-separated peaks, which makes it possible to separate the flexural and in-plane modes. We choose a "separation" slope of 11 Hz/atm. A few inaccurate fits give rise to the small number of negative slopes.
2019-04-14T01:56:45.446Z
2000-11-30T00:00:00.000
{ "year": 2000, "sha1": "fb4cdb3849b977ec660d31b09cf314218f13b25f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "97e7706157916aaa5585b6fab92bfabd26a2f3ff", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248508865
pes2o/s2orc
v3-fos-license
The EORTC QLU-C10D discrete choice experiment for cancer patients: a first step towards patient utility weights Background The European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Utility-Core 10 Dimensions (QLU-C10D) is a novel cancer-specific preference-based measure (PBM) for which value sets are being developed for an increasing number of countries. This is done by obtaining health preferences from the respective general population. There is an ongoing discussion as to whether patients suffering from the disease in question should instead be asked for their preferences. We used the QLU-C10D valuation survey, originally designed for use in the general population, in a sample of cancer patients in Austria to assess the methodology's acceptability and applicability in this target group before obtaining QLU-C10D patient preferences. Methods The core of the QLU-C10D valuation survey is a discrete choice experiment in which respondents are asked to give preferences for certain health states (described by a relatively large number of 10 quality of life domains) and an associated survival time. They are thereby asked to trade off quality of life against survival time. As this might be a very burdensome task for cancer patients undergoing treatment, a cognitive interview was conducted in a pilot sample to assess burden and potential additional needs for explanation in order to be able to use the DCE for the development of QLU-C10D patient preferences. In addition, responses to general feedback questions on the survey were compared against responses from a matched control group from the already completed Austrian general population valuation survey. Results We included 48 patients (mean age 59.9 years; 46% female). In the cognitive interview, the majority indicated that their experience with the survey was positive (85%) and rated overall clarity as good (90%). In response to the general feedback questions, patients rated the presentation of the health states as less clear than matched controls did (p = 0.008). There was no difference between patients and the general population concerning the difficulty in choosing between the health states (p = 0.344). Conclusion Despite the relatively large number of DCE domains, the survey was manageable for patients and allows proceeding with the QLU-C10D patient valuation study. Supplementary Information The online version contains supplementary material available at 10.1186/s41687-022-00430-5. Introduction Health utilities are a core parameter in health economic evaluations. They represent the "value" a specific population assigns to certain health states and are anchored at 1 (representing full/best imaginable health) and 0 (representing being dead). Negative values are possible for health states considered to be worse than dead. These values are used to adjust survival time for the quality of life that time is spent in. The respective outcome parameter in economic studies that combines survival time with quality of life (i.e. health utility) is the so-called Quality-Adjusted Life Year (QALY) [1,2]. One broadly applied method of obtaining health utilities is the use of preference-based measures (PBMs). A PBM consists of two elements: 1. A health state description system (several dimensions of health/health-related quality of life (HRQOL) with different levels of impairment) which is administered like a questionnaire; and 2.
Utility weights for each health state described by that health state description system. Utility weights are developed in valuation studies in which representatives of a certain perspective on health, disease and treatment provide their preferences for sets of health states comprising different levels of functional status and symptom burden. There is, though, some controversy which perspective ought to be considered-the one of patients suffering from the disease being investigated and being familiar with related impairments or the one of the general population representing the tax payers' perspective [3][4][5]. The main argument for the latter is that in a publicly funded health system with the main aim of maximizing health for society the tax payers' perspective is imperative [6]. The opposing view emphasises the importance of having experienced health states before being able to value them, which on the downside, bears the risk of bias through adaptation as well as potential personal benefit [7]. There is some agreement that patient preferences are relevant in the context of clinical decision making [6,8] while decision making in the context of health resource allocation requires general population preferences [7]. Related to this issue is the discussion concerning the relevance of disease-specificity of the health states that need to be valued. Most PBMs, such as the widely used EuroQoL 5-Dimensions (EQ-5D) [9], are generic, i.e. their health state description systems do not depict a specific condition and therefore allow obtained utilities to be used for comparisons across diseases. For certain medical conditions, generic PBMs may not be sufficiently relevant or sensitive [10,11]. In the field of oncology, the availability of the cancerspecific PBM European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Utility-Core 10 Dimensions (QLU-C10D) [12] facilitates the generation of cancer-specific utilities. It is based on the widely used EORTC Quality of Life Questionnaire Core 30 (QLQ-C30) and comprises 13 of the 30 questions of the parent instrument representing 10 HRQOL domains (physical, role, social, and emotional functioning, pain, fatigue, insomnia, appetite loss, nausea, and bowel problems). Valuation studies are currently being performed in various countries in a concerted approach of two international research initiatives, the Multi-Attribute Utility Cancer (MAUCa) Consortium and the EORTC Quality of Life Group (QLG), using a standardised methodology. In line with the mission statement of the EORTC QLG to increase the incorporation of the patient's perspective into outcome assessment in oncology, we aim to obtain QLU-C10D utilities not only from the general population but also from cancer patients themselves. For QLU-C10 valuations a standardised methodology is in place. Preferences are obtained using a discrete choice experiment (DCE) in which respondents are asked to make choices between (hypothetical) health scenarios described by the 10 QLU-C10D domains and different survival times. The DCE has proven to produce reliable results in general population samples [13][14][15] and QLU-C10D values sets so far have been completed for Australia [16], Germany [17], Austria, Italy, and Poland [18], France [19], UK [20] and Canada [21]. The number of 10 domains results in quite complex DCE tasks in which respondent have to consider a range of health issues and weigh these against a survival time. 
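As a toy illustration of the QALY definition given in the introduction above (not taken from the paper), utilities simply weight the time spent in each health state:

```python
# Toy illustration of quality-adjusted life years: QALYs = sum over periods of
# (utility of the health state) x (years spent in it).
def qalys(periods):
    """periods: list of (utility, years); utility is anchored at 1 = full
    health and 0 = dead, and may be negative for states worse than dead."""
    return sum(u * t for u, t in periods)

# e.g. 2 years in a state valued 0.78, then 1 year in a state valued 0.45:
print(qalys([(0.78, 2.0), (0.45, 1.0)]))   # 2.01 QALYs
```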
Considering the complexity of the tasks and cancer patients' compromised health states and difficult personal situations, dealing with trade-offs between HRQOL and hypothetical survival times in the DCE may be emotionally very burdensome and cognitively more challenging for them than for general population respondents. Therefore, before obtaining patient preferences for the QLU-C10D we aimed to investigate the developed valuation methodology for its acceptability and applicability in cancer patients. We investigated the DCE survey in a pilot sample of cancer patients using a mixed-methods approach. We performed this study in Austrian patients since the QLU-C10D patient valuations likewise will be performed in Austria (the Austrian general population valuation has already been completed [18]). Patient sample Cancer patients were recruited in 2017 at the Medical University of Innsbruck, Austria. We aimed to include patients of different ages, diagnosis groups, and treatment modalities. Eligibility criteria for patients were a diagnosis of cancer, age > 18, sufficient command of German, no overt cognitive impairments, and written informed consent. Clinical data was gathered from medical charts comprising information on the diagnosis (ICD-10), treatment approach (curative/palliative), previous and current treatment modalities (e.g. surgery, chemotherapy, or radiotherapy), current medication, and comorbidities (e.g. heart problems, arthritis/rheumatism, or asthma). Sociodemographic information was collected as part of the valuation survey (see below). All respondents provided written informed consent. Ethical approval was obtained from the Medical University of Innsbruck [AN215-0016]. All patients who agreed to participate in the QLU-C10D valuation survey completed the entire interview. Survey completion and cognitive interviews took 30-45 min. Matched general population controls The general population control group was drawn from the QLU-C10D Austrian general population valuation which was performed in 2017 [22]. Recruitment and assessment were performed by Survey Engine (www.surveyengine.com), a company specialized in the web-based conduct of DCEs using internet panels. For the present study, we matched a control group from the 1000 Austrian general population respondents to the patient sample according to age, sex, and education to obtain a case-control ratio of 1:4. QLU-C10D valuation survey For QLU-C10D valuations a standardised methodology is in place.
The survey comprises questions on sociodemographic information, 16 DCE choice sets (selected out of a total of 960; described below), self-report questionnaires on health status (QLQ-C30, Kessler-10, EQ-5D-3L), and feedback questions on the clarity of health state presentations (assessed on a 5-point Likert scale from 'very clear' to 'very unclear'), the difficulty in comparison to other surveys (respondents are asked to compare to any other survey they might have participated in which could be none in the case of cancer patients or could be questionnaire studies they might be familiar with; response options are 'easier' , 'similar' , 'more difficult' , and 'can't tell';), the difficulty to make a decision between the health states (assessed on a 5-point Likert scale including the options 'very difficult' , 'difficult' , 'neither/nor' , 'easy' , 'very easy'), and the strategy on how a decision was reached (options: 'no strategy' , 'focus on a few aspects' , 'focus on highlighted aspects' , 'focus on most aspects' , 'focus on all aspects' , and 'other strategy'). All survey material is provided in the supplementary material. The survey is administered web-based. Each DCE choice set comprises two hypothetical QLU-C10D health profiles (i.e., the 10 HRQOL domains with different levels of impairments on 4 levels from "not at all" to "severe") and survival times in that health states (one, two, five, or ten years). To keep cognitive burden on a manageable level, impairments on only five domains differed between the two options in each choice set (highlighted in yellow). The respondents are asked to select their preferred health profile (see example in Fig. 1). More details on the DCE and the valuation survey can be found in prior publications [12][13][14]16]. The QLU-C10D valuation methodology has been intensively investigated, including testing the impact of different graphical presentations, impact of ordering of attributes, and test-retest reliability [13][14][15]. Mixed-methods approach for pilot testing in cancer patients Based on Collins [23], Mullin et al. [24] and Atkinson et al. [25] the following aspects were assessed with regard to the applicability of the QLU-C10D valuation methodology in cancer patients: comprehension, i.e. understanding of the task (e.g. How clear/unclear is the purpose of the survey/this explanation to you? Could you repeat this in your own words?), retrieval, i.e. the information processing strategy including recall of information (e.g. Do you have a particular strategy?), judgement, i.e. the process of formulating an answer to each question (e.g. How easy/difficult was your choice?), response (e.g. How do you feel about your choice?), and burden, i.e. the perceived importance or reasonability of the task which is linked to collaboration motivation (e.g. How relevant do you consider this tasks to be? Would you consider the tasks suitable for other patients as well? What would you change?). This was achieved by employing a mixed-methods approach. The qualitative part comprised a cognitive interview with verbal probing [26] with cancer patients covering the mental process in capturing the provided information and in giving responses. The interview was performed alongside the completion of the QLU-C10D valuation survey. The quantitative part encompassed the comparison of responses to the feedback questions incorporated in the survey between patients and matched control group respondents. 
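A hypothetical sketch of how one such choice set could be represented, and a simulated respondent's choice computed, is given below. The domain names, level range and survival times follow the description above; the utility weights and the time weight are placeholders invented for illustration and are not the QLU-C10D value set, and the actual experimental design (the selection of the 16 sets from the 960) is not reproduced.

```python
# Hypothetical sketch of a single DCE choice set: two QLU-C10D health profiles
# (levels 1-4 per domain, 1 = "not at all", 4 = "severe") that differ on
# exactly five domains, each paired with a survival time.
import random

DOMAINS = ["physical", "role", "social", "emotional", "pain", "fatigue",
           "insomnia", "appetite_loss", "nausea", "bowel_problems"]
SURVIVAL_YEARS = [1, 2, 5, 10]

def random_choice_set(rng, n_differ=5):
    """Two profiles that differ on exactly n_differ domains, as in the survey."""
    base = {d: rng.randint(1, 4) for d in DOMAINS}
    alt = dict(base)
    for d in rng.sample(DOMAINS, n_differ):
        alt[d] = rng.choice([lev for lev in range(1, 5) if lev != base[d]])
    return (base, rng.choice(SURVIVAL_YEARS)), (alt, rng.choice(SURVIVAL_YEARS))

def toy_utility(profile, years, weights, time_weight=0.08):
    # additive disutility of impairments, traded off against survival time;
    # weights and time_weight are made-up placeholders, not the value set
    return time_weight * years - sum(weights[d] * (profile[d] - 1) for d in DOMAINS)

rng = random.Random(42)
weights = {d: 0.03 for d in DOMAINS}           # placeholder weights
a, b = random_choice_set(rng)
chosen = "A" if toy_utility(*a, weights) >= toy_utility(*b, weights) else "B"
print(chosen)
```

In an actual valuation study the observed choices would be analysed with a choice model (e.g. conditional logit) to estimate the domain-level weights; that estimation step is outside the scope of this sketch.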
Survey material has been provided in the supplementary files (Additional file 1: Appendix A). Analyses and sample size considerations Sample size considerations were based on the recommendations of Lancaster et al. [27], and specifically on Morse [28] and Glaser and Strauss [29] focussing on the concept of content saturation for qualitative approaches. The concept of content saturation with approx. 30 participants was again confirmed in a content analysis involving 560 studies by Mason et al. [30]. Qualitative data was analysed based on the Grounded Theory Approach, described by Glaser [31]. Interview data were independently reviewed by two researchers performing inductive coding using Microsoft Excel. Quantitative data was analysed using Chi-square tests for comparing frequencies (feedback questions). To show that included patients and general population controls indeed differed with regard to functioning and health status we also compared EORTC QLQ-C30 scores (assessed as part of the valuation survey-see above). This was done using Mann-Witney-U tests. A significance level of 0.05 was applied. Statistical analyses were conducted using IBM SPSS 23.0. Sample characteristics We included a total of 48 cancer patients (mean age 59.9 years, SD 13.5; 46% female). Diagnoses were mixed (breast, haematological, lung, neuroendocrine, thyroid, gastrointestinal, colorectal, and other) and all but four patients were under active therapy (radiotherapy, chemotherapy, or nuclear therapy). Sample characteristics are shown in Table 1. We matched 192 respondents from the general population sample according to age, sex, and education. Hence, regarding sociodemographic parameters, participants only differed significantly regarding marital status, with more patients being single and more participants of the control group being divorced (p < 0.001). Patients and general population controls differed with regard to functioning and health status measured by the EORTC QLQ-C30. Compared to the general population controls, patients' HRQOL was significantly worse on 11 of 15 domains of the EORTC QLQ-C30 (see Fig. 2). Differences that were statistically significant met the criteria for clinical relevance according to Cocks et al. [32]. We identified large differences (> 19 points) regarding social functioning, and medium differences (> 8 points) regarding physical and role functioning, as well as regarding fatigue, nausea/vomiting, pain, sleep disturbances, appetite loss, and diarrhoea. Global quality of life of the cancer patients differed also on a medium level (approx. 13 points) from the general population controls. As EORTC QLQ-c30 data was missing for eight patients, we conducted a sensitivity analysis by imputing normative data from the Austrian general population [33]. Results did not change. Results from cognitive interviewing Overall comprehension of the DCE survey was good with 90% of the patients finding the task clarity to be good. The retrieval processes for the decision process included the subjective relevance of the attributes (i.e. decision strategy) in the DCE (i.e. HRQOL domains and survival time); 46% stated that the survival time was the most important attribute and 38% made their decision based on HRQOL impairments, with pain being most often explicitly named. Overall, patients considered the task to be positive (85%), mainly because they considered quality of life research to be an important topic in medicine (38%). 
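The two quantitative comparisons described in the analysis section above can be sketched as follows; the counts and scores are invented for illustration and are not the study data.

```python
# Illustrative sketch (with invented numbers) of the two quantitative
# comparisons: a chi-square test on a feedback question and a Mann-Whitney U
# test on a QLQ-C30 scale score, patients vs. matched controls (1:4).
import numpy as np
from scipy import stats

# feedback question "clarity": counts of very clear / clear / unclear answers
patients = np.array([10, 26, 12])      # hypothetical counts (n = 48)
controls = np.array([90, 88, 14])      # hypothetical counts (n = 192)
chi2, p, dof, expected = stats.chi2_contingency(np.vstack([patients, controls]))
print(f"chi-square p = {p:.3f}")

# QLQ-C30 physical functioning scores (0-100, higher = better functioning)
rng = np.random.default_rng(0)
pf_patients = rng.normal(70, 20, 48).clip(0, 100)
pf_controls = rng.normal(85, 15, 192).clip(0, 100)
u, p = stats.mannwhitneyu(pf_patients, pf_controls, alternative="two-sided")
print(f"Mann-Whitney U p = {p:.4f}")
```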
The most frequent suggestions for survey improvement were providing a clearer presentation of the different health states (17%) and providing additional explanation and instructions (19%). Potential inappropriateness for some patient groups was mentioned by 4 patients only (8%), who suggested avoiding the term "dying" in the survey in order to make it suitable for other patients. Detailed results of the cognitive interviews are presented in Fig. 3. Results on patient utilities cannot be reported from this small number of respondents but will require the finalized field study. Results from quantitative comparison Regarding comprehension of the DCE survey, patients found the presentation of the health states to be less clear than the general population sample did (p = 0.008). Fewer patients rated the tasks as "very clear" while about the same percentage of patients and controls rated the tasks as "clear". The percentage of patients rating the tasks as "unclear" (24%) was clearly higher than in the controls (7%). Patients furthermore differed from the general population with regard to their ratings on the survey's difficulty in comparison to other surveys (p < 0.001). This difference was mainly due to a much higher percentage of patients who used the "can't tell" category. There was no difference between the groups concerning how difficult it was to choose between the health states in each choice set (p = 0.344). Likewise, no statistically significant difference with regard to stated strategy for choosing between health states was found (p = 0.104). Details can be seen in Fig. 4. Discussion Our study results support the use of the QLU-C10D valuation survey in cancer patients due to the following reasons based on Collins [23], Mullin et al. [24] and Atkinson et al. [25]: Sufficient comprehension can be considered based on clarity ratings gathered from qualitative interviews and quantitative comparisons with controls. With results from the qualitative and quantitative approaches considered together, it has been shown that the imposed burden, despite including personal trade-offs between HRQOL and survival time, was perceived as manageable by patients and that although patients were in a compromised health state, which may add to cognitive burden, the survey was in general acceptable for them and they were able to manage the rather complex DCE tasks involving 10 health domains and an expected survival time. Overall, difficulty was not considered to be an issue of concern by the patients in the interviews.
Patients did not comment on the time frames of survival in the hypothetical scenarios, and only a few were bothered by the term "dying" in the tasks; these patients suggested avoiding it before giving the survey to other patients. Lack of clarity of information/explanation was identified by some patients. Patients' feedback differed from that of the general population controls in very few aspects. A high percentage of patients could not make a comparison between the difficulty of the present survey and other surveys. This is not surprising as internet panel members are much more survey-trained and participants recruited via Survey Engine usually are even familiar with DCEs. For the same reason a higher percentage of patients may have rated task clarity lower than controls did. As overall clarity ratings were very high, and patients in the interviews stated that additional effort, time and explanation made the tasks easier to understand, clarity does not seem to be an issue of concern but will need to be addressed in the presentation of the survey to patients. Most importantly though, there was no difference in overall perceived task difficulty between patients and general population respondents. Overall, task difficulty ratings did not raise severe concerns and appeared neither too difficult nor too easy. Our considerations with regard to difficulty were that tasks that are "too easy" might comprise some sort of dominant choices, i.e. health situations which do not require the respondent to make a trade-off (e.g. a situation with good QOL and longer survival vs a situation with poor QOL and shorter survival), whereas tasks that are too hard might result in fatigued respondents who make guesses and mistakes. The numbers we found compare well to the results for the general population [14] and are within the range of acceptable difficulty levels reported in the literature, which is, though, a bit scarce. The lack of qualitative research in DCE research in general has been pointed out in a systematic review by Vass et al. [34]. We identified a study by Mulhern et al. [35] comparing a DCE with a time trade-off (TTO) for EQ-5D-5L health states that showed that 57%. It is a limitation of our study that we cannot provide information from qualitative interviews from the general population sample as well for a direct comparison. This would shed additional light on potential differences between patient and general population perceptions of the DCE tasks and the type of study in general. A further limitation is that we drew on a convenience sample of patients and it was not possible to include patients in a very compromised health state. The interview was lengthy and may have posed a questionably high burden on these patients. Their perspective on the issues in question will nonetheless be important in further research on patient valuations and may require a different approach to obtain. In order to further reduce the burden for patients while maintaining comparability of the survey with the general population, adaptations for the field study may include improvements of information and explanation while keeping all the survey elements as they are. This can be done by setting up a website and a telephone hotline to provide additional information and explanation to patients and to address potential emotional burden.
To date, there is no consensus on whether there is actually a systematic difference in health valuations depending on who is the source of information (general population or patients), with two meta-analyses reporting conflicting results [39,40]. Yet, potential consequences for health economic evaluations need to be considered and further explored. An important argument to investigate patient preferences for disease-specific PBMs is that respondents from the general population usually can relate to generic HRQOL issues, such as pain, but may have more difficulty imagining and hence valuing the impact of severe fatigue, for example, on a purely hypothetical basis. Conclusion Patient valuations for the QLU-C10D will contribute to the ongoing discussion on the need to rethink which population is more relevant for providing health preferences to estimate utilities [41][42][43]. The results presented here add to the scarce literature on patient valuations and provide a positive outlook with regard to feasibility and acceptance in this specific target group.
2022-05-04T13:48:00.334Z
2022-05-04T00:00:00.000
{ "year": 2022, "sha1": "abadcbc53339e507684cc1deef457d297c0656d8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "abadcbc53339e507684cc1deef457d297c0656d8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270673531
pes2o/s2orc
v3-fos-license
MAFLD Pandemic: Updates in Pharmacotherapeutic Approach Development With around one billion of the world’s population affected, the era of the metabolic-associated fatty liver disease (MAFLD) pandemic has entered the global stage. MAFLD is a chronic progressive liver disease with accompanying metabolic disorders such as type 2 diabetes mellitus and obesity which can progress asymptomatically to liver cirrhosis and subsequently to hepatocellular carcinoma (HCC), and for which to date there are almost no approved pharmacologic options. Because MAFLD has a very complex etiology and it also affects extrahepatic organs, a multidisciplinary approach is required when it comes to finding an effective and safe active substance for MAFLD treatment. The optimal drug for MAFLD should diminish steatosis, fibrosis and inflammation in the liver, and the winner for MAFLD drug authorisation seems to be the one that significantly improves liver histology. Saroglitazar (Lipaglyn®) was approved for metabolic-dysfunction-associated steatohepatitis (MASH) in India in 2020; however, the drug is still being investigated in other countries. Although the pharmaceutical industry is still lagging behind in developing an approved pharmacologic therapy for MAFLD, research has recently intensified and many molecules which are in the final stages of clinical trials are expected to be approved in the coming few years. Already this year, the first drug (Rezdiffra™) in the United States was approved via accelerated procedure for treatment of MAFLD, i.e., of MASH in adults. This review underscores the most recent information related to the development of drugs for MAFLD treatment, focusing on the molecules that have come furthest towards approval. Introduction The global prevalence of metabolic-associated fatty liver disease (MAFLD), estimated to be around one billion, is rapidly increasing, hand in hand with the growing prevalence of obesity and type 2 diabetes mellitus (T2DM) [1][2][3].MAFLD is a chronic progressive disease marked by an excessive accumulation of fat in the liver (5% or above of liver's weight) associated with a metabolic disorder such as obesity/overweight and insulin resistance [4,5].MAFLD was known formerly as non-alcoholic fatty liver disease (NAFLD).Based on the many study reports indicating that the majority of NAFLD patients also have some type of metabolic disorder, such as T2DM, insulin resistance, obesity, dyslipidaemia or hypertension, and conversely that the majority of patients with a metabolic disorder will sooner or later develop NAFLD, it became clear that NAFLD and metabolic disorder share a common pathological pathway and are practically inseparable conditions [1,6].Aiming to better capture the underlying pathogenesis and affected patients and to reshape treatment strategies, it was proposed that NAFLD be renamed as MAFLD in 2020 by a group of world-leading hepatology experts [7].The focus of this new terminology is on the inclusion of "positive" disease criteria for MAFLD diagnosis: overweight/obesity or T2DM or at least two metabolic risk factors in patients having normal/lean weight defined by the criteria for the patient's specific ethnic group, which should be present in addition to evidence of steatosis detected via biopsy, imaging or biomarkers in the blood [8].Moreover, the MAFLD definition does not rule out patients with excessive alcohol consumption or with another type of chronic liver disease [7,9].After the new terminology was proposed, several studies aimed to objectively 
investigate the utility of this renaming [10,11]. In a study by Lin et al. in 2020, MAFLD and NAFLD criteria were compared and it was found that the MAFLD terminology emerges as more practically feasible for identifying patients at higher risk of disease exacerbation [10]. Another study conducted on 3709 patients in Japan reported recently in 2024 that the prevalence of MAFLD in NAFLD patients was 96.7% and that waist circumference criteria for NAFLD and metabolic syndrome matched 96.2%, leading the authors to conclude that in the Japanese population, patients with NAFLD can be reclassified as having MAFLD [11]. Generally, MAFLD is considered to be caused by a combination of various risk factors such as metabolic syndrome, oxidative stress, gut microbiota imbalance and genetic factors such as genetic polymorphisms and epigenetic alterations [1]. MAFLD encompasses a diverse range of liver diseases and can progress from simple steatosis, i.e., metabolic-associated fatty liver (MAFL), to metabolic-associated steatohepatitis (MASH), cirrhosis and hepatocellular carcinoma (HCC), which is one of the leading causes of liver transplantation worldwide (Figure 1) [12][13][14]. Unfortunately, MAFLD can remain undetected for years because the symptoms often occur very late, when the patient has already developed cirrhosis [15]. Due to this complex multifactorial etiology, the molecular mechanisms and biomarkers involved in MAFLD are not yet fully understood, making the search for an appropriate pharmacological treatment challenging [16]. The primary treatment approach in MAFLD patients is to integrate a healthy lifestyle including a Mediterranean diet, physical activity and weight loss, which can contribute to the reduction of liver steatosis and fibrosis [2,17]. However, studies have shown that most of the patients cannot reach the target weight required to reduce liver fibrosis, which is the most important mortality prognostic factor in MAFLD [17]. It is worth emphasizing that a normal BMI does not indicate a healthy metabolic status of the patient, as MAFLD can also occur in lean patients. These patients have a poor metabolic profile compared to the healthy population with regard to elevated blood pressure, glucose, HbA1c, triglycerides and LDL and decreased HDL [18,19]. All in all, the non-pharmacological approach is essential but not sufficiently effective as a stand-alone measure. Consequently, there is an urgent need for an approved pharmacological treatment, which is not yet available, with the exception of a just recently approved drug [2,20]. Namely, in March 2024, a thyroid hormone receptor β (THR-β) agonist, resmetirom (Rezdiffra™), became the first drug approved by the Food and Drug Administration (FDA) in the United States for the treatment of non-cirrhotic MASH in adults with moderate to progressed liver fibrosis in combination with diet and exercise [20]. The insight into the current most promising therapeutic options that could be integrated into daily clinical practice and guidelines for MAFLD in the near future will be further discussed in this article.
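The "positive" diagnostic criteria summarised in the introduction can be written down compactly; the sketch below is a simplification for illustration (the field names and the risk-factor count are assumptions, and the ethnicity-specific cut-offs of the consensus definition are not encoded).

```python
# Sketch of the "positive" MAFLD criteria described above: hepatic steatosis
# (by biopsy, imaging or blood biomarkers) plus overweight/obesity, or T2DM,
# or at least two metabolic risk factors in lean/normal-weight individuals.
from dataclasses import dataclass

@dataclass
class Patient:
    steatosis: bool              # evidence from biopsy, imaging or biomarkers
    overweight_or_obese: bool    # by ethnicity-specific BMI/waist criteria
    type2_diabetes: bool
    metabolic_risk_factors: int  # count of factors such as hypertension,
                                 # raised triglycerides, low HDL, prediabetes

def meets_mafld_criteria(p: Patient) -> bool:
    if not p.steatosis:
        return False
    return (p.overweight_or_obese
            or p.type2_diabetes
            or p.metabolic_risk_factors >= 2)

print(meets_mafld_criteria(Patient(True, False, False, 2)))  # lean MAFLD -> True
```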
1 Pathophysiology of Liver Fibrosis Through its anti-lipolytic function in lipid metabolism, insulin promotes deposition of free fatty acids (FFAs) and triglycerides (TGs) in adipose tissue [21,22].Under condition of insulin resistance, insulin is unable to inhibit adipose lipolysis, which in turn leads to the release of FFAs from adipose tissue and their excessive accumulation and formation of TGs in liver [21].For this reason, insulin resistance is one of the leading causes of fatty liver and has become one of the important therapeutic targets in MAFLD treatment [23,24].Most of the FFAs (60%) in the liver come from adipose tissue, while the rest from hepatic de novo lipogenesis and from diet.Excessive accumulation of FFAs is considered to be the main trigger of the pathogenic pathway of liver fibrosis [17].Surplus of FFAs in the liver generates lipotoxic lipids that cause endoplasmic reticulum stress, oxidative stress, inflammation and apoptosis of hepatocytes, resulting in formation of reactive oxygen species [25,26]. In such an environment, Kupffer cells (macrophages in the liver sinusoids) release the profibrotic factor transforming growth factor-β (TGF-β), which activates the hepatic stellate cells.Engaged hepatic stellate cells then migrate to the site of injury, secrete an extracellular matrix and form a fibrotic tissue in the liver [17,26,27].While fibrinogenesis, i.e., the formation of a "scar" in the wound, is a normal physiological healing process, it becomes pathogenic if it occurs persistently [15].Grade of liver fibrosis reflects the severity of liver damage, and extensive liver fibrosis is a main hallmark of compensated or uncompensated cirrhosis and HCC (Figure 1) [28,29].The assessment of liver fibrosis in MAFLD patients could therefore be used for the evaluation of treatment response [30]. Thyroid Hormone Receptor β (THR-β) Agonists The thyroid gland secretes thyroid hormones, which are key regulators of numerous physiological processes such as cell growth, fetal development and carbohydrate, protein and fat metabolism, and therefore these hormones have an impact on practically every organ, in particular on the liver [31,32].Stimulation of thyroid hormone receptors β (THRβ), which are mainly expressed in the liver, by thyroid hormones enhances lipid metabolism and FFA mobilization, leading to a reduction in low-density lipoprotein (LDL) cholesterol and TG levels, hepatic steatosis and fibrosis [26,32,33].Clinical evidence for this close relation between thyroid hormones and MAFLD could include the fact that hypothyroidism is more common in patients with MAFLD [34,35].Additionally, in patients who progress to MASH, in parallel with the steatosis increasing, the activity of THR-β receptors in liver decreases, i.e., the receptors become less sensitive to thyroid hormones [36]. 
With the aim to improve liver condition, a drug targeting the liver, the THR-β agonist resmetirom (Rezdiffra™), was developed.Compared to triiodothyronine (T3), this orally administered drug is around 28 times more selective for THR-β than for THR-α and has low uptake in extrahepatic tissues [34,37].Based on the previously conducted open-label extension study (OLE, NCT02912260), which enrolled 31 patients with mildly elevated liver enzymes and revealed a reduction in fibrosis, LDL and TG levels in these patients, the phase 3 study called MAESTRO-NASH (NCT03900429) was initiated [26,34].This ongoing 54-month study is designed as a placebo-controlled, double-blind RCT (randomized clinical trial) and has enrolled a total of 966 patients until week 52.The biopsy results demonstrated that compared to the placebo group, in which patients were advised on healthy nutrition and exercise, a larger number of patients on resmetirom therapy showed resolution or no aggravation of MASH or liver fibrosis [38,39].Harrison et al. reported that 25.9% of patients on 80 mg resmetirom therapy and 29.9% of patients on 100 mg resmetirom therapy showed resolution of MASH and no aggravation of liver fibrosis versus 9.7% of patients in the placebo group.Additionally, patients in the 80 mg and 100 mg resmetirom group (24.2% and 25.9%, respectively) showed benefit in liver fibrosis and no aggravation of MASH in comparison with the placebo group (14.2%) [38,39].Following the publication of significant results after one year of the MAESTRO-NASH study, the US FDA granted accelerated approval of resmetirom under the trade name Rezdiffra in March 2024 for the indication of non-cirrhotic MASH in adult patients with mild to advanced liver fibrosis (corresponding to fibrosis stages F2 to F3) in combination with diet and physical activity [20,39].So far, Rezdiffra has demonstrated a good safety profile, with diarrhea and nausea reported as the most common side effects.However, the sponsor still must complete 54 months of this study in order to demonstrate clinical benefit in terms of liver-related outcomes along with an acceptable safety profile [39,40]. Another orally administered THR-β agonist targeting the liver, VK2809, is still under investigation and has not yet reached the approval phase but shows promising potential [41,42].Its efficacy and safety have been evaluated in two double-blind, randomized clinical trials (RCTs): a 12-week phase 2a study (NCT02927184), which was completed in 2019, and an ongoing 52-week phase 2b study (VOYAGE, NCT04173065), which is expected to be completed in June 2024 [43,44].The phase 2a study was conducted in 59 patients with MAFLD and hypercholesterolemia and showed a significant reduction in LDL-C, other hepatic lipid content (such as lipoprotein A and apolipoprotein B) and alanine aminotransferase (ALT) levels.In May 2023, at 12 weeks of the phase 2b study conducted in patients with MASH and fibrosis (biopsy-proven), the primary endpoint was met as the results showed a decreased hepatic fat content [26,42].So far, both studies have shown a good safety profile with mostly mild adverse events (AEs) reported [42]. 
Fibroblast Growth Factor 21 (FGF-21) Agonists The action of the hormone fibroblast growth factor 21 (FGF-21) in the liver leads to a reduction in liver fat, as it stimulates fatty acid oxidation and the secretion of triglycerides and very low-density lipoproteins (VLDL) and inhibits de novo lipogenesis [45].Due to the short half-life of human FGF-21, the development of FGF-21 analogs requires structural modifications to increase stability and avoid rapid elimination from the body [46].An FGF-21 analog, pegozafermin (BIO89-100), was developed as subcutaneous injection for the therapy of MASH as well as severe hypertriglyceridemia [47].Being pegylated, pegozafermin has a prolonged half-life compared to FGF-21, so it only needs to be administered once every 14 days [45].Loomba et al. (2023) reported that in a phase 2b placebo-controlled RCT (NCT04929483) which enrolled a total of 222 patients, pegozafermin demonstrated fibrosis improvement in patients with MASH [48].Based on these encouraging results, pegozafermin entered phase 3 clinical trials (NCT06318169) in March 2024 to evaluate the safety and efficacy of pegozafermin in patients with MASH and fibrosis.A total of 1050 patients will be enrolled in the study, which is expected to be completed in 2029 [47].Efruxifermin, which exerts an agonistic effect on FGF-21, is another promising drug from this group.This drug is a fusion protein with increased stability in the body, which consists of a human IgG1-Fc domain and two altered FGF-21 [49].The first results of the placebo-controlled phase 2b RCT (NCT04767529) named HARMONY were published in December 2023.This study was conducted in patients with MASH with fibrosis stage F2 or F3, with the primary endpoint of assessing improvement in at least one fibrosis stage without worsening of MASH after 24 weeks.Analysis of this part of the study showed that this outcome was achieved in 19% of patients in the placebo group compared to 36% of patients in the 28 mg efruxifermin group and 33% in the 50 mg efruxifermin group, leading the study investigators to the conclusion that efruxifermin improves liver fibrosis [49][50][51].In light of these favorable findings, efruxifermin has entered the two ongoing placebo-controlled phase 3 RCTs that began in late 2023.One of these is the SYNCHRONY histology study (NCT06215716), which is expected to recruit 1000 participants and to be finalized by March 2027.The main goal of this study is to investigate the improvement of at least one grade of liver fibrosis with MASH resolution after 52 weeks of treatment with efruxifermin.A further RCT called SYNCHRONY Real-World (NCT06161571) will investigate the safety and tolerability of efruxifermin in 700 MAFLD patients until October 2026 [52]. 
Incretin and Glucagon Receptor Agonists The endogenous incretin hormones glucagon-like peptide-1 (GLP-1) and glucosedependent insulinotropic polypeptide (GIP) are secreted by the L-cells and K-cells of the intestine and are responsible for very strong insulin secretion after a meal [53,54].A viable therapeutic option for the treatment of MAFLD is the antidiabetic drug semaglutide, which is available on the market either as a subcutaneous injection or as an oral drug [55].Semaglutide is an agonist of the GLP-1 receptor which, when binding to the GLP-1 receptor, triggers various signaling mechanisms that lead to insulin secretion and a reduction in glucagon, which in turn results in a reduction in blood glucose levels [56].This drug is also approved for the treatment of obesity, as it can promote weight loss by regulating appetite and inhibiting gastric emptying [56,57].Last year, a meta-analysis by Zhu et al. reported that semaglutide significantly reduced hepatic steatosis, inflammation, hepatocellular ballooning and liver stiffness, while the effect on reducing fibrosis stage is still uncertain [58].According to RCTs (NCT02970942 and NCT03357380), semaglutide demonstrated positive effects on the liver as it decreased ALT level, liver inflammation and steatosis [59,60].Semaglutide is currently being investigated in the ESSENCE phase 3 clinical study designed as a randomized, placebo-controlled trial (NCT04822181), which is enrolling a total of 1200 patients and is expected to be completed by 2029.The study investigates MASH outcomes without fibrosis exacerbation and improvement of liver fibrosis without MASH exacerbation in non-cirrhotic MASH patients [61]. The dual GLP-1 and glucagon receptor agonist efinopegdutide (MK-6024) is a subcutaneously administered drug that was developed for the treatment of MAFLD and is currently undergoing clinical trials [62].This drug has already obtained fast-track designation from the US FDA.This is a procedure aimed to accelerate the development and evaluation of drugs for the treatment of serious conditions in order to meet urgent medical needs as soon as possible [63].In a randomized phase 2a clinical trial (NCT04944992), the efficacy of efinopegdutide in reducing hepatic fat in MAFLD patients was compared with that of semaglutide, resulting in a significantly greater reduction in hepatic fat with efinopegdutide than with semaglutide [62].In view of these promising results, the investigation of efinopegdutide continues in a randomized, double-blind, placebo-controlled phase 2b clinical trial (NCT05877547).The aim of this ongoing study, with an estimated duration until the end of 2025, is to assess its efficacy in resolution of MASH without aggravation of liver scarring in a total of 300 non-diabetic patients with histologically proven precirrhotic MASH [63,64].Tirzepatide is a novel drug that is approved for T2DM and obesity and is being intensively trialed for other indications, including MAFLD, as it has already shown beneficial effects on MAFLD biomarkers in patients with T2DM.This drug belongs to the "twincretins" group as it is a dual GLP-1 and GIP receptor agonist [54,65,66].At the beginning of 2024, the phase 2 RCT SYNERGY-NASH (NCT04166773) was completed, which was conducted in MASH patients to evaluate the safety and efficacy of tirzepatide.The primary endpoint was to determine whether tirzepatide leads to resolution of MASH and no worsening of fibrosis, and the secondary endpoint was the change in fibrosis stage, liver fat content and body 
weight [54].The study sponsor announced the positive results of the clinical trial and reported about up to 74% of patients meeting the primary endpoint, compared to around 13% in the placebo group.However, the detailed study results are yet to be published [67]. Sodium-Glucose Cotransporter 2 (SGLT2) Inhibitors One of the top candidates for treatment of MAFLD are oral antidiabetic drugs sodiumglucose cotransporter 2 (SGLT2) inhibitors, commonly called "flozins" [1,68,69].They lower blood glucose by inhibiting SGLT2 in the proximal renal tubule, thus blocking the reabsorption of glucose into the bloodstream and promoting its excretion via the urine [68].SGLT2 inhibitors are generally considered safe; the most commonly reported adverse effects are infections of the genitourinary tract, hypotension and diabetic ketoacidosis, which are associated with their mechanism of action [70].Furthermore, SGLT2 inhibitors have recently been found to have cardio-and kidney-protective effects due to their antiphlogistic and antifibrotic mode of action, which has led to their marketing authorization for nondiabetic indications, namely heart failure and chronic kidney disease [71][72][73].Reduction of inflammation, steatosis and fibrosis have been suggested as beneficial effects on the liver, for which the clinical efficacy of SGLT2 inhibitors is being investigated in numerous RCTs [1,74,75].Among all SGLT2 inhibitors, dapagliflozin and empagliflozin have come the longest way (Figure 2) [65].The safety of dapagliflozin is still being assessed in a placebocontrolled phase 3 RCT (NCT05308160), which is expected to enroll a total of 75 patients with MAFLD diagnosed with steatosis grade 2 or higher detected by FibroScan device, and to be finished by the end of April 2024 [65,76].In another placebo-controlled phase 3 RCT, the so-called DEAN study (NCT03723252), a total of 154 patients with biopsy-confirmed MASH and metabolic risk factors were recruited in China, with the primary endpoint being histological improvement after 12 months of dapagliflozin therapy.Secondary endpoints included MASH resolution, liver fibrosis and steatosis, inflammatory biomarkers and metabolic factors (weight, waist circumference, blood pressure, HbA1c).This study has just been completed and the results of the study are still being awaited [61,[77][78][79].An ongoing phase 4 clinical trial (NCT05459701) evaluating the effect of dapagliflozin on the liver of patients with type 2 diabetes mellitus and MAFLD compared to the placebo group over a period of 6 months, with leptin, adiponectin and vascular cell adhesion molecule 1 (VCAM-1) as primary outcomes, is expected to end in an upcoming period [80,81].A double-blind, placebo-controlled phase 4 RCT (NCT04642261), which had the primary goal of assessing whether empagliflozin can reduce liver steatosis measured by magnetic resonance imaging-proton density fat fraction (MRI-PDFF) in non-diabetics with MAFLD at week 52, enrolled a total of 98 patients.Secondary endpoints included liver transaminases, fasting glucose, body mass index (BMI)/weight and waist circumference.The results of this study were published in 2024 by Cheung et al., who reported that patients receiving empagliflozin lost significantly more body weight and waist circumference and had lower fasting blood glucose than the control group receiving placebo drug [82,83]. 
Peroxisome Proliferator-Activated Receptor (PPAR) Agonists A family of nuclear receptors, the peroxisome proliferator-activated receptors (PPARs), are involved as transcription factors in various processes of lipid and glucose metabolism and inflammation [84,85].Moreover, they appear to be involved in fibrotic processes, as agonism of the PPAR isotope gamma (γ) leads to an inhibition of the pro-fibrotic effect of hepatic stellate cells in the liver, which are activated in the course of NASH progression [86].Generally, in MAFLD, PPARs are dysregulated, and the effect of the agonist for PPAR α/γ receptor, pioglitazone, is associated with improvement in steatosis, inflammation and hepatic biomarkers, making it a very compelling choice for MAFLD therapy [84,87,88].This drug belongs to the drug group of thiazolidinedione commonly called "glitazone" and is indicated for T2DM due to its efficacy in lowering blood glucose levels by enhancing insulin resistance [89].Moreover, treatment with pioglitazone is already included in several guidelines as a possible treatment option for patients with T2DM and proven MASH [90].On the other hand, it should be pointed out that the clinical use of pioglitazone is limited due to its potential side effects including weight gain, bladder cancer, fluid retention which can cause congestive heart failure in patients having cardiomyopathy and an enhanced risk of bone loss and distal bone fractures in postmenopausal women, and therefore careful selection of patients for pioglitazone therapy is required [87,91].The assessment of the efficacy of oral pioglitazone on hepatic steatosis and liver function tests over 24 weeks and its safety was conducted in a placebo-controlled RCT (NCT01068444) in 90 Taiwanese participants with MASH.In this study, a significant drop in ALT, MAFLD activity score (MAS) and liver fat was observed, and pioglitazone was found to be generally safe.In addition, 46.7% of patients in the pioglitazone group showed an amelioration of MASH without aggravation of fibrosis [89,92].A meta-analysis analyzing the available trials on pioglitazone confirmed that pioglitazone is an effective treatment for prediabetic or diabetic patients with MAFLD, as it reduces steatosis, liver function and inflammation.However, the effect on fibrosis stage is not yet clear, as no significant improvement in liver fibrosis was observed [93].Currently, scientists at the University of Florida in the United States are also investigating the efficacy and safety of pioglitazone in the RCT known as AIM 2, but at a low dose (15 mg per day) and in patients with T2DM and MASH, with the plan of enrolling 166 patients and a duration until the end of 2027 (NCT04501406) [89]. 
Another member of the PPAR family being considered for the treatment of MAFLD is the dual PPAR-α and PPAR-γ agonist saroglitazar.In March 2020, following successful completion of trials in India, this drug was approved for MASH in India, but not yet in the other countries [89,94].In the United States, saroglitazar has been granted marketing authorization for the treatment of dyslipidemia and hypertriglyceridemia in diabetic patients [95].Furthermore, this drug showed confirmed beneficial effects on the liver in a placebo-controlled RCT (NCT03061721) named EVIDENCES IV, conducted in the United States on 106 patients with MAFLD or MASH.Liver fat content (measured with MRI-PDFF), ALT, adiponectin, triglyceride and insulin resistance were significantly reduced at a dose of 4 mg saroglitazar magnesium [96].Saroglitazar magnesium is currently being investigated among patients with MASH and fibrosis in an ongoing placebo-controlled RCT (NCT05011305), which is expected to end next year with a total of 240 enrolled participants.The primary objective of this study is to investigate the resolution of MASH without worsening fibrosis after 52 weeks of saroglitazar therapy.One of the secondary objectives is to determine whether saroglitazar can improve liver fibrosis without worsening liver inflammation, steatosis or ballooning [86,97]. The fact that PPARs are currently a very appealing target in MAFLD is also confirmed by the new competitor lanifibranor.This drug is a pan-PPAR agonist that can bind to different regions of the PPAR α-, δand γ-ligand domain and thereby act as an agonist on all three PPAR isoforms [98,99].Owing to this mechanism of action, lanifibranor may have a more potent effect on reducing inflammation, liver fibrosis and metabolic risk factors in MAFLD patients than a single PPAR agonist [100,101].The NATIVE study is a phase 2b placebo-controlled RCT (NCT03008070) with a total enrollment of 247 participants, in which lanifibranor showed superiority over placebo in reducing the SAF-A score (activity part of the Steatosis Activity Fibrosis score) by at least two points without deterioration of fibrosis after 24 weeks of therapy [102,103].Those favorable outcomes led to the launch of the NATIV3 study, a phase 3 placebo-controlled RCT (NCT04849728), which aims to enroll 1000 patients with active NASH and liver fibrosis (stage F2/F3) and to be completed by October 2026 (Figure 2).This study is designed to measure the resolution of MASH with improvement in fibrosis at week 72.The positive results of this study could bring lanifibranor a significant step closer to approval [104,105].In Table 1 we have presented these two studies for lanifibranor together with the latest studies for other study drugs mentioned prior in this review.Table 1.Completed and ongoing RCTs for metabolic-associated fatty liver disease (MAFLD) treatment. 1The primary endpoint was already achieved. 2Detailed study results are pending.RCT = randomized clinical trial, N/A = not applicable as no published results yet, SAF-A (activity part of the Steatosis Activity Fibrosis score), TGs = triglycerides, LDL = low-density lipoprotein, ALT = alanine aminotransferase.Adapted and actualized from Sangro et al. [69]. 
Review Limitation To reflect the accepted new terminology and to facilitate readability of this paper, the new term MASH has been used instead of NASH.This is also the main limitation of this review, as the RCTs mentioned included patients based on NASH criteria, which are not the same as those for MASH.The main difference is that NASH patients do not necessarily have at least one of the five cardiometabolic risk factors (obesity, T2DM, hypertension, elevated plasma triglycerides and decreased plasma HDL) [106].It is therefore uncertain whether the study results would be significantly different if patients with MASH criteria were included in the study.On the other hand, it is important to emphasize that the term steatohepatitis as well as the criteria for fibrosis stages were preserved in both terminologies.In addition, patients with steatosis without cardiometabolic risk factors or other possible causes are considered cryptogenic and candidates for possible MAFLD who would benefit from regular reassessment [8,106,107].The sponsor of the MAESTRO-NASH study (NCT03900429) for resmetirom, which involved patients with NASH and fibrosis, announced along with the study results that the new terminology for NASH is MASH [20].However, to correctly adopt the MASH terminology, studies would need to enroll patients with MASH, which is expected in future RCTs for this indication. Future Directions In the future, grouping MAFLD patients according to their liver histology (steatosis, steatohepatitis and fibrosis) may be less useful than with respect to main pathological mechanism involved, which is more suitable for predicting disease outcome [8].A similar conclusion was reached in a large prospective cohort study in China, which showed that the higher mortality of MAFLD patients depended on the presence of various metabolic risk factors such as type 2 diabetes mellitus, obesity and other comorbidities.Based on this study result, the authors concluded that in the future it may be necessary to subcategorize MAFLD depending on the severity of a patient's metabolic syndrome symptoms in order to ensure more effective treatment [108].In addition, given that MAFLD is a multisystemic disease, a multidisciplinary approach is required, and it is likely that monotherapy (in combination with lifestyle modification) to treat MAFLD will not provide complete treatment success.To achieve this, combination therapy targeting different pathological pathways and organ systems needs to be more intensively explored [17,109].Nevertheless, there is still a relatively long way to go before such combination therapies can be introduced into clinical practice, as there are several points to consider in addition to extensive regulatory requirements: Firstly, the use of monotherapy in patients must be well established; secondly, clinical trials must be very well designed and carried out on a large number of participants; and thirdly the dose and safety of such preparations must be examined extremely carefully [110].Last but not least, the treatment of patients who do not respond to "conventional" treatment or are not compliant must also be considered, so that it would be useful to elaborate a more personalized treatment approach [111]. 
Conclusions MAFLD is a potentially life-threatening liver disease with a substantial impact on global health. Lifestyle intervention remains crucial, but as a single approach it is generally not sufficiently effective. Hence, there is an immense need for approved treatment options to enhance patient outcomes. Researchers around the globe, especially in the United States, are making great efforts to develop new molecules for MAFLD or to take a "shortcut" in drug development and regulatory approval processes and expand the indication of drugs already approved for another medical condition. The most trialed agents appear to be hypoglycemics, with study results expected in the next few years. However, the main drivers towards marketing authorization are the agents which can significantly ameliorate liver fibrosis, which is a critical factor in the progression of MAFLD. Some drugs, such as pioglitazone, have been shown to be effective in lowering MAFLD parameters, but evidence of their effect on fibrosis is still lacking. All in all, the race among pharmaceutical companies seeking marketing authorization for MAFLD is becoming increasingly intense; however, the need for approved pharmacotherapies and treatment approaches is yet to be fulfilled. Figure 2. Molecules trialed for metabolic-associated fatty liver disease, sorted by their most current randomized clinical trial phase.
2024-06-23T15:23:30.409Z
2024-06-21T00:00:00.000
{ "year": 2024, "sha1": "51225093912e7fd125fbf8b139da6fba8c2c2cf6", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1467-3045/46/7/376/pdf?version=1718962868", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ee4a4a1632481c00d926a1bfa47192797d6877ff", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3589361
pes2o/s2orc
v3-fos-license
Effect of implant placement depth on the peri-implant bone defect configurations in ligature-induced peri-implantitis: An experimental study in dogs Background The subcrestal placement of the implant platform has been considered a key factor in the preservation of crestal bone, but the influence of implant placement depth on bone remodeling combined with peri-implantitis is not fully understood. The aim of this study was to assess the effect of the crestal or subcrestal placement of implants on peri-implant bone defects of ligature-induced peri-implantitis in dogs. Material and Methods Eight weeks after tooth extraction in six beagle dogs, two different types of implants (A: OsseoSpeed™, Astra, Mölndal, Sweden; B: Integra-CP™, Bicon, Boston, USA) were placed at either crestal or subcrestal (-1.5 mm) positions on one side of the mandible. Ligature-induced peri-implantitis was initiated four weeks after the installation of the healing abutment connections. After 12 weeks, tissue biopsies were processed for histological analyses. Results Supra-alveolar bone loss combined with a shallow infrabony defect was observed in crestal level implants, while deep and wide infrabony defects were present in subcrestal level groups. Subcrestal groups showed significantly greater ridge loss and greater depths and widths of infrabony defects when compared to crestal groups (P<0.001). Conclusions Within the limitations of this animal study, it can be stated that implants at the subcrestal position displayed greater infra-osseous defects than implants at the crestal position under experimental ligature-induced peri-implantitis. Key words: Subcrestal, peri-implantitis, histology. Introduction Subcrestal implant placement in esthetic areas has been a common treatment modality in order to maintain the mucosa texture and tonality, as well as provide sufficient space to achieve an ideal emergence profile (1,2). Meanwhile, data from biomechanical analyses have indicated that increased implant placement depth could reduce the strain levels in peri-implant bone (3). Different types of implant-abutment connections have indicated different patterns of bone loss. Compared to external connections and the internal screwed flat connection, the conical internal connection has exhibited higher stability (4), improving resistance to micro-movement, reducing bacterial microleakage and preventing the loss of crestal bone. Animal models using implants with a morse tapered implant-abutment interface (IAI) have previously indicated a positive impact on bone contact with the neck of the implant when positioned at a subcrestal level (2,5-7). However, clinical studies utilizing implants with tapered internal IAI inserted at subcrestal levels presented contradictory results with respect to peri-implant bone loss (8-12). In a retrospective study, Lee et al. showed that the failure rate for implants placed at the margin level was significantly greater than for implants placed ~2 mm subcrestally (8). Conversely, results from a 36-month prospective split-mouth clinical trial (9) and a 3-month prospective randomized controlled clinical trial (10) indicated no statistically significant differences in crestal bone loss around implants placed at crestal and subcrestal levels. Moreover, results from a prospective 60-month follow-up study showed that peri-implant bone loss was significantly greater in subcrestal implants with a platform-switched morse taper connection (11,12).
Previous studies have documented greater peri-implant probing depth, biologic width and epithelial dimension around subcrestal implants compared to crestal or super-crestal implants (5,6,13). It has been shown that dental hygiene prophylaxis played an important role in maintaining the soft tissue and crestal bone levels around subcrestal implants (9). Long-term bone levels around dental implants are maintained with proper oral hygiene (14). Since peri-implant inflammation induced by poor plaque control might compromise the success of dental implants, it was of interest whether implant placement depth would affect peri-implant bone remodeling during the development of peri-implant infections. Despite the favorable effect of the morse-tapered IAI connection, limited information is available about whether different morse-tapered IAI connections result in different peri-implant bone loss. To the best of our knowledge, no study currently exists comparing the histological bone loss between a tapped-in morse-taper IAI and a screwed-in morse-taper IAI at crestal and subcrestal positions under inflamed conditions. The experimental peri-implantitis model in dogs has been widely used for evaluating the pathogenesis of peri-implantitis, in which the bone defect configurations are similar to naturally occurring bone defects in humans (15). Therefore, the primary aim of this study was to histologically evaluate the effect of insertion depth on peri-implant bone defects under ligature-induced peri-implantitis in a canine model, while the secondary aim was to explore potential differences in bone defect configurations due to the implant type, tapped-in morse-taper IAI or screwed-in morse-taper IAI. The null hypothesis stated that the vertical positioning of the implant, along with the IAI connection, would not affect peri-implant bone defect configurations in ligature-induced peri-implantitis. Material and Methods -Animals Ethics approval was obtained from the Medical Ethical Committee for Animal Investigations of Peking University Health Science Center in Beijing, China, registered under number LA2010-032, and all procedures were done according to the ARRIVE guidelines (16). Upon receiving approval, six male beagle dogs, 1-2 years old and weighing 10-12.5 kg, were acquired and housed individually in standard cages at an ambient temperature of 20-25 °C and relative humidity of 30-70%. All dogs were fed a soft diet and water ad libitum during the experiment. All surgical procedures were performed under general anesthesia, using intravenous sodium pentobarbital (30 mg/kg). Sample size was based on the calculation of a mean difference of 1.0 mm in infra-osseous defect between groups, SD 0.6 mm, a significance level (α) of 5% and 80% power. -Study design The outline of the experiment is presented in Figure 1, with the study consisting of three experimental phases. In phase 1, the mandibular premolars and molar (P2-M1) were extracted bilaterally. After eight weeks of healing, twenty-four (N = 24) titanium implants were placed in a predetermined random sequence at the four experimental sites in one side of the mandible (n = 4 implants per animal) and the implants were submerged for 12 weeks (Phase 2). Four weeks after the abutment connection, oral hygiene procedures were purposefully neglected and ligature-induced experimental peri-implantitis was initiated (Phase 3). All animals were euthanized according to the protocol after 12 weeks.
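As an aside on the sample size statement above (mean difference 1.0 mm, SD 0.6 mm, α = 5%, power 80%), the arithmetic can be illustrated with a standard two-sample power calculation. The sketch below is a minimal illustration that assumes a two-sided independent-samples t-test, since the paper does not state which test its calculation was based on; the result lands close to the six implants per group actually used in this study.

```python
# Minimal sketch of the sample-size arithmetic reported above.
# Assumption: a two-sided, two-sample t-test with equal group sizes
# (the paper does not state which test the calculation was based on).
from statsmodels.stats.power import TTestIndPower

mean_difference = 1.0        # mm, expected difference in infra-osseous defect
sd = 0.6                     # mm, assumed common standard deviation
effect_size = mean_difference / sd   # standardized effect (Cohen's d), about 1.67

n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,              # significance level
    power=0.80,              # desired power
    alternative="two-sided",
)
print(f"implants required per group: {n_per_group:.1f}")   # roughly 6 to 7 per group
```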
Two of each implant type, (A) a screwed-in tapered internal IAI with a fluoride-modified TiOblast surface (OsseoSpeed, 3.5 × 8 mm; Astra Tech Dental, Mölndal, Sweden) and (B) a tapped-in tapered internal IAI with a plasma-sprayed calcium-phosphate surface (Integra-CP, 3.5 × 8 mm; Bicon Dental Implants, Boston, Massachusetts, USA), were inserted in one side of the mandible of each animal (N = 24 implants). The study consisted of four experimental groups: (1) A placed crestally (AC); (2) B placed crestally (BC); (3) A placed 1.5 mm subcrestally (AS); (4) B placed 1.5 mm subcrestally (BS). -Experimental procedures During the first surgical procedure, after general anesthesia, local anesthesia with 2% lidocaine hydrochloride with epinephrine at 1:100,000 was administered prior to any extraction. Roots of P2-M1 were extracted individually after they were sectioned in the buccolingual direction. Resorbable 4-0 sutures (VICRYL, Ethicon, Johnson & Johnson, Langhorne, PA) were used to suture the flaps, and an antibiotic (penicillin G procaine 40,000 IU/kg, intramuscular) and an analgesic were administered once every 24 hours for 7 days after extraction. The wound areas were cleaned daily during the first week after surgery with a 0.12% chlorhexidine solution. After eight weeks, implant surgery was performed; full-thickness mucoperiosteal flaps were raised in the mandible, the ridge was flattened under copious irrigation with sterile saline, and osteotomies were prepared according to the manufacturers' recommendations. Meticulous care was taken to maintain a ~10 mm distance between dental implant centers. Each implant type, A and B, was placed at crestal and subcrestal (~1.5 mm) positions on one side of the mandible of each dog. Anterior and posterior positions between implant systems were interpolated to avoid any site bias, while the anterior and posterior positions of crestal and subcrestal groups within the same implant system were assigned at random. Cover screws and/or plug inserts of the respective implant manufacturer were placed. The flaps were sutured with 4-0 nylon sutures and the sutures were removed after 10 days. Antibiotic and analgesic were administered as described above. After 12 weeks of healing the implants were surgically uncovered. The cover screws were removed and replaced by healing abutments. Special attention was taken to avoid any occlusal contact. Ten days after the procedures, implant sites were irrigated with 0.12% chlorhexidine every second day. Subsequently, a plaque control program, which included the cleaning of implants and teeth using a toothbrush every second day, was initiated. -Experimental peri-implantitis Four weeks after the abutment placement, experimental peri-implantitis was initiated. Oral hygiene procedures were neglected and cotton ligatures were placed submarginally around the abutments to facilitate plaque accumulation and to induce plaque-associated peri-implant inflammation. Ligatures were examined once a week without forcing them into an apical position. Plaque accumulation continued for a 12-week period. -Histological preparation Twelve weeks after ligature placement, the dogs were euthanized and samples were retrieved en bloc for histologic and histomorphometric analyses. Sacrifice was performed under general anesthesia by an overdose of intravenous sodium pentobarbital, and the animals were perfused through the carotid arteries with 4% formaldehyde.
The mandibles with the implants were removed and initially fixed in 4% formaldehyde solution, and then block-resected using an oscillating saw such that the peri-implant mesial and distal soft tissues remained intact. Gradual dehydration was accomplished using a series of alcohol solutions (70-100%). Subsequently, samples were embedded in a methacrylate-based resin (Technovit 9100, Heraeus Kulzer GmbH, Wehrheim, Germany) for non-decalcified sectioning. From each implant site, one buccal-lingual section and one distal section (~300 μm thickness) were obtained and further reduced to a final thickness of about ~30 μm by means of a series of SiC abrasive papers in a polishing machine under water irrigation. The buccal-lingual sections were stained in toluidine blue and the distal sections were stained with a Goldner trichromic staining for the visualization of soft tissue. -Histomorphometric analysis All sections were examined by optical microscopy for histomorphologic evaluation. Slides had the following landmarks identified (Fig. 2): IAI, implant-abutment interface; fBIC, first bone-to-implant contact; and Ridge, the bone crest. The parameters assessed were: (1) vertical bone loss, the linear distance from IAI to fBIC (IAI-fBIC); (2) ridge loss, calculated as Ridge-IAI plus the initial insertion depth (i.e. 0 or +1.5 mm); (3) depth of infrabony defect, the linear distance from the ridge to the fBIC (Ridge-fBIC); (4) horizontal bone loss (15). Morphometrical analyses were performed by one calibrated examiner (BH), who was not blinded due to the nature of the study. Before the analyses, a calibration procedure was initiated and revealed that repeated measurements of n = 6 different sections were similar at the >95% level. -Statistical analysis The SPSS software (SPSS 18.0, Chicago, IL, USA) and the R software (version 3.0.1; R Foundation for Statistical Computing, Vienna, Austria) were used for statistical analysis. Using the implant as the statistical unit (n = 6), the mean values, standard deviations, and medians for each variable were calculated for each implant in each animal. The R library "nparLD 2.1" (17) was used to perform the Brunner-Langer nonparametric analysis of longitudinal data in factorial experiments. Effects of IAI placement depth, implant type, and their interaction on all parameters were assessed. The alpha (α) error was set at 5%. -Clinical findings Healing was uneventful for all implants. Clinically, plaque accumulation was associated with hyperplasia and redness of the mucosa after ligature-induced plaque formation. Marginal alveolar bone loss was confirmed by radiographic evaluation. -Histological evaluation Supra-alveolar bone loss was seen in all planes of the sections. In the buccal aspects, supra-alveolar bone losses were prominent and the majority of implants (20/24) presented supra-alveolar bone loss without an infrabony defect (Fig. 3). In the lingual and distal aspects, supra-alveolar bone losses were less pronounced compared to the buccal aspects (P < 0.05). The buccal orientation had significantly larger IAI-fBIC in comparison to the lingual and distal orientations (P < 0.001), and the lingual and distal orientations did not differ significantly (P > 0.05). As a result, the lingual and distal measurements of each implant were averaged for use in the analyses comparing implant subgroups. Results of the histometric measurements are presented in Table 1.
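As a brief illustration of the landmark-based parameters defined in the histomorphometric analysis above, the sketch below computes them from apico-coronal landmark positions. The sign convention (distances measured apically from the IAI are positive) and the function name are assumptions made for illustration; horizontal bone loss is measured directly on the section and is therefore not derived here.

```python
# Hedged sketch of the histomorphometric parameters defined above.
# Assumed convention: distances in mm, measured apically from the
# implant-abutment interface (IAI = 0); values coronal to the IAI are negative.
def defect_parameters(fbic_below_iai_mm, ridge_below_iai_mm, insertion_depth_mm):
    """fbic_below_iai_mm : distance from the IAI down to the first bone-to-implant contact (fBIC)
    ridge_below_iai_mm : distance from the IAI down to the bone crest (Ridge)
    insertion_depth_mm : 0.0 for crestal implants, 1.5 for subcrestal implants
    """
    vertical_bone_loss = fbic_below_iai_mm                           # (1) IAI-fBIC
    ridge_loss = ridge_below_iai_mm + insertion_depth_mm             # (2) Ridge-IAI + initial insertion depth
    infrabony_defect_depth = fbic_below_iai_mm - ridge_below_iai_mm  # (3) Ridge-fBIC
    return vertical_bone_loss, ridge_loss, infrabony_defect_depth

# Example: a subcrestal implant whose crest ended 0.5 mm apical to the IAI
# and whose fBIC ended 2.5 mm apical to the IAI.
print(defect_parameters(fbic_below_iai_mm=2.5, ridge_below_iai_mm=0.5, insertion_depth_mm=1.5))
# -> (2.5, 2.0, 2.0)
```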
In the results of the ANOVA-type statistic for IAI-fBIC, ridge loss, Ridge-fBIC and HBL (Table 2), there was no significant interaction between IAI placement depth and implant type (P > 0.05). Regarding bone defect configurations, the frequency distributions of Class Ia-e and Class II defects (ridge to IAI) in the four groups are summarized in Table 3. In particular, 50% of implants in the subcrestal groups presented without Class II defects due to the nature of the subcrestal placement of the IAI, even though the ridge loss was more pronounced. Bone defects were most frequently of Class Ic (75%), followed by Class Ie (25%), in the subcrestal groups. In the crestal groups, bone defects were most frequently of Class Ic (50%), followed by Class Ib (33%). The main effect of IAI placement depth was significant, with mean ridge loss, depths of infrabony defect (Ridge-fBIC) and widths of infrabony defect (HBL) significantly greater for the subcrestal groups compared to the crestal groups (P < 0.001). The main effect of IAI placement depth was not significant for vertical bone loss (IAI-fBIC) (P = 0.938). The main effect of implant type was significant for ridge loss (P = 0.005), with mean ridge loss significantly greater for B implants compared to A implants. The main effect of implant type was not significant for vertical bone loss (IAI-fBIC), depths of infrabony defect (Ridge-fBIC), and widths of infrabony defect (HBL) (P = 0.098, P = 1.000 and P = 0.120, respectively). Discussion In the present study, peri-implant bone defects around crestal implants and subcrestal implants subjected to ligature-induced peri-implantitis were analyzed. The results indicated that, irrespective of the implant type, implant placement depth had a significant effect on peri-implant bone defect configurations under experimentally induced peri-implantitis. When compared with the crestal groups, the depth and width of the peri-implant infrabony defect were significantly greater in subcrestal implants. In this study, peri-implantitis was induced by ligatures in dogs, which is a useful model for evaluating the pathogenesis of peri-implantitis (18). Subgingival bacterial accumulation after ligature placement led to soft tissue inflammation and bone loss. Available evidence indicated that the ligature-induced peri-implantitis bone defects in dogs were comparable with naturally occurring lesions observed in humans (18). With respect to the peri-implant bone defect configurations, the ridge loss, depth of infrabony defect (Ridge-fBIC) and width of infrabony defect (HBL) around the subcrestal implants were significantly greater than around the crestal level implants. Despite a more pronounced ridge loss, the subcrestal positioning of the implant helped maintain the ridge at the IAI level. Therefore, supra-alveolar bone loss combined with a shallow circumferential infrabony defect was frequently observed in crestal implants, while deep and wide infrabony defects were present in subcrestal implants. Compared to other studies, which inserted implants at the crestal level under ligature-induced peri-implantitis (19,20), the depths and widths of infrabony defect obtained in this study show similar results. The more marked depths and widths of infrabony defect around the subcrestal implants may be due to two principal reasons. First, subcrestal implants were placed in a more apical position initially, which led to a more advanced bone defect before ligature placement (5).
This speculation is in agreement with the results of a systematic review, which concluded that subcrestal positioning of the IAI was associated with a higher net bone loss compared to implants placed in a crestal position (21). However, this is not the only reason to explain the result of the present study, as one should note that the subcrestal positioning of a morse-taper IAI may help retain the bony coverage of the rough surface under non-inflamed conditions, which leads to significantly lower IAI-fBIC (2,5-7). As previous studies have shown, subcrestal positioning of a morse-taper IAI is accompanied by narrow HBL (22). In the present study, the IAI-fBIC around subcrestal implants was comparable to that of the crestal implant group, with greater HBL being present in the subcrestal implant group. Furthermore, bone loss was more pronounced in subcrestal implants compared to crestal implants during the period of ligature-induced peri-implant infection (23). This may be attributed to the epithelial dimension around subcrestal implants being larger than that around crestal implants (5,6), which led to greater peri-implant probing depth after ligature-induced plaque accumulation (23). Previous studies indicated that there was a positive correlation between the peri-implant probing depth and the level of periodontal pathogens (24), and that the quality and quantity of the bacterial attacks were related to the severity of peri-implant destruction (25). The results of the present study are in agreement with the results of a recent prospective clinical study by Cassetta et al. (11), who inserted 576 implants in 270 patients and took peri-apical radiographs at prosthetic loading and at the 60-month follow-up. They reported that peri-implant bone loss was significantly higher in subcrestal implants with a platform-switched morse taper connection. Unfortunately, information on peri-implant soft tissue parameters such as plaque index, gingival index and probing depth was lacking, which makes it difficult to judge whether oral hygiene had any effect. In contrast, the 36-month results from a prospective split-mouth clinical trial (9) showed that crestal bone loss around platform-switched implants placed at subcrestal levels was similar to that around implants placed at crestal levels under good oral hygiene maintenance. These data, together with the results of the present study, indicate that implants inserted at the subcrestal position can function well in a healthy condition; however, in the case of subgingival plaque accumulation, the bone defect seems to be different when compared with implants inserted in the crestal position. From a clinical perspective, the pattern of the bone defect may affect the approach and potential outcome of peri-implantitis treatment (26). Upon further evaluation of the results, it was observed that both commercial implant types had similar bone defect configurations under inflammation, except that the ridge loss was greater in the B implant group. The difference in ridge loss should be interpreted with caution, because a previous study indicated a greater tendency toward ridge loss with B implants compared to A implants prior to placement of the ligature (23). No differences were found between the two different implants in terms of depths and widths of infrabony defect, although the differences between the two implants included the implant-abutment connection, neck shape, implant surface characteristics and thread design. It should also be noted that the present animal trial had several limitations.
Firstly, in contrast to animal models in which ligature-induced peri-implantitis is allowed to progress spontaneously after ligature removal, in this study the ligatures were kept in place throughout the induction period; therefore, the influence of the ligatures cannot be entirely excluded. To minimize the influence of the ligatures, they were maintained without being changed or added to during the experimental period, which reduced the traumatic influence on the surrounding tissues and decreased the influence of the operator on the location of the ligature. Secondly, all implants were evaluated under unloaded conditions. A previous study reported that bone resorption was more severe when the implant was overloaded in the presence of plaque-induced inflammation (27). Despite its limitations and its preliminary character, this study indicates that the shape of peri-implantitis bone defects was influenced by the depth of implant placement. Subcrestal implants showed a significant infrabony defect, while crestal implants presented supra-alveolar bone loss combined with a shallow infrabony defect. Conclusions Within the limits of this study, it is concluded that implants placed at the subcrestal position displayed greater infra-osseous defects than those placed at the crestal position in a ligature-induced peri-implantitis model.
2018-04-03T04:27:16.045Z
2017-12-24T00:00:00.000
{ "year": 2017, "sha1": "ed0c6dac37e028ff9d3c5a400b69bd0a4bc4e0f9", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4317/medoral.22032", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "89e101bbbf9d89ec1588f2457f27827fc58d42e9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
115285640
pes2o/s2orc
v3-fos-license
Reducing of process model uncertainty towards smart machining systems * Ass. prof. Neven Trajchevski (neven.trajchevski@gmail.com) – Goce Delchev University, Macedonia; prof. Mikolaj Kuzinovski (mikolaj.kuzinovski@mf.edu.mk), assoc. prof. Mite Tomov (mite.tomov@mf.edu.mk) – Faculty of Mechanical Engineering, Cyril and Methodius University in Skopje, Macedonia; prof. dr. hab. inż. Piotr Cichosz (piotr.cichosz@pwr.edu.pl) – Department of Machine Tools and Mechanical Engineering Technologies, Faculty of Mechanical Engineering, Wroclaw University of Technology, Poland This paper presents an approach to empirical modeling of cutting process physical phenomena with measurement uncertainty parameters attached to the model exponents/coefficients. The approach is presented through an example of creating a power mathematical model for the average cutting temperature in turning, with details about the uncertainty contributions from different experimental plans. The approach is proposed to be implemented as usual practice during empirical modeling, so that the resulting models fit the needs of smart machining systems and the needs of interoperability between researchers. KEYWORDS: uncertainty, empirical modeling, smart machining system, cutting temperature, cutting forces
The evolution of machining, followed by the development of computer aided design (CAD)/computer aided manufacturing (CAM)/computer aided engineering (CAE) systems and process monitoring and control modules, has brought us the possibility of faster machining with increased accuracy and precision. This is the result of implementing a number of new hardware and software tools from various manufacturers. However, if we compare it to the advances in other areas of industry, we face a significant drawback as a result of the lack of standards in the area of interoperability of the vast number of modules, controllers, and software tools. This means that the advances achieved within some elements or by some manufacturers cannot be used widely, due to the closed hardware and software components as well as the copyrights. The current focus of development in the field is the creation of smart machining systems (SMS) [2-4]. As defined in [2], an SMS is a machine that knows its capabilities, comes up with the most efficient way of producing a correct part the first time, every time, and checks and monitors itself using data to help close the gap between the designer, the manufacturing engineer, and the shop floor. SMS are envisioned to overcome the drawbacks stated previously and to provide future development based on an open architecture system. In Fig. 1 an example of an SMS architecture is shown. One focus of this paper is to propose what the approach of the creators of the knowledge base (upon which the Supervisory System depends) should be, so that research results can be used in the SMS and so that scientific and engineering practices become interoperable. Our practice shows that considering and accounting for the measurement errors and the empirical model uncertainty is as important as the modeling itself; it can be the key element in finding a common base between different researches, and it is essential to the SMS knowledge base. Otherwise, the empirical modeling results are most probably being used with a wrong or differing interpretation. Empirical modeling and uncertainty towards SMS The mutual need of the machine and the engineers, besides knowing the mathematical relations between the physical phenomena that occur and the machining process itself, is knowing the limits within which these relations are applicable and reliable. Smart knowledge-based adaptive optimization systems should select the information upon which they will make a decision from a list of numerical, theoretical, heuristic or empirical models. The more reliable these models are, the less self-monitoring will be necessary. Herein we try to identify the different approaches to empirical modeling and their value regarding interoperability and reliability. The most common approach to empirical modeling is to measure the physical phenomena of interest in the experimental hyperspace and then fit the mathematical model to these data. The discrepancy between experimental results from different researches is mostly a result of different metrological practices [1,5], and all of this can be associated with how the researcher approaches the experiment and the experimental errors. In our opinion, the differences in views based on published papers can be summarized as follows: • The empirical errors are minimized. In this approach the empirical modeling is done by assuming that the cutting process parameters are the ones which are programmed, and the measured values are considered accurate. The regression coefficient is usually generalized as the measure of the error of the experimental modeling when the adequacy test of the fitted model is positive. • The empirical errors are considered. In this approach the measurement uncertainty of the measuring equipment is considered. The drawback is that the uncertainty of the cutting process itself is not considered. • The measurement uncertainty of the measurement equipment and the cutting process itself is considered. Although with this approach all the sources of the measurement uncertainty are considered and identified, the uncertainty is associated with a single measurement of the experimental plan, and there is a lack of information on how this influences the accuracy of the mathematical models or the determined mathematical model coefficients.
• A comprehensive approach of identifying and presenting the empirical errors. Our view and proposal is to consider the measurement uncertainty of the measuring equipment, the measurement uncertainty of the cutting process itself, and the uncertainty resulting from the mathematical modeling (of the experimental plan), and to represent the measurement uncertainty within the final result of the experimental research, i.e. the mathematical model coefficients. In Fig. 2 such an approach is presented, where all the sources of errors are accounted for in the uncertainty parameters, which are added to the final mathematical model coefficients. Fig. 2. Influence of the error sources on the mathematical modeling. Furthermore, we will describe one research example and the experimental research stand shown in Fig. 2, which is intended for the research of the forces and the average cutting temperature during turning. In the figure we can note that, during a single measurement of one of these quantities, the influence of the measuring equipment (a, b), the influence of the personal computer interface and software (c, d) and the influence of the calibration (e, f) are considered; further, when building the mathematical model using a full factorial experimental plan by the design of experiments (DOE) methodology, the influence of the selected mathematical modeling procedure or plan (g, h) is considered. The research was performed, and the results, including the proposed approach, are the subject of a detailed description and publication [6]. However, for this paper it is of interest to present the final results and the significant influence of the selected experimental plan as one of the previously described neglected sources of error. Experimental research Within the experimental measurements and investigation in [7], results have been recorded for the relative expanded uncertainty of a single measurement of the cutting force component and the average cutting temperature. Even if we compare them with other authors' results and apply the best measurement practices, we can agree that the value of the relative expanded uncertainty is less than 10%; in our experiments it was 8% for the cutting force measurement and 2% for the average cutting temperature. We believe that such values are not comprehensive enough to be taken as a measure of the error of the experimental research. Therefore, we use such distributed values of single measurements within the DOE experimental plan, which can have 20 single measurements for the four-factor full factorial design (2^4 + 4) or 11 single measurements for the half replica (2^(4-1) + 3), and further combine them and propagate such distributed single measurement values through the DOE regression matrix in order to find the exponents of the desired mathematical model and their uncertainties.
The experiment was performed under the following conditions: workpiece material carbon steel EN C55; cutting tool holder Kennametal IK.KSZNR-064 25×25; cutting insert Hertel SNGN 120704, mixed ceramics MC2 (Al2O3 + TiC); cutting tool geometry κr = 85°, κr1 = 5°, γ0 = −6°, α0 = 6°, λS = −6°. The mathematical model is shown by (1), where Θ is the average cutting temperature, v is the cutting speed, f is the feed rate, ap is the depth of cut, rε is the cutting tool nose radius and ci are the mathematical model coefficients (exponents), while the cutting parameters are varied between two levels each, as presented in table I. Θ = c0 · v^c1 · f^c2 · ap^c3 · rε^c4 (1) The results of the investigation are presented in table II, where we can see the values of the fitted coefficients of the mathematical model; every one of these coefficients is accompanied by the uncertainty parameter of the expanded measurement uncertainty, which is propagated through the same equation from which the coefficient is calculated, following the ISO Guide to the expression of uncertainty in measurement.
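To make the fitting and propagation steps concrete, the sketch below shows one plausible way of estimating the exponents of a power model such as (1) by ordinary least squares on log-transformed data and of propagating single-measurement standard uncertainties through the regression matrix. The factor levels and temperature values are placeholders, not the data of table I or table II, and the exact computation followed by the authors under the ISO Guide may differ in detail.

```python
# Hedged sketch: log-linear fit of a power model like (1) and first-order
# (GUM-style) propagation of single-measurement uncertainties to the exponents.
# All numerical values below are placeholders, not the levels from table I.
import numpy as np

# Half-replica 2^(4-1) plan for (v, f, ap, r_eps); the last factor is set
# from the product of the coded levels of the first three (defining relation).
runs = np.array([
    [ 90, 0.10, 0.5, 0.4],
    [180, 0.10, 0.5, 1.2],
    [ 90, 0.30, 0.5, 1.2],
    [180, 0.30, 0.5, 0.4],
    [ 90, 0.10, 2.0, 1.2],
    [180, 0.10, 2.0, 0.4],
    [ 90, 0.30, 2.0, 0.4],
    [180, 0.30, 2.0, 1.2],
])
theta = np.array([520., 690., 560., 720., 540., 700., 585., 760.])  # measured temperatures (placeholders)
u_theta = 0.02 * theta                      # ~2 % relative standard uncertainty per single measurement

# Linearized model: ln(theta) = ln(c0) + c1*ln(v) + c2*ln(f) + c3*ln(ap) + c4*ln(r_eps)
A = np.column_stack([np.ones(len(theta)), np.log(runs)])
y = np.log(theta)
u_y = u_theta / theta                       # standard uncertainty of ln(theta)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # [ln(c0), c1, c2, c3, c4]

# Propagation through the regression matrix:
# cov(c) = (A^T A)^-1 A^T diag(u_y^2) A (A^T A)^-1
P = np.linalg.inv(A.T @ A) @ A.T
cov_c = P @ np.diag(u_y**2) @ P.T
u_c = np.sqrt(np.diag(cov_c))               # standard uncertainties of the coefficients
print(coef, 2.0 * u_c)                      # expanded uncertainty with coverage factor k = 2
```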
Discussion By applying the proposed approach, we have calculated the measurement uncertainties of the mathematical model exponents/coefficients, which can be considered the final result of the experimental research. This presents a different way of considering the measurement uncertainty with respect to the presented measurement uncertainty of a single measurement. There is a huge difference between the relative expanded uncertainties of the exponents within one experimental plan, as shown in column 4. While for a single measurement the relative expanded uncertainty was lower than 10%, in this case the relative expanded uncertainty can vary between 5 and 50%. This is a result of the different propagation models generated from the regression matrices, and it shows that the measurement uncertainty of the single measurements influences the mathematical model coefficients differently. Now, if we compare the results of the different experimental plans, column 4 and column 7 in table II, between the full factorial plan and the half replica, we can notice that there is a difference between the plans as a consequence of the different regression propagation models of the plans. It is important to note that the high relative expanded uncertainty of the coefficients of the mathematical model results in significantly higher confidence boundaries of the calculated response surface compared to the confidence interval that can be presented within the DOE methodology or while analyzing the measurement uncertainty of a single measurement. This is a significant explanation of the differences in results between laboratories, and we consider these results a contribution to expressing the real empirical model reliability. Moreover, this is the essential information which should be added to the mathematical models of the SMS, which should be able to use them in a proper way with a real estimation of their capability in the process of the optimization of the cutting process. Conclusion As a result of this work we can highlight that presenting the error in experimental research on the machining process should be done by a comprehensive approach, as presented in this paper, so that the experimental research results can be proposed for use in the envisioned SMS and so that we have a common base for interoperability between different laboratories. As a comprehensive approach, presenting the error of the experimental research not only for the single measurement but also for the whole experimental investigation, by calculating the measurement uncertainty of the mathematical model exponents/coefficients, can be considered. These measurement uncertainties must include the uncertainty that arises from the cutting process itself besides the uncertainty of the measuring system. The presented very high relative expanded uncertainty gives the real picture of the empirical modeling and properly raises the question of the reliability of the calculated exponents. Having the detailed budget of measurement uncertainty gained by the proposed approach allows the sources of errors to be discovered and can guide researchers toward lowering the contributions. Although this approach is not simple to apply, it is highly recommended to implement it as usual practice in the experimental research. As a final recommendation we can propose that experimental and measurement practice can be considered good if the single measurement relative expanded uncertainties are below 5%, with the intention of fitting model exponents with reliable uncertainty. ■
2019-04-16T13:28:56.796Z
2018-10-08T00:00:00.000
{ "year": 2018, "sha1": "b96e54bc92280a6ad0358ac088a32a6782874367", "oa_license": "CCBY", "oa_url": "http://mechanik-science.com/index.php/mechanik/article/download/356/353", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "b96e54bc92280a6ad0358ac088a32a6782874367", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
84838971
pes2o/s2orc
v3-fos-license
Analysis of community led total sanitation and its impacts on groundwater and health hygiene This study was carried out to determine the extent to which the Community Led Total Sanitation (CLTS) approach leads to improved sanitation, and its potential threats to groundwater quality and the health of people. A comparative study was carried out between eight CLTS and non-CLTS villages to measure the outcomes of the CLTS approach. Water samples were collected to assess the level of contamination in groundwater sources near pits in villages where the CLTS approach was adopted. Semi-structured interviews, focused group discussions (FGDs) and transit walks were used for data collection. Results revealed a tendency toward a higher level of groundwater contamination in CLTS as compared to non-CLTS villages, which might be because of pit latrines in the area. Water for hand washing is available, but the use of soap depends upon the economic status of the households. However, a 5% increase in hand washing practices was noticed in the field data collected in CLTS-implemented villages as compared to non-CLTS villages. In addition, waterborne disease prevalence was also noticed in CLTS villages, but some improvements were observed in terms of hygiene among the people in CLTS-implemented villages. The findings showed that the drinking water quality has deteriorated in the study areas, which could be linked to the promotion of sanitation systems that do not break the pathogen cycle. INTRODUCTION Access to improved sanitation and safe drinking water is a serious issue in many developing countries, including Pakistan. Globally, 2.5 billion people lack access to improved sanitation, while 748 million people lack access to improved drinking-water. It is estimated that 1.8 billion people use a source of drinking-water that is faecally contaminated. Worldwide, 1 billion people practice open defecation (UNICEF, 2014). The Joint Monitoring Program (JMP) developed a strategy for progress in sustainable access to safe water and basic sanitation by 2025, but many countries are not on track to achieve the targets (WHO, 2010). Community Led Total Sanitation (CLTS) is widely recognized as an innovative participatory approach to rural sanitation (Kar, 2005), with the aim of open defecation free (ODF) communities, promoted through community examination and action (Kumar and Shukla, 2008). Bacterial pathogens cause some of the most feared infectious diseases, such as cholera, typhoid and dysentery, which are quite common in Pakistan. Inadequate sanitation is estimated to cost Pakistan 3.94% of GDP. In Pakistan, only 9 out of 10 have access to water and more than 3 out of 10 do not have access to sanitation (Howard et al., 2001). However, recent studies have shown that hand washing with soap can play an important role in the eradication of pneumonia (Curtis and Cairncross, 2003). In Indonesia, CLTS has created the opportunity for communities to take better control over their sanitation and health outcomes (Mukherjee and Shatifan, 2008). CLTS helps in reducing such health incidences. In Pakistan, over 15 million people have no choice but to collect dirty water from unsafe sources, and over 93 million people, more than half of the population, do not have access to adequate sanitation (Water Aid, 2015). Inadequate water supply, sanitation and hygiene have led to a higher rate of waterborne diseases, which in turn increases the mortality and morbidity rates in Pakistan.
Diseases related to water, sanitation and hygiene (WASH) account for 110 deaths of children under 5 every day in the country. Lack of sanitation facilities in schools is deterring children, particularly girls, from enrolling and staying in school. Girls' menstrual hygiene needs are rarely accommodated in schools, serving as a further deterrent (UNICEF, 2014). To explain variations in water quality, one would have to take into consideration hydrogeology, topography, soil conditions and underground water levels. The distance between the source of contamination and the point of abstraction affects the removal and elimination of bacteria (Cave and Kolsky, 1999). The use of poorly constructed sewage treatment works and land application of sewage can lead to groundwater contamination close to water supply sources (Pedley and Howard, 1997). For this reason, not only should coliforms, including Escherichia coli, be detected, but their numbers must also be estimated in order to assess the degree of pollution and hence the danger to health (Pant, 2004; Mehta, 2009; WSP, 2008); as found elsewhere, high contamination in drinking water has been reported in CLTS villages. CLTS is a very effective and innovative social communication process, which creates the right social pressure to ban open defecation totally and to adopt hygienic behavior (Halder, 2005). The CLTS approach has been widely used in South Asian and African countries, but there are limitations in this approach that vary from country to country. In order to highlight the gaps, there is a need for explicit research with reference to each country. In Pakistan, there is a parallel approach, the Pakistan Approach to Total Sanitation (PATS), but there are still gaps which need to be addressed. In the CLTS approach, people make pit latrines which are not lined with proper sealing material, and people use those latrines with water for anal cleaning. No research has been conducted on the potential impact of pits on groundwater sources in Pakistan. Although behavior change is the ultimate goal of CLTS, the impact on health and hygiene needs to be determined in CLTS villages. The objective of this study was to know the impact of CLTS-based sanitation interventions on groundwater sources and on the hygiene and health of the community. This research is expected to provide new insights for improvements in the CLTS process in developing countries. RESEARCH METHODOLOGY The study was carried out in district Mardan in Khyber Pakhtunkhwa, Pakistan. The district lies from 34° 05' to 34° 32' north latitude and 71° 48' to 72° 25' east longitude. The total area of the district is 1632 km2. Generally, the streams flow from north to south. Most of the streams drain into the Kabul River. Kalpani, an important stream of the district, rises in Baizai and, flowing southward, finally joins the Kabul River. The summer season is extremely hot. A steep rise of temperature is observed from May to June. Even July, August and September record quite high temperatures. The temperature reaches its maximum in the month of June, that is, 43.5°C (110.3°F). Most of the rainfall occurs in the months of July, August, December, and January (Figure 1). Eight villages (four CLTS and four non-CLTS) were purposely selected (Table 1). In each village, 10 households and 10 key persons, religious leaders and local health practitioners were purposely selected for interview.
The total number of households in all villages was 344, and semi-structured interviews were conducted with the heads of 182 households. This sample size was calculated with a 95% confidence level and a 5% confidence interval. Data were also collected through transit walks and personal observations. The transit walks helped to identify the hygienic conditions and open defecation practices around the villages. The data were later analyzed in a simple Excel sheet by categorizing various observations regarding hygiene and hand washing. A total of 114 samples were collected from 38 underground drinking water sources in the selected villages. The distance between the pit/septic tank and the water sources was measured, and water samples were collected to determine the level of contamination from every source. Three water samples were collected from each source in order to get more accurate results. Rotary pumps (bores), hand pumps and dug wells were the sources from which samples were collected for determination of contamination. The samples were then analyzed in the laboratory. RESULTS AND DISCUSSION Drinking water availability is an important component in the development of a rural community. The CLTS approach aims at ending open defecation through behavior change. The average distance between the water source and the pit in the CLTS villages was measured and was found to be 10 feet on average. In the past, people used to make bathing areas and latrines near the wells to fetch water easily for bathing and ablution. Currently, some of the households who could afford the costs have constructed pour-flush latrines. A shift in the use and construction of latrines was found in the community. The contamination can be linked with the seepage of waste water from the pit/septic tank towards the source. Most of the pits were built near the water sources for easy access to water for anal cleaning. Most of the dug wells were not properly protected. The literature shows that even if groundwater is free of any microbial contamination, it may become rapidly contaminated if protective measures at the point of abstraction are not implemented and well maintained (Schmoll et al., 2006; Nawab and Esser, 2005). The water samples were analyzed in the laboratory and the results showed variations in the amount of E. coli. Yaqoob Banda, which is a CLTS village, shows a high rate of contamination, as shown in Figure 2. Similarly, Said Azim Killi also showed a high level of contamination, and hence the water is not fit for drinking. Based on the aforementioned analysis, it can be assumed that CLTS villages show a higher rate of contamination as compared to non-CLTS villages. Relationship of pit and water source contamination (CLTS Village) The water samples taken from households which were near dry pits showed that 87.5% of the water is unfit for drinking, as shown in Figure 6. It can be assumed that the sources may be contaminated due to seepage from the pits into the water source. Seepage from a pit is only possible if water is used for flushing and the soil is sandy, because sandy soil is more favorable for contaminant movement. According to the CLTS worker, "The water table in the area is so deep that we do not tell people to make raised pits". So, the water is almost safe from pit contamination. It means that the contamination could be due to other reasons, and the pit cannot be the only reason for contamination of water sources.
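As an aside on the household sampling described in the methodology above, the reported 182 interviews out of 344 households is what the standard finite-population sample-size formula gives for a 95% confidence level and a 5% margin of error. The sketch below reproduces that arithmetic under the usual assumption of maximum variability (p = 0.5), which the paper does not state explicitly.

```python
# Hedged sketch: finite-population sample size (Cochran's formula with a
# finite-population correction). p = 0.5 is an assumption (maximum variability).
import math

N = 344      # total households across the eight villages
z = 1.96     # z-score for a 95% confidence level
p = 0.5      # assumed population proportion
e = 0.05     # margin of error (the 5% confidence interval)

n0 = (z ** 2) * p * (1 - p) / e ** 2      # infinite-population sample size, about 384
n = n0 / (1 + (n0 - 1) / N)               # finite-population correction
print(math.ceil(n))                       # -> 182, matching the reported number of interviews
```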
However, according to reported studies (UNICEF, 2014; Water Aid, 2015; Cave and Kolsky, 1999), there is growing concern about the likelihood of pit latrine effluent infiltrating into groundwater reservoirs for well water supply systems. Pit latrine contents leach downwards and down slopes for distances that vary per season and soil type (Chidavaenzi et al., 2000). Mehta (2009) and her team in India also found that in all the CLTS villages, water contamination was the highest. Comparison of bacteriological contamination In CLTS villages, 73% of water samples were contaminated, while in the non-CLTS villages 69% of samples were found to be contaminated with E. coli, as shown in Figure 2. There is variation in the level of contamination, in which the CLTS villages showed a slightly higher rate of contamination. There might be various factors behind this contamination in CLTS villages, which need to be investigated further. Figure 3a shows E. coli colonies which were collected in CLTS villages. Although colonies were also found in non-CLTS villages (Figure 3b), the frequency was much lower. Waterborne diseases were noticed in the villages, which may be linked with the pathogens in the drinking water. As a result, the water sources are unsafe to be used for drinking purposes in both cases, but the occurrence is lower in non-CLTS villages. Impacts of CLTS on hygiene Hygiene is a basic component of the CLTS approach, and this is achieved through behavior change. During an interview with a village Imam (religious leader) in a non-CLTS village, he said, "In the present age, the latrine is the best option for good hygiene and a clean environment, because the latrine gives a safe place for defecation, and hygiene is also one of the important aspects of Islam". It was observed that people know about various aspects of cleanliness, but some of the people cannot afford to buy soap, because the use of soap is not a priority of the community. In CLTS villages, people use only water for hand washing after defecation and before taking meals and rarely use soap, but this hand washing behavior is much less common in non-CLTS villages. It shows that the CLTS approach was good at bringing awareness about hand hygiene in the community. Data were collected regarding the type of material used for anal cleaning. The data showed that 36% of the people use water for anal cleaning in pour-flush latrines, while 64% of people use soil for the cleaning purpose during open defecation in the field. It was found that people in both types of villages use water for washing hands after urination, if water is available. In the CLTS villages 60% of children wash their hands and 40% do not, whereas in the non-CLTS villages nearly 44% of children wash their hands and 55% do not. It was found that 90% of people wash their hands after defecation in CLTS villages, while 85% of people wash their hands after defecation in non-CLTS villages, as shown in Figure 4. According to the collected data, 60% of the public gathering places in the CLTS villages were clean, while in non-CLTS villages 22% of public places were found clean. It shows that the CLTS approach has a greater role in the improvement of hygiene in the area, which is a good indicator of behavior change. Impacts of CLTS on health The health data collected from the households were analyzed, and it was found that waterborne diseases occur mostly in the summer or with seasonal changes.
These data were collected in the winter season, so enteric fever, flu and chest infections were also very prevalent in addition to waterborne diseases. In CLTS villages, it was found that 4% of people had diarrhea, 81% typhoid, 2% malaria, 2% scabies, and 5% other diseases. In non-CLTS villages, 3.5, 1, 1, 1 and 16% of diarrheal, typhoid, malaria, scabies and other diseases, respectively, were found, as shown in Figure 5. The waterborne diseases found during the fieldwork included diarrhea, typhoid, malaria and scabies. These diseases were found both in the CLTS and non-CLTS villages, with different frequencies. According to Pedley and Howard (1997), contaminated groundwater can contribute to high morbidity and mortality rates from diarrhoeal diseases and sometimes lead to epidemics. Normally, people have health problems, especially fever and diarrhea. According to one respondent, "The diseases are prevailing due to dirty places". It was found that most waterborne diseases were more common in the summer season. Differences in these diseases are shown in Figure 5. It was found that waterborne disease incidences were more common in CLTS villages in comparison with the non-CLTS villages. Other diseases were also found, such as unknown fever, tuberculosis and hepatitis. A CLTS specialist from a local implementing partner organization shared that they use diarrhoeal diseases as an indicator to mobilize the community. For this purpose, they also collect information from 10% of the total households, where they focus on women, because women are involved in taking care of the children. According to the CLTS specialist, 70% of households are involved during the triggering of CLTS in the villages. Conclusions This study was carried out from the perspectives of the health, hygiene and groundwater quality situations in CLTS and non-CLTS villages. Results revealed an increase in groundwater contamination in CLTS-implemented villages as compared to non-CLTS villages. Increased contamination was found in the water sources of those households who use pit latrines. It was observed that after the CLTS intervention, people upgraded the pits into pour-flush latrines, as this was not that expensive and was also a sign of status in the village. The impact on hygiene is dependent upon the hygiene indicators, and these indicators are dependent upon the behavior and the economic condition of the people. There was a positive change toward hygiene in CLTS-implemented villages. The impact of the CLTS intervention on health incidences, especially waterborne and water-related diseases, was not considerable, as no improvement in the health status of the people was found. People in the study area face many cultural and economic challenges, which weaken the CLTS approach in the study area. After conducting this study, the researchers would like to recommend a few points to be considered in future studies and CLTS implementation procedures, to make this approach more results-oriented and sustainable. RECOMMENDATIONS (1) Soil structure and texture should be investigated before CLTS interventions in rural areas. (2) The distance between the water source and the pits must be determined before the implementation of the CLTS approach, to avoid seepage. (3) Water sources should be installed upstream, whereas latrines should be constructed downstream, to avoid the flow of contaminants.
(4) The pits should be sealed with clay or another suitable material to prevent seepage, in order to protect the groundwater from contamination over time. (5) Focus must be given to the improvement of hygiene, in addition to triggering for the prevention of open defecation practices. (6) Further research needs to be conducted on the adsorption and infiltration rates of black water in different soils in the country.
The giant steps in surgical downsizing toward a personalized treatment of vulvar cancer

Abstract

The present article aims to highlight the importance of the shift toward personalized surgical treatment of vulvar cancer. Current international literature regarding the surgical treatment of vulvar cancer was evaluated, including several studies and systematic reviews. A radical surgical approach, such as en bloc resection, was the first therapeutic option and the standard of care for many years, even though it was burdened with a high complication rate and was frequently disfiguring. Taussig and Way introduced the radical vulvectomy approach with en bloc bilateral inguinal-femoral lymphadenectomy; subsequently, modified radical vulvectomy was developed, with a wide radical excision of the primary tumor. The role of inguinofemoral lymphadenectomy (unilateral or bilateral) has also changed over the years, particularly with the advent of SLN biopsy as a minimally invasive surgical approach for lymph node staging in patients with unifocal cancer <4 cm and without suspicious groin nodes. A more personalized and conservative surgical approach, consisting of wide local or wide radical excisions, is necessary to reduce complications such as lymphedema or sexual dysfunction. The optimal surgical management of vulvar cancer needs to consider tumor dimensions, stage, depth of invasion, the presence of carcinoma at the surgical margins of resection and grading, with the goal of making the treatment as individualized as possible.

Background

Vulvar cancer (VC) accounts for 5% of all gynecologic cancers, usually affecting patients aged over 65 years. 1,2 In recent decades, the incidence of VC in young women has been rising alarmingly. 3 Squamous cell carcinoma is the most common histological type (up to 90%). 4 Human papilloma virus (HPV)-related dysplasia is typical of younger women; in older patients, there is a connection with vulvar dermatoses, such as lichen sclerosus. 5,6 The clinical presentation includes a visible or self-palpated lesion, frequently with pruritus, discharge, or bleeding. 7 The staging of vulvar cancer is surgical, based on the 2009 International Federation of Gynecology and Obstetrics (FIGO) classification and the American Joint Committee on Cancer (AJCC) seventh edition TNM staging. Vulvar biopsy is mandatory to assess stromal invasion; clinical and radiologic assessment of tumor dimension is mandatory too; moreover, surgical and/or radiological assessment of pelvic lymph node spread and distant metastasis is necessary. 8,9 The management of VC depends on disease stage. The surgical approach is determined by tumor size and location, histologic and cytologic grade, depth of invasion, vascular space invasion and, particularly, nodal metastasis, which represents the most important prognostic factor. 10,11 For early-stage disease, pelvic magnetic resonance imaging (MRI) can be useful to define tumor dimension and locoregional disease spread, whereas for advanced-stage disease, a whole-body computed tomography (CT) scan or a whole-body positron emission tomography (PET)/CT scan should be considered for an accurate evaluation. 12,13 Moreover, every patient needs a complete blood count, infectious screening, and renal and hepatic function tests; a physical examination with a cervical Pap smear is mandatory as well. 14 The identification of new molecular markers for prognostic purposes is needed. Epidermal growth factor receptor (EGFR) immunohistochemical overexpression/gene amplification and p53 overexpression have been correlated with a worse prognosis.
Programmed death ligand PDL-1 seems to be a useful target for new therapeutic approach. The positivity to certain molecular markers does not influence the surgical treatment. 15 Results: Surgical Treatment Over the last years, the approach to VC treatment has evolved from invasive surgery to more conservative approaches, becoming as personalized as possible, with the integration of new surgical techniques. In addition, the radical removal of the tumor can be achieved through a more tissue-sparing vulvar surgery. 16,17 Early-stage vulvar cancer Surgical management Early-stage VC includes FIGO Stages I and II, with tumor size ≤4 cm and stromal invasion ≤1 mm. Nodal spread is absent. Stages IA, IB, and II ≤4 cm are treated surgically. For tumors >1 mm invasion and dimensions up to 4 cm, surgical approach consists in a modified radical vulvectomy, with surgical lymph node assessment. This surgical technique includes superficial and deep fascia lata, including separate incisions for tumor and groin node dissection 18 ; in this way, radical vulvectomy approach with en bloc bilateral inguinal-femoral lymphadenectomy has been overcome, sparing several complications (Figures 1 and 2). In fact, the postoperative management of the traditional surgical approach was very difficult because of the onset of many complications and surgical sequelae (infection, necrosis, pain, functional and esthetic distortion, deterioration of sexual life and psychological health) 14 Di Saia and Hacker developed the concept of minimal resections margins, limited to the tumor. [20][21][22][23] These results have been confirmed by a large study conducted by the Gynecologic Oncology Group. 24 Safe margins are considered and are maintained from 1 to 2 cm (according to Heaps' study). 25 The resection of primary vulvar tumor aims to save organs, such as the urethra, clitoris, and anal sphincter, while maintaining an adequate surgical radicality for the patient; the site of incision depends on tumor location. 18,21 For substage VC IA ≤1 mm treatment consists of a wide local excision, adequate if margins are negative. The term "wide local excision" or "simple vulvectomy" (synonymous of wide local excision) is referred to a type of excision without the inclusion of deep fascia but limited to subcutaneous tissue; tumor margin is 1 or 2 cm above the primary vulvar tumor. 20,21 There are situations where close margins are more common (proximity to the clitoris, urethra, or vulva), but the National Comprehensive Cancer Network (NCCN) Guidelines recommend re-excision of positive margins or those classified as close (<8 mm). 26 If smaller margins are safe is subject of studies. 27 Moreover, postoperative reconstruction, based on patients' characteristics, after demolitive surgeries has improved esthetic result and psychological acceptance Resection margins The safety of the size of resection margin is debated. Non-pathological margins must be greater than 8 mm 25 Chan et al. suggested that no local recurrence has been registered after at least 8 mm margins distant. 30 The study of Woelber showed that the recurrence rate is the same for lesions with margins of less than 8 mm and at least 8 mm, demonstrating no impact of margins distance on progression free survival (PFS). 31 Arvas et al., assessing the margin status in 61 patients affected by vulvar cancer, analyzed those women with pathological margins ≤2 mm had an high risk of recurrence, compared with the group with >2 mm. 
The intermediate margins value (2-8 mm) was not a predictor of local recurrence. 32,33 The use of re-excision or adjuvant radiotherapy on the basis of close surgical margins alone (2-8 mm) should be carefully considered. 27 Höckel et al. proposed a novel approach for patients with vulvar cancer based on compartmental tumor spread and based on ontogenetic anatomy: in this prospective trial patients were treated with vulvar field resection and anatomical reconstruction, considering anatomy from embryonic development. The extent of deep vulvar resection is not defined with conventional surgical margins and this approach allows to preserve tissue for esthetic reconstruction. 23 However, current recommendations suggest surgical margins of 2 cm and final pathological margin of at least 1 cm. Sentinel lymph node (SLN) and groin treatment Surgical assessment of nodes can be achieved with bilateral SLN biopsy or inguinofemoral lymphadenectomy [IFLND]). Node's evaluation is necessary because the risk of occult nodal metastases is up to 30%. 34 Utilization of SLN represents one of the biggest steps for surgical treatment of vulvar cancer, avoiding complications of routine bilateral lymphadenectomy (risk for lower-extremity lymphedema (approximately 30%-70%). [35][36][37][38][39] This routine approach was changed by Gynecologic Oncology Group (GOG) study in 1987, avoiding groin node dissection in microinvasive VC, with a low risk of nodal metastases and in 1993 36 Homesley assessed that VC localized >2 cm from the midline, drains to ipsilateral groin nodes, and did not metastasize to contralateral part; in this way bilateral groin dissection became not mandatory. The advent of SLN biopsy provides new opportunities for patients, reducing lymphedema or lymphocists, out increasing the risk of groin recurrence. 40,41 SLN is the first lymph node that drains from tumor; GOG 173 and GROINSS-V-1 were the two multicenter observational studies that have analyzed the safety and feasibility of SLN as valid alternative to IFLND. 35,42 For midline vulvar tumors, bilateral SLN should be performed; whereas for lesions that are located ≥2 cm from the midline, unilateral node dissection is sufficient. 20 Currently SLN biopsy has become the standard care for surgical treatment of VC with size ≤4 cm and clinically and/or radiological negative inguinofemoral lymph node. In case of positive SLN, the postoperative management is debated: alternatives include completion lymphadenectomy or external beam radiation therapy (EBRT). The ongoing prospective trial (GOG 270/Groningen International Study on Sentinel Nodes in Vulvar Cancer (GROINSS-V-II) is evaluating if radiation therapy is safe in patients with SLN micrometastes (Table 1). [43][44][45][46][47][48][49] For women with diagnosis of vulvar cancer, the presence of lymph node metastases is the most important prognostic factor. 50 The radical lymph node (LND) dissection was used for years, although a very high morbidity (lymphedema, nerve injury) with compromised quality of life. 51 Moreover, histological analysis confirms the presence of lymph node metastases only in the 25%-35% of all patients; in this way the benefits from the LND procedure were limited SLN dissection as valid alternative to LND has been proposed to avoid overtreatment and to control complications. GROINSS-V is a prospective multicentric study: 400 patients with the same tumoral characteristics (size, stromal invasion, and negative preoperative diagnostic assessment) were treated with sentinel procedure. 
In patients with negative biopsy, systematic lymphadenectomy was omitted. Groin recurrence rate was only 2% after almost 3 years. No significative differences with patients with early-stage vulvar cancer treated with groin lymphadenectomy were noted. 37 The number of groin recurrence in sentinel-node negative patients seems to be comparable to the other reported for early-stage vulvar cancer treated with lymphadenectomy. So, the effect seems to be the same. 52 Oonk et al. demonstrated from the GROINSS-V data that even when only isolated cells are found in the sentinel node, the rate of no sentinel node metastasis is 4.1%, and in cases of metastasis of less than 5 mm, 11.7%. 42,43 GOG 173 is a prospective study in early-stage vulvar cancer, in which patients with SLN mapping followed by standard complete IFLND. The falsenegative rate of an SLN biopsy in GOG 173 was 2.7% in patients whose tumors were <4 cm. 41 Thanks to results of these studies, SLN was considered safe, sparing serious complications. A systematic review and meta-analysis of the cumulative data on SLN detection reported a pergroin detection rate of 87% and a false-negative rate of 6.4% and groin recurrence rates appeared to be similar only under optimal conditions (unifocal tumors <4 cm, clinically non-suspicious nodes in the groin, appropriate techniques, and procedures). 53 Recent studies checked safety and feasibility of sentinel node biopsy after vulvar surgery, confirming that this procedure after previous surgery is safe and reflects groin status. 42,54,55 However false-negative sentinel carries a high risk of mostly fatal groin recurrences. Particularly midline tumors larger than 2 cm have to be treated carefully, because they are mostly found in cases with groin recurrences after sole SLN. 56 In conclusion, patients with unifocal vulvar cancer, tumor size less than 4 cm, and clinically negative groin assessment can undergo SLN and vulvar surgery in a center with experienced team; if the sentinel node biopsy is positive, patient should undergo systematic IFLND. However, the optimal postoperative management of positive SLN is debated; in fact, adjuvant radiotherapy seems to be a valid alternative. The results of GROINSS-V-II trial show that for positive SLN with metastasis ≤2 mm radiotherapy is a valid therapeutic option instead of IFLND; toxicity is minimal. For patients with positive SLN and metastasis >2 mm, radiotherapy does not seem to be a safe alternative but systematic IFLND is the best option. 42 The current standard approach for detection of SLN includes the use of lymphoscintigraphy with technetium 99 m with intraoperative blue dye (methylene blue or indigo carmine), whereas the use of blue dye alone is not recommended. 53 Management of locally advanced vulvar cancer For women affected by VC, with unresectable disease, treatment of choice consists of radiotherapy (RT) combined with chemotherapy, usually cisplatin. Radical resection in the past was the standard care for the treatment of locally advanced VC; GOG 101 demonstrated that only 3% of patients with T3 and T4 tumors had residual unresectable tumors following chemoradiation. 57 Particularly, in tumors with negative node metastasis RT limited to the vulvar tumor alone can be sufficient; instead, it is necessary to involve the pelvis and groin in case of positive lymph nodes. In cases with groin nodes involvement, surgery would be the best choice, but RT is a valid alternative for fragile patients not eligible for surgery. 
Clinically suspicious nodes need to be confirmed by biopsy; if there is no radiographic or clinical evidence of nodal metastases, groin nodes should be evaluated by IFLND, because of the risk of false-negative. 44 For patients with Stage IIIB, IIIC, and IVA, chemoradiation to the vulvar tumor, groin, and pelvis is the gold standard. Additional surgery after this approach can be considered in cases of residual disease. Total pelvic exenteration is reserved for selected patients. In fact, this approach is an option for patients with involving of urethra, anus or vagina, and other organs. Surgical morbidity is high with median survival of 11 months. 58 A recent study by MD Anderson Cancer Center included reported a 5-year overall survival rate of 22%. 59 Women who have no other viable alternatives can benefit from this approach. Management of recurrence Recurrent disease occurs in 15%-35% of women with VC. Surgery can be an adequate treatment for recurrent disease limited to the vulvar area, with a cure rate up to 80%; the incidence of isolated local recurrence is 20%. The type of surgery is based on the location and dimensions of the recurrence (wide local excision, hemivulvectomy, or radical vulvectomy). 58,60 Different studies focused on exclusive surgical approach for local recurrence, with a rate of second recurrent of 25%-50%. 61 The management of groin recurrence is debated and difficult because patients die of recurrence. Surgery, followed by radiotherapy, is currently the treatment of choice. Surgery (IFLND or debulking surgery of groin recurrence), either alone or in combination with radiotherapy, has been investigated and patients with combined therapy (surgery and chemoradiotherapy) had a better overall survival. 62 Decision about the best treatment choice mainly depends on location of recurrence, performance status of patient, previous treatment, resulting in a tailormade approach. Discussion Surgical treatment of VC has changed in the last years. The standard mutilating radical vulvectomy has evolved, promoting a conservative and personalized approach. The approach to groin surgery is deeply changed too. Wide local excision and modified vulvectomy are surgical options that preserve women' s quality of life, reducing side effects like lymphedema, sexual dysfunction, urinary complications, and psychological compromission. No randomized clinical trial has been conducted to compare wide local excision to radical vulvectomy. Oncologic safety seems to be equal. 63 Patients with early stage unifocal squamous cell cancer of the vulva (<4 cm) and no suspicious and/or enlarged lymph nodes at imaging should be considered for SLN biopsy. 52 In recent years, quality of life of patients undergoing surgery for vulvar cancer has become a central topic in different studies, particularly risk of lymphedema, causing discomfort heaviness and reduced mobility. A prospective trial by GOG 32 demonstrated that the incidence of lower limb lymphedema is 65% at 6 months after IFLND. On the contrary, in case of SLN this rate is 2%. 34 The objective of GOG study 244 is to evaluate the incidence and risk factors for lymphedema associated with surgery for gynecologic malignancies, but there were too few VC patients for certain results, therefore with lack of exhaustive results. 64 In conclusion, surgery is the primary treatment of vulvar cancer. Early-stage disease has a very good prognosis and treatment should be individualized. 
The procedure should only be performed by an experienced multidisciplinary team and in well-selected patients. Individualization of surgical treatment makes it possible to improve the quality of life and psychological state of these women without sacrificing oncologic safety.
Synthesis and characterization of a green and recyclable arginine-based palladium/CoFe2O4 nanomagnetic catalyst for efficient cyanation of aryl halides The utilization of magnetic nanoparticles in the fields of science and technology has gained considerable popularity. Among their various applications, magnetic nanoparticles have been predominantly employed in catalytic processes due to their easy accessibility, recoverability, effective surface properties, thermal stability, and low cost. In this particular study, cyanuric chloride and arginine were utilized to synthesize an arginine-based oligomeric compound (ACT), which was supported on cobalt ferrite, resulting in a green catalyst with high activity and convenient recyclability for the cyanation reaction of aryl halides. The Pd/CoFe2O4@ACT nanomagnetic catalyst demonstrated excellent performance in the cyanation of various aryl iodides and bromides, yielding favorable reaction outcomes at a temperature of 90 °C within a duration of 3 hours. The synthesized nanoparticles were successfully characterized using various techniques, including FTIR, FE-SEM, EDX/MAP, XRD, TEM, TGA, BET, and ICP-OES. Moreover, the Pd/CoFe2O4@ACT catalyst exhibited remarkable catalytic activity, maintaining an 88% performance even after five consecutive runs. Analysis of the reused catalyst through SEM and TEM imaging confirmed that there were no significant changes in the morphology or dispersion of the particles. Ultimately, it was demonstrated that the Pd/CoFe2O4@ACT nanomagnetic catalyst outperformed numerous catalysts previously reported in the literature for the cyanation of aryl halides. Introduction Biocatalysts have played a signicant role in scientic research on sustainable chemistry as a source of inspiration.A type of active biocatalytic reaction system is synthesized by functionalized superparamagnetic nanoparticles with biologically active materials, which have high chemical stability, low cost, 1 low toxicity, 2 and can be easily separated, recovered, and reused. 3In addition, arginine is an important biological molecule due to its wide range of physiological and medicinal functions, which serves as a precursor for synthesizing several biologically signicant substances, including amino acids 4 proteins containing glutamate, polyamines, urea, nitric oxide, and proline. 5he present study employed cyanuric chloride and arginine to create an arginine-based oligomer (ACT).In addition, in recent times, nanocatalysts have shown successful performance in various reactions, but their practical application has been limited by the cumbersome process of catalyst recovery through ltration, leading to the loss of solid catalysts. 6To overcome this challenge and improve recyclability, magnetic nanocatalysts have been developed.Magnetic nanoparticles have emerged as a strong and high-surface-area support for heterogeneous catalysts.The magnetic properties of these nanocatalysts enable easy separation and recovery using an external magnetic eld, which can optimize operational costs and improve the purity of the nal product. 
7,8One of the signicant types of magnetic nanoparticles owing to its excellent cubic magneto crystal is cobalt ferrite (CoFe 2 O 4 ).In catalysis, ferrites are said to be efficient materials.0][11] Lately, scientists have been focusing on creating nanocatalysts using noble metals, as they exhibit outstanding catalytic performance, possess nanoscale structures, showcase favorable electronic/optical properties, and offer large surface areas. 12,13Among these metals, palladium, Ni, Rh, and Ir can be mentioned.][16][17] For example, in 2021, 2,4-dichlorophenol (2,4-DCP) was electrochemically dechlorinated using magnetic Pd/CoFe 2 O 4 catalysts by Xue and Feng. 18One of the most important reactions catalyzed by palladium is the cyanation reaction, which involves the introduction of a cyano group onto an aryl halide (Ar-X) and has signicant signicance in synthetic and industrial chemistry.This is due to the fact that resulting aryl nitriles from the cyanation reaction serve as crucial intermediates that can be further transformed into various functional groups, including carboxylic acids, imines, esters, amines, tetrazoles, aldehydes, and amides.In addition, they are used as versatile organic compounds in chemical and pharmaceutical industries, for example in the synthesis of herbicides, agrochemicals, and dyes. 19Recently, the exploration of transition metalcatalyzed cyanation reactions involving aryl halides, utilizing metals such as Ni, 20 Cu, 21 and Pd, 22 has attracted considerable attention.Additionally, a range of protocols employing different cyano sources like NaCN, 23 KCN, 24 TMSCN, 22 Zn(CN) 2 (ref.25), and CuCN 26 have been reported for transition metal-catalyzed cyanation reactions.Nevertheless, most of them are confronted with signicant drawbacks, including toxic metal cyanides in stoichiometric amounts and the requirement of harsh reaction conditions.To overcome this problem, several less-toxic metal-free cyanide sources including malononitrile, butyronitrile, acetonitrile, and benzyl cyanide have been reported. 27,280][31][32] However, the main limitations of these catalysts are their recovery and reuse.In 1973, Takagi et al. 33 introduced the initial application of palladium-catalyzed cyanation, utilizing potassium cyanide in DMF at temperatures ranging from 140 °C to 150 °C for a duration of 2 to 12 hours, specically targeting bromo-and iodoarenes.Subsequently, in 1986, Chatani and Hanafusa 34 presented an alternative method for cyanation by employing TMSCN as a cyanide source and Et 3 N as a solvent, focusing on various aryl iodides and using Pd(PPh 3 ) 4 as a catalyst.Later on, alternative sources of cyanide such as Zn(CN) 2 or CuCN were utilized in palladium-catalyzed reactions.However, both of these sources generate signicant amounts of heavy metal waste. 
35,36In this study, ACT was selected as an integral component of the catalyst because the synthesized ACT structure, in addition to the presence of a high number of NH groups, has triazine rings, and, due to the presence of these rings, ACT is an active site for guest species and a suitable substrate for chemical reactions as a catalyst.1,3,5-Triazine (and its derivatives) is a very versatile entity, from synthetic (covalent bonds) and supramolecular (coordination, H-bonds, and p-interactions) points of view.Triazine derivatives have proven their great potential in this emerging area of material chemistry, for their p-interaction abilities and their tendency to be involved in intricate H-bond networks.The strong p-p stacking of triazine rings in ACT with aromatic substrates makes reactants more accessible toward Pd active sites, thereby accelerating coupling reactions. 37,38As a safe and effective catalyst, Pd/CoFe 2 O 4 @ACT nanoparticles were investigated for the cyanation of aryl halides.It was found that using this catalytic system is an inexpensive, simple, environmentally friendly, and efficient method for cyanation reactions.In addition, the Pd/CoFe 2 O 4 @ACT nanoparticles are magnetic and can be easily separated and recovered by an external magnet and can be used ve times without signicant activity loss, which demonstrates the practical application of nanocatalysts.The noble metal palladium has also been used, and its superiority in the cyanation reaction of aryl halides compared to other metals has been proven in previous works.Moreover, in this work, using a cyanide source and a less toxic solvent, benzyl cyanide and acetonitrile, respectively, and mild conditions of a temperature of 90 °C and 3 hours, we have obtained products with good performance.Over recent decades, various transition metals have been utilized to catalyze the cyanation reactions of aryls and aryl halides using different cyano sources.Despite many studies on benzonitrile synthesis reactions, we found only one report with a homogeneous palladium catalyst and benzyl cyanide as the cyanide source, 39 so we were motivated to investigate this reaction under heterogeneous catalytic conditions. Materials and methods First, a detailed review of the equipment used for the present study is presented.Then, a thorough and concise explanation of synthesis processes is provided in the following. General remarks Material requirements were met by purchasing materials from Aldrich (China) and Merck (Germany) companies without any further purication.Thin-layer chromatography was used to monitor the reaction.Using silica-gel 60 F-254 as a matrix, TLC was conducted on glass plates.A Nicolet FT-IR 100 spectrometer was used to obtain infrared (IR) spectra.A Philips X-pert 1710 was used at room temperature to obtain X-ray diffraction (XRD) data.In addition, an energy-dispersive X-ray (EDAX) analysis of the nanoparticles was performed using a TESCAN MIRA III FE-SEM to determine their size and morphology.A Philips EM 208S at 120 kV was used to perform transmission electron microscopy (TEM).In the range of 25-800 °C, a thermal gravimetric analyzer was used to perform thermogravimetric analysis (TGA). 
Synthesis of ACT

In separate processes, 2 mmol of arginine and 1 mmol of cyanuric chloride were dissolved in tetrahydrofuran (5 mL) within round-bottomed flasks under ultrasonication for 10 minutes. The two solutions were then combined, and subsequently 1 mmol of potassium carbonate was added to the mixture at 60 °C under ultrasonication for 3 hours. The mixture was further stirred at 70 °C for 12 hours. The white solid product was then separated with a centrifuge, eluted with tetrahydrofuran and ethanol, and dried at 80 °C in a vacuum oven. Fig. 1 provides a visual representation of the procedure.

Synthesis of CoFe2O4@ACT

After dissolving 1.5 mmol of cobaltous nitrate hexahydrate and 3 mmol of iron(III) nitrate nonahydrate in 10 mL of deionized water within round-bottomed flasks under ultrasonication for 10 minutes, the two solutions were combined. Subsequently, 0.6 g of ACT was added to the mixture, which was then sonicated using an ultrasonic probe for 30 minutes. To create an alkaline medium with a pH of 12, sodium hydroxide (0.2 M) (20 mL, 0.16 g) was added, and the resulting solution was placed in an autoclave for 24 hours at 120 °C. A magnet was used to separate the end product, a light-brown solid, from the medium, and then deionized water and ethanol were used to purify it. A vacuum oven was used to dry it at 80 °C.

Synthesis of Pd/CoFe2O4@ACT

First, 0.2 g of CoFe2O4@ACT was dispersed in deionized water within a round-bottomed flask, and subsequently 0.11 g of Pd(OAc)2 was added to the solution, which was then sonicated for 30 minutes under appropriate conditions. Next, 0.26 g of NaBH4 was introduced into the mixture and stirred at ambient temperature (25 °C) for a period of 24 hours. A magnet was used to isolate the product, a dark-brown solid, and ethanol and deionized water were used to elute it. Finally, the product was dried at 80 °C in a vacuum oven to obtain the desired end product.

General procedure for the cyanation of aryl halides

As solvent, 3 mL of acetonitrile was used to dissolve 1 mmol of aryl halide, 1.5 mmol of benzyl cyanide, 5 mmol of sodium hydroxide, and 0.03 g of catalyst within a test tube, and the mixture was stirred at 90 °C for three hours. The progress of the reaction was monitored using thin-layer chromatography. The mixture was cooled to room temperature after completion of the reaction. An ethyl acetate extraction was then performed on the product after the magnetic catalyst was separated with a magnet.

The broad absorption bands in the region 3000-3300 cm−1 correspond to the carboxyl group in ACT (red curve). The bands observed at about 3300 cm−1 correspond to stretching vibrations from free or adsorbed water on the surface of CoFe2O4. 41 M-O (M = metal) stretching bands appear at approximately 578-580 cm−1 for Pd/CoFe2O4@ACT (orange curve) and CoFe2O4 (purple curve). 42 This result is in agreement with the formation of the Pd/CoFe2O4@ACT nanoparticles.

TGA. Thermogravimetric analysis (TGA) is a valuable tool to measure the organic content and thermal stability of various substances. As shown in Fig. 5, ACT decomposed from approximately 220 °C, whereas CoFe2O4@ACT decomposed from approximately 300 °C. In Pd/CoFe2O4@ACT, one event is attributed to the loss of adsorbed water molecules at up to 100 °C, while those observed at 400-585 °C were associated with the thermal decomposition of the organic moiety. Moreover, the thermogram establishes that this catalyst was stable up to 400 °C.

Catalyst preparation
SEM. Fig.
6(a and b) illustrates typical SEM images of the Pd/ CoFe 2 O 4 @ACT nanoparticles synthesized by the hydrothermal reaction, this analysis being used for investigating the morphology, surface, and size of nanoparticles.Due to the anisotropic growth of crystals on the surface, large bulks appeared on the surface with agglomeration.Moreover, it was observed that the appearance of Pd/CoFe 2 O 4 @ACT is shapeless sphere-like, and the particle size of Pd/CoFe 2 O 4 @ACT is about 52-57 nm. TEM.The morphology and particle size of the Pd/CoFe 2 -O 4 @ACT photocatalyst were studied from the TEM images of nanoparticles, shown in Fig. 7(a and b).The TEM image indicates that particle size and morphology distribution are uniform for nanoparticles prepared by the hydrothermal treatment method.The size of the smallest nanoparticles was determined at about 50 nm, and the morphology of the particles was sphere-like. EDAX.EDX (or EDS) measurement results indicate the quantitative presence of C, O, N, Fe, Pd, and Co in the samples and, from this analysis, no extra impurities are present in the nanoparticles.EDX analysis of the as-synthesized catalyst is shown in Fig. 8. The EDX analysis revealed that the Pd/CoFe 2 O 4 @ACT nanocatalyst was predominantly composed of carbon (32.37%), oxygen (26.86%), and nitrogen (20.31%) according to Table 1.Additionally, cobalt, palladium, and iron were present in minor amounts, accounting for 5.71%, 6.24%, and 8.52% of the total composition, respectively.The images (Fig. 9a-f) are labeled as Fe-KA, C-K, O-K, Co-KA, N-K, and Pd-LA for Pd/CoFe 2 O 4 @ACT that present all the key elements of C, N, O, Fe, and Pd, demonstrated clearly with elemental mapping images (Fig. 9) without the presence of any signature of substituted metals; these also demonstrate the uniform dispersion of Pd on Pd/CoFe 2 O 4 @ACT. BET. N 2 sorption isotherms show the morphological properties of Pd/CoFe 2 O 4 @ACT (curve a), CoFe 2 O 4 @ACT (curve b), and ACT (curve c) in Fig. 10.The surface area of Pd/ CoFe 2 O 4 @ACT (curve a), CoFe 2 O 4 @ACT (curve b), and ACT (curve c) was 0.97, 0.67, and 0.55 m 2 g −1 , respectively.The average pore diameter was determined using the Barrett-Joyner-Halenda (BJH) method, and it was obtained as 44.78, 50.84, and 105.32 Å, respectively.One of the important properties of nanomaterials is porosity, which is applied for catalytic usage due to the catalytic activity being improved by a high surface area.According to the IPUAC classication, the N 2 sorption isotherm of Pd/CoFe 2 O 4 @ACT is a type II isotherm, indicative of a nonporous or microporous material. Initially, we focused our attention on the catalytic activity of the Pd/CoFe 2 O 4 @ACT catalyst, which was comprised of several catalysts (Table 2) in the cyanation reaction.The synthesized Pd/CuFe 2 O 4 @ACT, Pd/NiFe 2 O 4 @ACT, and Pd/CuBi 2 O 4 @ACT nanoparticles (Table 2, entries 2, 3, and 4) were evaluated, which could afford the product in 65%, 50%, and 25% yields, respectively.In contrast, the synthesized Pd/CoFe 2 O 4 @ACT nanocatalyst (Table 2, entry 1) was suitable for this reaction due to its high reaction efficiency with a yield under same conditions of 88%. Then, the amount of catalyst was checked for this reaction (Table 3), and 0.03 g of catalyst was found as optimum (Table 3, entry 1).Hence, no change in the reaction yield was observed with the increasing amount of catalyst. 
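As a rough sanity check on what the optimal 0.03 g catalyst charge corresponds to, one can convert it into an approximate palladium loading per mole of aryl halide. The short estimate below assumes that the palladium content of the fresh catalyst is close to the 0.38 mmol g−1 measured by ICP for the recovered material in the recycling study reported later; that assumption, and hence the resulting figure, is indicative only.
\[
n_{\mathrm{Pd}} \approx 0.03\ \mathrm{g} \times 0.38\ \mathrm{mmol\ g^{-1}} \approx 0.011\ \mathrm{mmol}
\qquad\Longrightarrow\qquad
\frac{n_{\mathrm{Pd}}}{n_{\mathrm{ArX}}} \approx \frac{0.011\ \mathrm{mmol}}{1\ \mathrm{mmol}} \approx 1.1\ \mathrm{mol\%}.
\]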
As shown in Table 4, the effect of several solvents with different polarities on the cyanation reaction was studied.DMF, DMSO, toluene, and H 2 O were checked, but a trace amount of product was obtained.In the presence of ethanol and THF, the amount of product was improved, and an enhancement of reaction yield was observed when acetonitrile was used as a solvent to 88%. Aerward, several bases were tested, and Table 5 summarizes their results.Low reaction yield was observed when the reaction was carried out in the absence of a base (Table 5, entry 4), while the amount of product was enhanced using Cs 2 CO 3 and NaOAc as a base (Table 5, entries 2 and 1).NaOH was determined as the base of choice for this reaction. In addition, by changing the temperature of the reaction to 90 °C, the amount of product was enhanced to 95% (Table 6, entry 4).Moreover, a higher temperature than 90 °C was checked, and the yield of the reaction did not improve.Different amounts of benzyl cyanide were tested (Table 7).An amount of 1.5 mmol of benzyl cyanide seemed sufficient to afford a high yield.Thus, the amount of product decreased with an amount of benzyl cyanide lower than 1.5 mmol. Aerward, the substrate scope for the cyanation reaction was investigated, and the effect of varying the aryl halide was also explored when reacted with benzyl cyanide (Table 8).Moreover, the scope of the reaction is presented in Table 8.A diverse range of functional groups was analyzed in optimized reaction conditions and compared to electron withdrawing and electron donating groups, such as NO 2 , OCH 3 , OH, Br, and NH 2 .This led to a satisfactory yield of product.Probably, 1-bromo-2nitrobenzene was tolerated sterically hindered, which afforded a moderate yield (Table 8, entry 11).Furthermore, 1-iodo-4methoxybenzene, 1-iodo-4-nitrobenzene, and 1-iodo-4hydroxybenzene underwent the reaction to afford a good yield (Table 8, entries 13, 8, and 9).In addition, a comparison of reaction efficiencies between aryl iodides and aryl bromides reveals the former's superiority, attributed to the weaker C-I bond in contrast to the C-Br bond. 14,27dditionally, in the utilization of the Pd/CoFe 2 O 4 @ACT nanoparticles, it was observed that electron-rich or electron-decient substitutions were not pivotal factors affecting product yields.Furthermore, analysis from Table 8 indicates that aryl halides with O-substitution (2d and 2h) yielded lower results than their counterparts with P-substitution (2b and 2e), suggesting a hindrance effect as a contributing factor. Further, for the cyanation of aryl halides, Pd/CoFe 2 O 4 @ACT was compared with other reported catalysts, as shown in Table 9.We have developed a catalyst system (Pd/CoFe 2 O 4 @ACT) that has almost the same efficiency as other reported systems in a shorter time and at a lower temperature, which highlights the superiority of our catalyst. Mechanism Moreover, an acceptable mechanism for the cyanation reaction is shown in Fig. 13.At rst, Pd(0) was produced by reducing Pd(II).Then, the oxidative addition of Pd(0) to the aryl halide CX bond was carried out by producing a Pd(II) complex.NaOH attacked benzyl cyanide and removed its hydrogen to produce more strong nucleophiles of benzyl cyanide.Then, the benzyl cyanide nitrogen attacked the complex, and halogen was removed.Next, the halogen attacked and changed the ligand of the complex.Finally, the nal product was acquired by reductive elimination. 
39 Study of reusability and leaching test of the catalyst

Based on the findings of this study, Fig. 14 shows how Pd/CoFe2O4@ACT was investigated during the cyanation reaction of 1-iodo-4-nitrobenzene in the presence of benzyl cyanide under optimal conditions. After the reaction, Pd/CoFe2O4@ACT was separated by an external magnet, washed with ethanol, and dried. Then, the catalyst was reused five times. The recovered catalyst was studied after the fifth run with SEM, TEM, and ICP analyses to check the stability of the catalyst under the reaction conditions. TEM and SEM analyses after recycling the catalyst five times showed that its particle size, shape, and morphology are not much different from those of the fresh catalyst, which indicates the robustness of the catalyst. The recovered nanocatalyst at the fifth stage (0.38 mmol g−1) did not exhibit significant palladium leaching when compared to the original catalyst, according to ICP analysis. In another experiment, when the yield had reached 58%, the catalyst was magnetically separated so that the reaction could run without it. After the scheduled time, no further progress in the reaction was observed, which means that there was no catalyst washout. Additionally, according to Fig. 15, the SEM and TEM images of the catalyst after being used 5 times show no significant changes in the particle size, shape, and morphology compared to the fresh catalyst.

Conclusion

In summary, Pd/CoFe2O4@ACT was successfully designed and synthesized as a nanomagnetic catalyst. For the synthesis of this nanomagnetic and green catalyst, cyanuric chloride and arginine were immobilized on CoFe2O4. The nanocatalyst was investigated using FTIR, XRD, BET, SEM, TGA, TEM, ICP-OES, and EDX/MAP. Characterization studies showed that the particle size of the synthesized magnetic nanoparticles (Pd/CoFe2O4@ACT) is about 52-57 nm. Moreover, these nanoparticles were applied as a green and heterogeneous catalyst for the cyanation reaction of aryl halides using benzyl cyanide as a source of cyanide. The desired products were obtained in a short period of time, at a low temperature, and with a high yield. The main advantages of this method are good yield, simple work-up, stability of the catalyst, and recyclability of the catalyst for five cycles without significant palladium leaching. Furthermore, the Pd/CoFe2O4@ACT nanomagnetic catalyst performed better than several previously reported catalysts for the cyanation of aryl halides.

Fig. 15 (a) SEM and (b) TEM analysis of the catalyst after the 5th cycle.
Table 1 Elements in Pd/CoFe2O4@ACT based on EDX analysis.
Table 2 Comparative activity of some selected catalysts toward the cyanation reaction.
Table 3 Pd-catalyzed cyanation using various amounts of catalyst.
Table 4 Effect of solvent on the reaction yield.
Table 5 Pd-catalyzed cyanation using several bases.
Table 6 Pd-catalyzed cyanation of aryl halides at several temperatures.
Table 7 Pd-catalyzed cyanation of aryl halides using several amounts of benzyl cyanide.
Table 8 Synthesis of benzonitrile derivatives catalyzed by Pd/CoFe2O4@ACT.
Table 9 Comparison of the Pd/CoFe2O4@ACT nanomagnetic catalyst with other reported cyanation catalysts.
On Exceptional Times for generalized Fleming-Viot Processes with Mutations If $\mathbf Y$ is a standard Fleming-Viot process with constant mutation rate (in the infinitely many sites model) then it is well known that for each $t>0$ the measure $\mathbf Y_t$ is purely atomic with infinitely many atoms. However, Schmuland proved that there is a critical value for the mutation rate under which almost surely there are exceptional times at which $\mathbf Y$ is a finite sum of weighted Dirac masses. In the present work we discuss the existence of such exceptional times for the generalized Fleming-Viot processes. In the case of Beta-Fleming-Viot processes with index $\alpha\in\,]1,2[$ we show that - irrespectively of the mutation rate and $\alpha$ - the number of atoms is almost surely always infinite. The proof combines a Pitman-Yor type representation with a disintegration formula, Lamperti's transformation for self-similar processes and covering results for Poisson point processes. Main Result The measure-valued Fleming-Viot diffusion processes were first introduced by Fleming and Viot [21] and have become a cornerstone of mathematical population genetics in the last decades. It is a model which describes the evolution (forward in time) of the genetic composition of a large population. Each individual is characterized by a genetic type which is a point in a type-space E. The Fleming-Viot process is a Markov process (Y t ) t≥0 on M 1 E = ν : ν is a probability measure on E for which we interpret Y t (B) as the proportion of the population at time t which carries a genetic type belonging to a Borel set B of types. In particular, the number of (different) types at time t is equal to the number of atoms of Y t with the convention that the number of types is infinite if Y t has absolutely continuous part. Fleming-Viot superprocesses can be defined through their infinitesimal generators The first summand of the generator reflects the genetic resampling mechanism whereas the second summand represents the effect of mutations. Several choices for A have appeared in the literature. In the present work we shall work in the setting of the infinite site model where each mutation creates a new type never seen before. Without loss of generality let the type space be E = [0, 1]. Then the following choice of A gives an example of an infinite site model with mutations: (1.2) for some θ > 0. The choice of the uniform measure dy is arbitrary (we could choose the new type according to any distribution that has a density with respect to the Lebesgue measure), all that matters is that the newly created type y is different from all other types. With A as in (1.2), mutations arrive at rate θ and create a new type picked at random from E according to the uniform measure, therefore the corresponding process is sometimes called the Fleming-Viot process with neutral mutations. Let us briefly recall two classical facts concerning the infinite types Fleming-Viot process described above (for a more complete picture we refer to the monograph of Etheridge [17]) for the uniform initial condition Y 0 : (i) If there is no mutation, then, for all t > 0 fixed, the number of types is almost surely finite. (ii) If the mutation parameter θ is strictly positive, then, for all t > 0 fixed, the number of types is infinite almost surely. A beautiful complement to (i) and (ii) was found by Schmuland for exceptional times that are not fixed in advance: Theorem 1.1 (Schmuland [37]). 
P(∃ t > 0 : #{types at time t} < ∞) = 1 if θ < 1, and = 0 if θ ≥ 1.

Schmuland's proof of the dichotomy is based on analytic arguments involving the capacity of finite-dimensional subspaces of the infinite-dimensional state space. In Section 6 we reprove Schmuland's theorem via excursion theory. In the series of articles [5], [6], [7], Bertoin and Le Gall introduced and started the study of Λ-Fleming-Viot processes, a class of stochastic processes which naturally extends the class of standard Fleming-Viot processes. These processes are completely characterized by a finite measure Λ on [0, 1] and a generator A. Similarly to the standard Fleming-Viot process, these processes can be defined through their infinitesimal generator

(Lφ)(µ) = ∫₀¹ y⁻² Λ(dy) ∫ µ(da) [φ((1 − y)µ + yδ_a) − φ(µ)]   (1.3)

and the sites of atoms are again called types. For A = 0, the generator formulation only appeared implicitly in [6] and is explained in more detail in Birkner et al. [10], and for A as in (1.2) it can be found in Birkner et al. [9]. The dynamics of a generalized Fleming-Viot process (Y_t)_{t≥0} are as follows: at rate y⁻² Λ(dy) a point a is sampled at time t > 0 according to the probability measure Y_{t−}(da) and a point mass y is added at position a while scaling the rest of the measure by (1 − y) to keep the total mass at 1. The second term of (1.3) is the same mutation operator as in (1.1). For a detailed description of Λ-Fleming-Viot processes and discussions of variations we refer to the overview article of Blath and Birkner [8]. In the following we are going to focus only on the choice Λ = Beta(2 − α, α), the Beta distribution with density

y^{1−α} (1 − y)^{α−1} / (Γ(2 − α) Γ(α)),  y ∈ [0, 1],

for α ∈ ]1, 2[, and mutation operator A as in (1.2). The corresponding Λ-Fleming-Viot process (Y_t)_{t≥0} is called Beta-Fleming-Viot process or (α, θ)-Fleming-Viot process and several results have been established in recent years. The (α, θ)-Fleming-Viot processes converge weakly to the standard Fleming-Viot process as α tends to 2. It was shown in [10] that a Λ-Fleming-Viot process with A = 0 is related to measure-valued branching processes in the spirit of Perkins' disintegration theorem precisely if Λ is a Beta distribution (this relation is recalled and extended in Section 2.3 below). If we choose α ∈ ]1, 2[ and Y_0 uniform on [0, 1], then we find the same properties (i) and (ii) for the one-dimensional marginals Y_t unchanged with respect to the classical case (1.1), (1.2). In fact, for a general Λ-Fleming-Viot process, (i) is equivalent to the requirement that the associated Λ-coalescent comes down from infinity (see for instance [2]). Here is our main result: contrary to Schmuland's result, (α, θ)-Fleming-Viot processes with α ∈ ]1, 2[ and θ > 0 never have exceptional times:

Theorem 1.2. Let (Y_t)_{t≥0} be an (α, θ)-Fleming-Viot superprocess with mutation rate θ > 0 and parameter α ∈ ]1, 2[. If Y_0 is uniform, then P(∃ t > 0 : #{types at time t} < ∞) = 0 for any θ > 0.

One can get a first rough understanding of why this should be true by the following heuristic: Kingman's coalescent comes down from infinity at speed 2/t, i.e. if N_t is the number of blocks at time t then N_t ∼ 2/t almost surely as t → 0. It is known (see [6] or more recently [28]) that the process (N_t, t ≥ 0) has the same law as the process of the number of atoms of the Fleming-Viot process. For a Beta-coalescent with parameter α ∈ (1, 2) we have N_t ∼ c_α t^{−1/(α−1)} almost surely as t → 0 (see [3, Theorem 4]). Therefore Kingman's coalescent comes down from infinity much quicker than Beta-coalescents.
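To make this heuristic slightly more quantitative, one can compare the two asymptotic block counts directly; the display below merely restates the asymptotics quoted above (the constant c_α is the one from [3, Theorem 4] and is not made explicit here).
\[
N_t^{\mathrm{Kingman}} \sim \frac{2}{t}, \qquad N_t^{\mathrm{Beta}} \sim c_\alpha\, t^{-1/(\alpha-1)}, \qquad t \downarrow 0 .
\]
Since 1/(α − 1) > 1 for α ∈ ]1, 2[, the ratio N_t^{Beta}/N_t^{Kingman} is of order t^{(α−2)/(α−1)}, which tends to infinity as t ↓ 0: near time zero the Beta-coalescent retains far more blocks, i.e. it comes down from infinity more slowly than Kingman's coalescent.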
Since the speed at which the generalized Fleming-Viot processes looses types roughly corresponds to the speed at which the dual coalescent comes down from infinity, it is possible that (α, θ)-Fleming-Viot processes do not loose types fast enough, and hence there are no exceptional times at which the number of types is finite. Auxiliary Constructions To prove Theorem 1.2 we construct two auxiliary objects: a particular measure-valued branching process and a corresponding Pitman-Yor type representation. Those will be used in Section 5 to relate the question of exceptional times to covering results for point processes. In this section we give the definitions and state their relations to the Beta-Fleming-Viot processes with mutations. All appearing stochastic processes and random variables will be defined on a common stochastic basis (Ω, G, G t , P) that is rich enough to carry all Poisson point processes (PPP in short) that appear in the sequel. 2.1. Measure-Valued Branching Processes with Immigration. We recall that a continuous state branching process (CSBP in short) with α-stable branching mechanism, α ∈ ]1, 2], is a Markov family (P v ) v≥0 of probability measures on càdlàg trajectories with values in R + , such that where for ψ : R + → R + , ψ(u) := u α , we have the evolution equation For α = 2, ψ(u) = u 2 is the branching mechanism for Feller's branching diffusion, where P v is the law of the unique solution to the SDE driven by a Brownian motion (B t ) t≥0 . On the other hand, for α ∈ ]1, 2[, ψ(u) = u α gives the so-called α-stable branching processes which can be defined as the unique strong solution of the SDE driven by a spectrally positive α-stable Lévy process (L t ) t≥0 , with Lévy measure given by Note that strong existence and uniqueness for (2.3) follows from the fact that the function x → x 1/α is Lipschitz outside zero, and hence strong existence and uniqueness holds for (2.3) until X hits zero. Moreover X, being a non-negative martingale, stays at zero forever after hitting it. For a more extensive discussion on strong solutions for jumps SDEs see [23] and [33]. The main tool that we introduce is a particular measure-valued branching process with interactive immigration (MBI in short). For a textbook treatment of this subject we refer to Li [31]. Following Dawson and Li [12], we are not going to introduce the MBIs via their infinitesimal generators but as strong solutions of a system of stochastic differential equations instead. On (Ω, G, G t , P), let us consider a Poisson point process N = (r i , x i , y i ) i∈I on (0, ∞) × (0, ∞) × (0, ∞) adapted to G t and with intensity measure Throughout the paper we adopt the notatioñ i.e.Ñ is the compensated version of N . It was shown in [12] that the solution to (2.3) has the same law as the unique strong solution to the SDE Now we are going to switch to the measure-valued setting. The real-valued process X in (2.3), (2.5) describes the evolution of the total mass of the CSBP starting at time zero at the mass X 0 = v. We are going to consider all initial masses v ∈ [0, 1] simultaneously, constructing a process (X t ) t≥0 taking values in the space M F Then the measure-valued branching process (X t ) t≥0 can be constructed in such a way that for each v, (X t (v)) t≥0 solves (2.5) with X 0 = F (v), and with the same driving noise for all v ∈ [0, 1]. 
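For the reader's convenience we record the explicit form of u_t(λ) in the α-stable case; this is a standard computation, not specific to the present paper, using the usual convention that E_v[e^{−λX_t}] = e^{−v u_t(λ)} with ∂_t u_t(λ) = −ψ(u_t(λ)) and u_0(λ) = λ.
\[
\partial_t u_t(\lambda) = -\,u_t(\lambda)^{\alpha}, \quad u_0(\lambda)=\lambda
\qquad\Longrightarrow\qquad
u_t(\lambda) = \bigl(\lambda^{1-\alpha} + (\alpha-1)\,t\bigr)^{-1/(\alpha-1)} .
\]
In particular u_t(λ) converges to the finite limit ((α − 1)t)^{−1/(α−1)} as λ → ∞ for α ∈ ]1, 2[, which quantifies the almost sure extinction of the α-stable CSBP in finite time and is closely related to the finiteness of quantities such as Q({w_{1/n} > 0}) invoked in Section 3.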
In what follows, we deal with a version of (2.5) including an immigration term only depending on the total-mass X t (1): is the cumulative distribution function of a finite measure on [0, 1] and we assume (G) g : R + → R + is monotone non-decreasing, continuous and locally Lipschitz continuous away from zero. Moreover, a solution (X t ) t≥0 is strong if it is adapted to the natural filtration F t generated by N . Finally, we say that pathwise uniqueness holds if for any two solutions X 1 and X 2 on (Ω, G, G t , P) driven by the same Poisson point process. The proof of Theorem 2.2 relies on ideas from recent articles on pathwise uniqueness for jump-type SDEs such as Fu and Li [23] or Dawson and Li [12]. Our equation (2.6) is more delicate since all coordinate processes depend on the total-mass X t (1). The uniqueness statement is first deduced for the total-mass (X t (1)) t≥0 and then for the other coordinates interpreting the total-mass as random environment. To construct a (weak) solution we use a (pathwise) Pitman-Yor type representation as explained in the next section. 2.2. A Pitman-Yor Type Representation for Interactive MBIs. Let us denote by E the set of càdlàg trajectories w : R + → R + such that w(0) = 0, w is positive on a bounded interval ]0, ζ(w)[ and w ≡ 0 on [ζ(w), +∞[. We recall the construction of the excursion measure of the α-stable CSBP (P v ) v≥0 , also called the Kuznetsov measure, see [30,Section 4] or [31,Chapter 8]: For all t ≥ 0, let K t (dx) be the unique σ-finite measure on R + such that where we recall that the function (u t (λ)) t≥0 is the unique solution to the equation We also denote by Q t (x, dy) the Markov transition semigroup of (P v ) v≥0 . Then there exists a unique Markovian σ-finite measure Q on E with entrance law (K t ) t≥0 and transition semigroup (Q t ) t≥0 , i.e. such that for all 0 < t 1 < · · · < t n , n ∈ N, Q(w t 1 ∈ dy 1 , . . . , w tn ∈ dy n , t n < ζ(w)) = K t 1 (dy 1 ) Q t 2 −t 1 (y 1 , dy 2 ) · · · Q tn−t n−1 (y n−1 , dy n ). (2.7) By construction and under Q, for all s > 0, conditionally on σ(w r , r ≤ s), (w t+s ) t≥0 has law P ws . The σ-finite measure Q is called the excursion measure of the CSBP (2.3). By (2.8), it is easy to check that for any s > 0 In Duquesne-Le Gall's setting [15], under the σ-finite measure Q with infinite total mass, w has the distribution of ( a (e)) a≥0 under n(de), where n(de) is the excursion measure of the height process H and a is the local time at level a. For the more general superprocess setting see for instance Dynkin and Kuznetsov [16]. We need now to extend the space of excursions as follows: i.e. D is the set of càdlàg trajectories w : R + → R + such that w is equal to 0 on [0, s(w)], w is positive on a bounded interval ]s(w), s(w) + ζ(w)[ and w ≡ 0 on [s(w) + ζ(w), +∞[. For s ≥ 0, we denote by Q s (dw) the σ-finite measure on D given by i.e. Q s is the image measure of Q under the map Let us consider a Poisson point process ( where F and I are the cumulative distribution functions appearing in (2.6). An atom (s i , u i , a i , w i ) is a population that has immigrated at time s i whose size evolution is given by w i and whose genetic type is given by a i . The coordinate u i is used for thinning purposes, to decide wether or not this particular immigration really happened or not. 
If I(1) = 1, then in the special case of branching mechanism ψ(λ) = λ 2 and constant immigration rate g ≡ θ, the total-mass process X t = X t (1) for (2.6) also solves for which Pitman and Yor obtained the excursion representation in their seminal paper [36]. Remark 2.4. The recent monograph [31] by Zenghu Li contains a full theory of this kind of Pitman-Yor type representations for measure-valued branching processes, see in particular Chapter 10. We present a different approach below which shows directly how the different Poisson point processes in (2.6) and in (2.13) are related to each other. The most important feature of our construction is that it relates the excursion construction and the SDE construction on a pathwise level. Observe that an immediate and interesting corollary of Theorem 2.3 is the following: Corollary 2.5. Let g be an immigration mechanism satisfying assumption (G) and let (X t ) t≥0 be a solution to (2.6). Then almost surely, X t is purely atomic for all t ≥ 0. In the proof of our Theorem 1.2 we make use of the fact that the Pitman-Yor type representation is well suited for comparison arguments. If g can be bounded from above or below by a constant, then the righthand side of (2.6) can be compared to an explicit PPP for which general theory can be applied. 2.3. From MBI to Beta-Fleming-Viot Processes with Mutations. Let us first recall an important characterization started in [6] and completed in [12] which relates Fleming-Viot processes, defined as measure-valued Markov processes by the generator (1.3), and strong solutions to stochastic equations. Theorem 2.6 (Dawson and Li [12]). Let Λ be the Beta distribution with parameters (2 − α, α). Suppose θ ≥ 0 and M is a non-compensated Poisson point process on is an (α, θ)-Fleming-Viot process started at uniformly distributed initial condition. Existence and uniqueness of solutions for this equation was proved in Theorem 4.4 of [12] while the characterization of the generator of the measure-valued process Y is the content of their Theorem 4.9. We next extend a classical relation between Fleming-Viot processes and measure-valued branching processes which is typically known as disintegration formula. Without mutations, for the standard Fleming-Viot process this goes back to Konno and Shiga [27] and it was shown in Birkner et al. [10] that the relation extends to the generalized Λ-Fleming-Viot processes without immigration if and only if Λ is a Beta-measure. Our extension relates (α, θ)-Fleming-Viot processes to (2.6) with immigration mechanism g(x) = θx 2−α and for θ = 0 gives an SDE formulation of the main result of [10]. The proof of the theorem is different from the known result for θ = 0. To prove that X S −1 (t) (1) > 0 for all t ≥ 0, Lamperti's representation for CSBPs was crucially used in [10]. This idea breaks down in our generalized setting since the total-mass process X t (1) is not a CSBP. Our proof uses instead the fact that for all θ ≥ 0 the total-mass process is self-similar and an interesting cancellation effect of Lamperti's transformation for selfsimilar Markov processes and the time-change S. In [1] we study (a generalized version of) the total mass process (X t (1), t ≥ 0) and we show that the extinction time T 0 = inf{t ≥ 0 : X t (1) = 0} is finite almost surely if and only if θ < Γ(α). Otherwise T 0 = ∞ almost surely. We will see in the proof of Theorem (2.7) that in both cases lim t→∞ S −1 (t) = T 0 a.s. 
Theorem 2.7 thus gives some partial information on the behavior of X t t≥0 near the extinction time T 0 : converges weakly to the unique invariant measure of (Y t , t ≥ 0). As t → T 0 , almost surely, there exists a (random) sequence of times t 1 < t 2 < . . . < T 0 tending to T 0 such that the sets The first part is a direct consequence of Theorem 2.7 and of the convergence of the (α, θ)-Fleming-Viot process (Y t , t ≥ 0) to its unique invariant measure. The second part is a straightforward application of the so-called lookdown representation of (Y t , t ≥ 0). A sketch of the proof is given in Section 7. Proof of Theorems 2.2 and 2.3 Recall that (s i , u i , a i , w i ) i∈I is a Poisson point process on R 3 + ×D with intensity measure Γ given as in (2.12), and that we use the notation (2.11). We are going to show that for all v ∈ [0, 1] there exists a unique càdlàg process Then we are going to construct a PPP N with intensity dr is well defined when t > 0 and tends to F (v) as t ↓ 0. Therefore, for each v the process 3.1. The Pitman-Yor Type Representation with Predictable Random Immigration. We start by replacing the immigration rate (g(Z s− (1))) s>0 in the right-hand side of (3.1) with a generic (F t )-predictable process (V s ) s≥0 , that we assume to satisfy this will be useful when we perform a Picard iteration in the proof of existence of solutions to (2.6) and (3.1). Then we consider Then we want to show that there is a noise N on (Ω, G, G t , P) such that Z is a solution of an equation of the type (2.6). Definition of N . Let us consider a family of independent random variables (U r ) r≥0 such that U r is uniform on [0, 1] for all r ≥ 0. We also assume that (U r ) r≥0 are independent of the PPP (s i , u i , a i , w i ). Then, for all atoms (s i , u i , a i , w i ) in the above PPP, we define the following point process therefore each N i uses a separate family of (U r ) r∈(r i j ) j∈J i . We note that N i is not expected to be a Poisson point process. Almost surely we have a i = a j for all i = j. For each k ∈ N we set We consider a PPP N • = (r • j , x • j , y • j ) j with intensity measure ν given by (2.4) and independent of (( The filtration we are going to work with is We are going to prove the following Proof. For f = f (r, x, y) ≥ 0 we now set Since V is predictable and we can write then we obtain that (L k · ) k is predictable. Hence, I(t) is F t -measurable and for 0 ≤ t < T We will need the following two facts: (1) Conditionally on w k t and s k ≤ t the process w k ·+t has law P w k t (this follows for instance from (2.7)). (2) Let (w t , t ≥ 0) be a CSBP started from w 0 with law P w 0 . Let M = (r i , x i , y i ) be a point process which is defined from w and a sequence of i.i.d. uniform variables on [0, 1] as N k is constructed from w k and the U r i j . Then for any positive function f Let us start with the case s k ≤ t. Using the above facts we see that Let us now consider the case s k > t. where we need to introduce the indicator that r − > s k + to get a sum of CSBP started from a positive initial mass and thus be in a position to apply the above fact. We conclude that Therefore by the definition (3.6 By [24, Theorem II.6.2], a point process with deterministic compensator is necessarily a Poisson point process, and therefore the proof is complete. Proposition 3.1 tells us how to construct a Poisson noise N from the (s i , u i , a i , w i ). Let us now show that Z solves (2.6) with this particular noise. Proof. 
Using an idea introduced by Dawson and Li [11], we set for n ∈ N * Note that Q({w 1/n > 0}) < +∞ for all n ≥ 1, so that Z n t is P-a.s. given by a finite sum of terms. Moreover, by the properties of PPPs, ((s i , u i , a i , w i ) : w i 1/n > 0) is a PPP with intensity (δ 0 (ds) ⊗ δ 0 (du) ⊗ F (da) + ds ⊗ du ⊗ I(da)) ⊗ 1 (w 1/n >0) Q(dw). Moreover Z n t ↑ Z t as n ↑ +∞ for all t ≥ 0. Now we can write (3.8) Let us concentrate on M n first. We can write, for s i + 1 n ≤ t, where N i is defined in Since Q({w 1/n > 0}) < +∞, only finitely many {A i,n } i such that u i ≤ V s i are non-empty P-a.s and, moreover, the {A i,n } i are disjoint. Then by (3.6) ]0,t]×R + ×R + 1 A i,n (y, r) xÑ (dr, dx, dy) We need first the two following technical lemmas. Proof. Recall that ν α (dx) = c α x −1−α dx. We set J Proof. First recall from (2.9) that E z 1 n Q(dz) = 1 for all n. The proof of (1) is based on the estimate 1 e x ≤ 1 − e −x for x ∈ [0, 1] which follows from differentiating both sides. Of course, the inequality also implies that We apply this estimate to the excursion measure: which goes to zero as argued above. Proof. We have obtained above the representation First, let us note that and moreover and the latter union is disjoint. If we set n (y, r) xÑ (dr, dx, dy) and by Lemma 3.3 where the last equality follows by (2.9). By our assumptions on V the right hand side in the above display converges to 0, as n → ∞. Hence (3.11) also converges to 0, as n → ∞. Let us now deal with (J n t ) ≥0 . Note that we can write The righthand side tends to zero as n → ∞ by Lemma 3.4. Analogously which again tends to 0 as n → ∞ by Lemma 3.4. Therefore and, passing to a subsequence, we see that a.s. (observe that in fact we don't need to take a subsequence since Z n t is monotone nondecreasing in n). Therefore we have obtained the desired results. The proof of Proposition 3.2 is complete. Proof of Theorem 2.3. Let us first show uniqueness of solutions to (3.1). Let v = 1. If (Z i t , t ≥ 0) for i = 1, 2 is a càdlàg process satisfying (3.1) with v = 1, then taking the difference we obtain where the second equality follows by (2.9). By the Lipschitz-continuity of g and the Gronwall Lemma we obtain Z 1 = Z 2 a.s., i.e. uniqueness of solutions to (3.1). The next step is to use an iterative Picard scheme in order to construct a solution of (3.1) (and thus of (2.6)). Let v := 1, and let us set Z 0 t := 0 and for all n ≥ 0 By recurrence and monotonicity of g, Z n+1 t ≥ Z n t and therefore a.s. there exists the limit Z t := lim n Z n t . To show that Z is actually the solution of (3.1) we show first that it is càdlàg (by proving that the convergence holds in a norm that makes the space of càdlàg processes on [0, T ] complete) and then by proving that (3.1) holds almost surely for each fixed t ≥ 0. Let us first show that Z n is a Cauchy sequence for the norm Z = E(sup t∈[0,T ] |Z t |) for which first we set By an analog of Proposition 3.2 we can construct a PPP N n,k with the intensity measure We show now that the right hand side in the latter formula vanishes as n → +∞ uniformly in k. Indeed Then by recurrence E(Z n+1 t ) ≤ Ce tL and by monotone convergence we obtain that i.e. the sequence T 0 E(Z n s ) ds is Cauchy and we conclude that Z n → Z in the sense of the above norm and therefore Z is almost surely càdlàg. The above argument also show that holds almost surely for each fixed t and therefore for all t ≥ 0, i.e. Z is a solution of (3.1) for v = 1. 
Setting V s := g(Z s− (1)) and applying Proposition 3.2, we obtain (3.1) and the proof of Theorem 2.3 is complete. 3.3. Proof of Theorem 2.2. Let us start from existence of a weak solution to (2.6); by Theorem 2.3 we can build a process (Z t (v), t ≥ 0, v ∈ [0, 1]) and a Poisson point process N (dr, dx, dy) such that (3.1) and (2.6) hold. Now, we set Finally, in order to obtain existence of a strong solution, we apply the classical Yamada-Watanabe argument, for instance in the general form proved by Kurtz [26,Theorem 3.14]. Proof of Theorem 2.7 We consider the immigration rate function g(x) = θx 2−α , x ≥ 0. Now g is not Lipschitzcontinuous, so that Theorem 2.3 does not apply directly. However, by considering g n (x) = θ(x ∨ n −1 ) 2−α , we obtain a monotone non-decreasing and Lipschitz continuous function for which Theorem 2.3 yields existence and uniqueness of a solution (X n t (v), t ≥ 0, v ≥ 0) to (2.6). We now define T 0 := 0, T n := inf{t > 0 : X n t (1) = n −1 } and By construction, T 0 := sup n T n is equal to inf{s > 0 : X s (1) = 0}, and moreover X t (1) = 0 for all t ≥ T 0 . By pathwise uniqueness, if n ≥ m then X n t (v) = X m t (v) on {t ≤ T m }, and therefore (X t (v), t ≥ 0, v ≥ 0) is a solution to (2.6) for g(x) = θx 2−α with the desired properties. Pathwise uniqueness follows from the same localisation argument. To prove that the right-hand side of (2.15) is well-defined, i.e. the denominator is always strictly positive, we are going to apply Lamperti's representation for self-similar Markov process. A positive self-similar Markov process of index w is a strong Markov family (P x ) x>0 with coordinate process denoted by (U t ) t≥0 in the Skorohod space of càdlàg functions with values in [0, +∞[, satisfying the law of (cU c −1/w t ) t≥0 under P x is given by P cx (4.1) for all c > 0. John Lamperti has shown in [29] that this property is equivalent to the existence of a Lévy process ξ such that, under P x , the process (U t∧T 0 ) t≥0 has the same law as We now use Lamperti's representation to find a surprisingly simple argument for the wellposedness of (2.15). Proof. In Lemma 1 of [1] it was shown that, if L is a spectrally positive α-stable Lévy process as in (2.3), solutions to the SDE trapped at zero induce a positive self-similar Markov process of index 1/(α − 1). The corresponding Lévy process ξ has been calculated explicitly in [1, Lemma 2.2], but for the proof here we only need that ξ has infinite lifetime and additionally a remarkable cancellation effect between the time-changes. Since, by Lemma 1 of Fournier [22], the unique solution to the SDE (4.2) for X 0 = 1 coincides in law with the unique solution to we see that the total-mass process (X t (1)) t≥0 and exp ξ A −1 (t) t≥0 are equal in law up to first hitting 0. Applying the Lamperti transformation for t < T 0 yields so thatS and A are reciprocal for t < T 0 . Plugging this identity into the Lamperti transformation yields For the second equality we used left-continuity of X(1) at T 0 which is due to Section 3 of [29] because the Lévy process ξ does not jump to −∞. Using that ξ t > −∞ for any t ∈ [0, ∞), from(4.3) we see thatS explodes at T 0 , that isS(T 0 ) = ∞. Since S andS only differ by the factor α(α − 1)Γ(α), it also holds that S(T 0 ) = ∞ so that X S −1 (t) (1) > 0 for all t ≥ 0. We can now show how to construct on a pathwise level the Beta-Fleming-Viot processes with mutations the measure-valued branching process. 
Step 1: We have To verify the third equality, first note that due to Lemma II.2.18 of [25] the compensation can be split from the martingale part and then can be canceled by the compensator integral since integrating-out the y-variable yields To replace the jumps governed by the PPP N by jumps governed by M note that by the definition of M we find, for suitable test-functions h, the almost sure transfer identity or in an equivalently but more suitable form , y X r− (1) N (dr, dx, dy) Since the integrals are non-compensated we actually defined M in such a way that the integrals produce exactly the same jumps. Let us now rewrite the equation found for R in such a way that (4.5) can be applied: The stochastic integral driven by N can now be replaced by a stochastic integral driven by M via (4.5): By monotonicity in v, R t (v) ≤ 1 so that the du-integral in fact only runs up to 1 and the second indicator can be skipped: This is precisely the equation we wanted to derive. Step 2: The proof is complete if we can show that the restriction , y X S(r)− (1) N (dr, dx, dy) which, by predictable projection and change of variables, equals , y X S(r) (1) c α x −1−α dr dx dy . Now we substitute the three variables r, x, y (in this order), using C α = 1 α(α−1)Γ(α) c α for the substitution of r and the identity for the substitution of x to obtain Proof of Theorem 1.2 Let us briefly outline the strategy for the proof: In order to show that the measurevalued process Y, P-a.s., does not posess times t for which Y t has finitely many atoms, by Theorem 2.7 it suffices to show that P-a.s. the same is true for the measure-valued branching process X. In order to achieve this, it suffices to deduce the same property for the Pitman-Yor type representation up to extinction, i.e. we need to show that Interestingly, this turns out to be easier due to a comparison property that is not available for Y. We start the proof with a technical result on the covering of a half line by the shadows of a Poisson point process defined on some probability space (Ω, G, G t , P ). Suppose (s i , h i ) i∈I are the points of a Poisson point process Π on (0, ∞) × (0, ∞) with intensity dt ⊗ Π (dh). For a point (s i , h i ) we define the shadow on the half line R + by (s i , s i + h i ) which is precisely the line segment covered by the shadow of the line segment connecting (s i , 0) and (s i , h i ) with light shining in a 45 degrees angle from the above left-hand side. Shepp proved that the half line R + is almost surely fully covered by the shadows induced by the points (s i , h i ) i∈I if and only if The reader is referred to the last remark of [38]. For our purposes we need the following variant: i.e. almost surely every point of R + is covered by the shadows of infinitely many line segments. Proof. The proof is an iterated use of Shepp's result for the sequence of restricted Poisson point processes Π k obtained by removing all the atoms (s i , h i ) with h i > 1 k from Π, i.e. restricting the intensity measure to [0, 1 k ]. Since Shepp's criterion (5.2) only involves the intensity measure around zero, the shadows of all point processes Π k cover the half line. Consequently, if there is some t > 0 such that t is only covered by the shadows of finitely many points (s i , h i ) ∈ Π, then t is not covered by the shadows generated by Π k for some k large enough. But this is a contradiction to Shepp's result applied to Π k . Now we want to apply Shepp's result to the Pitman-Yor type representation. 
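Let us also record, for later use, the form in which Shepp's criterion (5.2) is usually stated; the exact normalisation is an assumption here and should be checked against [38], and we write μ for the measure on heights denoted Π′ above. For a Poisson point process with intensity dt ⊗ μ(dh) on (0,∞) × (0,∞), the shadows (s_i, s_i + h_i) cover the half line R_+ almost surely if and only if
\[
\int_0^1 \exp\Bigl( \int_t^1 \mu\bigl((u,\infty)\bigr)\, du \Bigr)\, dt \;=\; +\infty .
\]
As an illustration (this is the computation that reappears in Section 6), for μ(dh) = θ h^{-2} dh one has μ((u,∞)) = θ/u, the inner integral equals θ log(1/t), and the outer integral becomes ∫_0^1 t^{-θ} dt, which diverges precisely when θ ≥ 1.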
We want to prove that (5.1) holds for any θ > 0. Let us set for all > 0 Then it is clearly enough to prove that for all > 0 In order to connect the covering lemma with the question of exceptional times, we use the comparison property of the Pitman-Yor representation to reduce the problem to the process Z explicitly defined by it is obvious by the definition of Z and Z that We are now prepared to prove our main result. Proof of Theorem 1.2. Due to (5.4) we only need to show that almost surely v → Z t (v) has infinitely many jumps for all t > 0 and arbitrary > 0. To verify the latter, Lemma 5.1 will be applied to a PPP defined in the sequel. If Π denotes the Poisson point process with atoms (s i , w i , u i ) i∈I from which Z t (v) is defined, then we define a new Poisson point process Π l via the atoms where (w) := inf{t > 0 : w t = 0} denotes the length of the trajectory w. In order to apply Lemma 5.1 we need the intensity of Π l . Using the definition of Q and the Laplace transform duality (2.8) with the explicit form we find the distribution Differentiating in h shows that Π l is a Poisson point process on R + × R + × R + with intensity measure Plugging-in the new definitions leads to There is one more simplification that we can do. Let us define Π l, as a Poisson point process on (0, ∞) × (0, ∞) with intensity measure then by the properties of Poisson point processes we have the equality in law Then (5.6) yields Now we are precisely in the setting of Shepp's covering results and the theorem follows from Lemma 5.1 if (5.2) holds. Shepp's condition can be checked easily for Π l, for (5.7) independently of θ and . A Proof of Schmuland's Theorem In this section we sketch how our lines of arguments can be adopted for the continuous case corresponding to α = 2. The proofs go along the same lines (reduction to a measurevalued branching process and then to an excursion representation for which the covering result can be applied) but are much simpler due to a constant immigration structure. The crucial difference, leading to the possibility of exceptional times, occurs in the final step via Shepp's covering results. Proof of Schmuland's Theorem 1.1. We start with the continuous analogue to Theorem 2.2. Suppose W is a white-noise on (0, ∞) × (0, ∞), then one can show via the standard Yamada-Watanabe argument that there is a unique strong solution to In fact, since the immigration mechanism g is constant, pathwise uniqueness holds. For every v ∈ [0, 1], (X t (v)) t≥0 satisfies for a Brownian motion B. Recalling (2.2), we see that (6.1) is a measure-valued process with branching mechanism ψ(u) = u 2 and constant-rate immigration. The Pitman-Yor type representation corresponding to Theorem 2.3 looks as follows: in the setting of Section 2.2, we consider a Poisson point process (s i , u i , w i ) i on R + × R + × D with intensity measure (δ 0 (ds)⊗F (du)+ds⊗I(du))⊗Q s (dw), where the excursion measure Q is defined via the law of the CSBP (2.2) with branching mechanismψ(λ) = λ 2 . Then the analog of Theorem 2.3 is the following: can be shown to solve (6.1); this result, for fixed v, goes back to Pitman and Yor [36]. The calculation (5.5), now using that u t (λ) = λ −1 + t −1 is the unique non-negative solution For the analogue for Theorem 2.7 we define now the process with S(t) = t 0 X s (1) −1 ds. It then follows again from the self-similarity that R is welldefined and from Itō's formula that R is a standard Fleming-Viot process on [0, 1]. 
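Written out in full, the quadratic cumulant quoted above is presumably to be read as u_t(λ) = (λ^{-1} + t)^{-1}. The following one-line check confirms that it is indeed the unique non-negative solution of the cumulant equation for the branching mechanism λ ↦ λ² used in this section:
\[
u_0(\lambda)=\lambda, \qquad
\frac{\partial}{\partial t}\bigl(\lambda^{-1}+t\bigr)^{-1} = -\bigl(\lambda^{-1}+t\bigr)^{-2} = -\,u_t(\lambda)^2 ,
\]
and u_t(λ) ≥ 0 for all t ≥ 0 and λ ≥ 0.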
The arguments here involve a continuous SDE which has been studied in [12]: where W is a white-noise on (0, ∞) × (0, 1). It was shown in Theorem 4.9 of [12] that the measure-valued process Y associated with (Y t (v), t ≥ 0, v ∈ [0, 1]) solves the martingale problem for the infinitely many sites model with mutations, i.e. Y has generator (1.1) with the choice (1.2) for A. Finally, in order to prove Schmuland's Theorem 1.1 on exceptional times it suffices to prove the same result for (6.2). We proceed again via Shepp's covering arguments as we did in Section 5. The crucial difference is that the immigration is already constant θ so that (5.3) becomes superfluous. The role of the Poisson point process Π l, is played by Π θ,l with intensity measure Π θ,l (dt, dh) = dt ⊗ θ h 2 dh. Plugging into Shepp's criterion (5.2), by Lemma 5.1 and we find that there are no exceptional times if θ ≥ 1. Conversely, let us assume θ < 1. Recalling that for θ = 0 the Fleming-Viot process has almost surely finitely many atoms for all t > 0, we see that the first term in (6.2) almost surely has finitely non-zero summands for all t > 0. Hence, it suffices to show the existence of exceptional times for which the second term in (6.2) vanishes. Arguing as before, this question is reduced to Shepp's covering result applied to Π θ,l : (6.4) combined with (5.2) leads to the result. Proof of Corollary 2.8 The fact that the (α, θ)-Fleming-Viot process (Y t , t ≥ 0) converges in distribution to its unique invariant distribution and that this invariant distribution is not trivial (i.e. it charges measures with at least two atoms) seems to be one of those folklore results for which it is hard to point at a precise reference (however, the existence and unicity of the invariant measure of (Y t , t ≥ 0) is proved in [32]). Here we sketch an argument that relies on the so-called lookdown construction of (Y t , t ≥ 0). The lookdown construction was introduced by Donnelly and Kurtz in [13] and later expanded in [14] by the same authors. The case of Fleming-Viot processes with mutations (in the infinite site model) was treated by Birkner al. [9]. Let us very briefly describe how the lookdown construction works (for more details we refer to [9]). The idea is to construct a sequence of processes (ξ i (t), t ≥ 0), i = 1, 2, . . . which take their values in the type-space E (here E = [0, 1]). We say that ξ i (t) is the type of the level i at time t. The types evolve by two mechanisms : -lookdown events: with rate x −2 Λ(dx) a proportion x of lineages are selected by i.i.d. Bernoulli trials. Call i 1 , i 2 , . . . the selected levels at a given event at time t. Then, ∀k > 1, ξ i k (t) = ξ i 1 (t−), that is the levels all adopt the type of the smallest participating level. The type ξ i k (t−) which was occupying level i k before the event is pushed up to the next available level. -mutation events: On each level i there is an independent Poisson point process (t (i) j , j ≥ 1) of rate θ of mutation events. At a mutation event t (i) j the type ξ i (t (i) j −) is replaced by a new independent variable uniformly distributed on [0, 1] and the previous type is pushed up by one level (as well as all the types above him). The point is then that exists simultaneously for all t ≥ 0 almost surely and that (Ξ t , t ≥ 0) = (Y t , t ≥ 0) in distribution. Fix n ∈ N, and define a process (π t , t ≥ 0) with values in the partitions of {1, 2, . . .} by saying that i ∼ j for π (n) t if and only ξ i (t) = ξ j (t). 
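The rates implicit in the lookdown events can be made explicit; the following is a sketch based only on the Bernoulli selection described above (mutation events are not included and act separately at rate θ on each level), and λ_{b,k} denotes the usual Λ-coalescent rate. At an event of the point process with intensity x^{-2}Λ(dx), a prescribed set of k among the first b levels participates, and the remaining b − k do not, at rate
\[
\lambda_{b,k} \;=\; \int_0^1 x^{k}(1-x)^{b-k}\, x^{-2}\,\Lambda(dx)
\;=\; \frac{B(k-\alpha,\; b-k+\alpha)}{B(2-\alpha,\;\alpha)}, \qquad 2 \le k \le b,
\]
for Λ the Beta(2−α, α) distribution. These are the familiar Λ-coalescent rates; beyond their finiteness, which guarantees that the partition-valued process just defined jumps at finite rates once restricted to finitely many levels, they will not be needed in what follows.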
It is well known that this is an exchangeable process. Recall from Corollary 2.5 that for each t ≥ 0 fixed, Ξ t is almost surely purely atomic. Alternatively this can be seen from the lookdown construction since at a fixed time t > 0, the level one has been looked down upon by infinitely many level above since the last mutation event on level one. We can thus write where the a i are enumerated in decreasing order. It is also known that the sequences (a i (t), i ≥ 1) of atom masses and (x i (t), i ≥ 1) of atom locations are independent. The a i (t) are the asymptotic frequencies of the blocks of π(t) which are thus in one-to-one correspondence with the atoms of Ξ t . Furthermore the sequence (x i (t), i ≥ 1) converges in distribution to a sequence of i.i.d random variables with common distribution I because all the types that were present initially have been replaced by immigrated types after some time. To see this note that after the first mutation on level 1, the type ξ 1 (0) is pushed up to infinity in a finite time which is stochastically dominated by the fixation time of the type at level 1 in a Beta Fleming-Viot without mutation. This also proves the second point of the corollary. For each n ≥ 1, let us consider π (n) (t) = π |[n] (t) the restriction to {1, . . . , n} of π(t). Then, for all n ≥ 1, the process (π (n) t , t ≥ 0) is an irreducible Markov process on a finite state-space and thus converges to its unique invariant distribution. This now implies that (π(t), t ≥ 0) must also converges to its invariant distribution. By Kingman continuity Theorem (see [35,Theorem 36] or [4,Theorem1.2]) this implies that the ordered sequence of the atom masses (a i (t)) converges in distribution as t → ∞. Because (x i (t), i ≥ 1) also converges in distribution this implies that Ξ t itself converges in distribution to its invariant measure. Furthermore it is also clear that the invariant distribution of (π (n) t , t ≥ 0) must charge configurations with at least two non-singleton blocks. Since π is an exchangeable process, so is its invariant distribution. Exchangeable partitions have only two types of blocks: singletons and blocks with positive asymptotic frequency so this proves that the invariant distribution of π charges partition with at least two blocks of positive asymptotic frequency.
2013-04-04T12:14:13.000Z
2013-04-04T00:00:00.000
{ "year": 2014, "sha1": "9fd71055803821d75aeb4c1bd313824d87bf9e28", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s40072-014-0026-6.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "9fd71055803821d75aeb4c1bd313824d87bf9e28", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
233438126
pes2o/s2orc
v3-fos-license
Commentary on: Combination of Metabolic Intervention and T Cell Therapy Enhances Solid Tumor Immunotherapy Metabolism is a common cellular feature. Cancer creates a suppressive microenvironment resulting in inactivation of antigen-specific T cells by metabolic reprogramming. Development of approaches that enhance and sustain physiologic properties of T cell metabolism to prevent T cell inactivation and promote effector function in the tumor microenvironment is an urgent need for the improvement of cell-based cancer immunotherapies. Metabolism is a key mechanism that shapes the properties of the TME. Cancer creates a hostile metabolic microenvironment through various mechanisms, including nutrient deprivation, metabolic competition, hypoxia, lactate production and oxidative stress [10,11]. These conditions have a significant impact on the function of T cells residing in proximity to cancer [12,13]. Metabolic reprogramming of T cells is imperative for their differentiation and function. This process occurs in a well-coordinated and temporally defined manner and is mandatory for T cell activation and differentiation. While naïve and memory T cells rely mostly on oxidative metabolism, where lipids have a central role, glycolysis has an indispensable role in the generation and expansion of T effector cells [14]. In addition to being involved in T cell metabolism, lipids have a distinct role in antigen-mediated signal initiation. Activation of T cell signaling requires TCR clustering in the lipid rafts and formation of the immune synapse, processes that rely on intramembranous cholesterol and are highly dependent on the density and availability of lipids at the plasma membrane [15]. T cells generate such membrane lipids during anabolic metabolism initiated by antigen-mediated activation [16]. In the TME, cancer-mediated coordinated metabolic switches modulate T cell metabolic properties and cellular activities and compromise lipid metabolism [17]. As a consequence, T cells lose their ability to synthesize lipids that regulate clustering of signalosomes and transmission of TCR-mediated signals, and they become unable to respond to stimulation by cancer-associated antigens. This is a key mechanism of cancer-mediated immune escape leading to cancer progression. A recent study attempted to target this specific step of T cell activation by employing a novel approach to enhance initiation of T cell signaling in order to overcome the detrimental challenges of the TME [18]. Because TCR clustering and stabilization of the immunological synapse rely on intramembranous cholesterol, inhibition of cholesterol esterification enzymes, which increases the levels of membrane cholesterol, improves T cell activation and effector function [19]. To recapitulate this process, the authors engineered the metabolism-modulating drug Avasimibe (Ava), an inhibitor of acetyl-CoA acetyltransferase 1 (ACAT1) [20], to sustain its presence at the plasma membrane. After implementation of a cell surface anchor-engineering technology facilitated by the insertion of tetrazine (Tre) groups in the plasma membrane [21], liposomal Ava containing bicyclo[6.1.0]nonyne (BCN) was successfully retained at the T cell surface (Figure 1A). This approach inhibited cholesterol esterification and enhanced the fraction of cholesterol present at the cell membrane. Notably, cell surface-anchored Ava (T-Tre/BCN-Lipo-Ava) did not alter T cell viability, survival, activation, basal metabolism or chemotaxis.
Instead, T-Tre/BCN-Lipo-Ava increased plasma membrane cholesterol, amplified the clustering of TCRs, and resulted in the formation of an enhanced and stabilized immunological synapse. This led to more efficient TCR downstream signaling and production of IL-2, IFNγ and TNFα, ultimately elevating T cell anti-tumor activity (Figure 1B). More importantly, TCR-transgenic and CAR-T cells engineered to carry Tre/BCN-Lipo-Ava exhibited greater anti-tumor capacity than unmodified T cells. This was demonstrated by their higher tumor cell-killing capacity when co-cultured with melanoma cells in vitro, and by delayed tumor progression in vivo after adoptive transfer into melanoma- or glioblastoma-bearing mice [18]. This study demonstrates for the first time that this cell-surface anchor-engineering approach provides the opportunity to use high concentrations of metabolism-modulating drugs to target specifically and directly antigen-specific T cells prior to adoptive transfer. This approach minimizes the non-specific effects that might arise from systemic administration of a drug given with the purpose and hope of targeting T cells of the TME. It provides a sustained metabolism-modifying impact that overcomes the detrimental implications of the TME on the specific module of T cell metabolism that governs TCR signal transduction. Importantly, this seems to have a long-lasting impact on T cell function. The recent success in anchoring an active drug at the cell plasma membrane opens new avenues for therapeutic intervention in a cell-specific manner. This approach is particularly promising in the context of cell-based therapies, where ex vivo or in vitro modification prior to adoptive transfer is feasible. Of note, although in vitro metabolic modulation of T cells before adoptive transfer has been previously proposed and tested, this approach is limited by the metabolically hostile TME, which is capable of altering the intrinsic metabolic state of infiltrating T cells. Anchoring a metabolism-modulating drug in tumor-specific T cells prior to adoptive transfer might overcome this limitation and allow therapeutic exploitation of metabolism for improvement of T cell function in cancer therapy. Several challenges and questions remain to be answered. For example: how stable is the maintenance of a membrane-anchored engineered compound? Do engineered T cells undergo unimpeded divisions after antigen encounter despite the modification? Do daughter cells preserve the engineered compound on their cell surface, and at what levels? Can engineered T cells undergo differentiation to T memory cells after exposure to tumor-associated antigens in the TME and survive beyond the effector phase to provide immune surveillance that prevents cancer relapse? Can such modifications be implemented for therapeutic alteration of intracellular molecules, such as enzymes or transcription factors, that imprint distinct metabolic and functional fates in T cells? Despite the many unanswered questions, the new technology provides hope for a new era in the generation of engineered antigen-specific T cells for improvement of cell-based immunotherapies in cancer.
2021-04-28T13:40:23.588Z
2021-03-31T00:00:00.000
{ "year": 2021, "sha1": "3eb3c0616efbf0bc51b18fe196dee308453a1b6b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.20900/immunometab20210016", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a0f901a14f41bab850b70599efb03161fe948a3", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119140381
pes2o/s2orc
v3-fos-license
An asymptotic approach in Mahler's method We provide a general result for the algebraic independence of Mahler functions by a new method based on asymptotic analysis. As a consequence of our method, these results hold not only over $\mathbb{C}(z)$, but also over $\mathbb{C}(z)(\mathcal{M})$, where $\mathcal{M}$ is the set of all meromorphic functions. Several examples and corollaries are given, with special attention to nonnegative regular functions. Introduction Mahler's method is a method in number theory wherein one answers questions surrounding the transcendence and algebraic independence of both functions F (z) ∈ C[[z]], which satisfy the functional equation (1) a 0 (z)F (z) + a 1 (z)F (z k ) + · · · + a d (z)F (z Questions and results concerning the transcendence of Mahler functions and their special values were studied in depth by Mahler in the late 1920s and early 1930s [28,29,30], though the study of special Mahler functions dates back to at least the beginning of the XXth century with the publication of Whittaker and Watson's classic text, "A Course of Modern Analysis" [42,Section 5·501]. Therein the Mahler function n 0 z 2 n is presented as an example of a function having the unit circle as a natural boundary. Mahler's early results focused on degree-1 Mahler functions, his most famous result in this area being the transcendence of the Thue-Morse number T (1/2), which is a special value of the function T (z) satisfying T (z) − (1 − z)T (z 2 ) = 0. According to Waldschmidt [41], after Mahler's initial results his method was forgotten; the resurgence waited nearly forty years, following the publication of Mahler's paper "Remarks on a paper of W. Schwarz" [27] in 1969. Mahler's method was then extended by Kubota, Loxton, Ke. Nishioka, Ku. Nishioka, and van der Poorten among others; see [18,19,20,21,22,23,24,25,26,31,33,34,35,36,37], though this list is certainly not exhaustive. Much of the continuing interest is connected with the fact that if the sequence {f (n)} n 0 is output by a deterministic finite automaton, then its generating function F (z) = n 0 f (n)z n is a Mahler function. Arguably, the most celebrated result in this area is due to Ku. Nishioka [35], who proved that if F 1 (z), . . . , F d (z) are components of a vector of Mahler functions with algebraic coefficients satisfying (2), then for all but finitely many algebraic numbers α in the common disc of convergence of F 1 (z), . . . , F d (z), we have Ku. Nishioka's result fully reveals the heart of Mahler's method, one can obtain an algebraic independence result for the special values of Mahler functions by producing the result at the function level. Of course, to gain full use of this theorem, one must produce a function-level result. While there are several results concerning specific functions of degrees 1 and 2 (see in particular the recent work of Bundschuh and Väänänen [7,8,9,10,11,12,13]), there is a lack of general results for the algebraic independence of Mahler functions. For degree-1 Mahler functions, general results have been given by Kubota [19] and Ke. Nishioka [32], though the criteria they provide can be quite hard to check, making their results difficult to apply. In this paper, we provide a general algebraic independence result for Mahler functions of arbitrary degree. Our result is based on properties of the eigenvalues of Mahler functions, a concept we recently introduced with Bell [3] in order to produce a quick transcendence test for Mahler functions. 
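For the reader's convenience we record the shape of equation (1) and of the associated characteristic polynomial in full; this is the standard set-up, stated here as a sketch under the usual conventions, and it is consistent with the examples computed later in the paper (p_S(λ) = λ − 3 for the Stern function and p_B(λ) = λ² − λ − 1 for the Baum–Sweet function). A k-Mahler function F(z) ∈ C[[z]] of degree d satisfies
\[
a_0(z)\,F(z) + a_1(z)\,F(z^{k}) + \cdots + a_d(z)\,F\bigl(z^{k^{d}}\bigr) = 0,
\qquad a_i(z) \in \mathbb{C}[z], \quad a_0(z)\,a_d(z) \neq 0,
\]
and, writing a_i := a_i(1), its characteristic polynomial is
\[
p_F(\lambda) := a_0\,\lambda^{d} + a_1\,\lambda^{d-1} + \cdots + a_d .
\]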
To formalise this notion here, suppose that F (z) satisfies (1), set a i := a i (1), and form the characteristic polynomial of F (z), In the above-mentioned work with Bell, we showed that if p F (λ) has d distinct roots, then there exists an eigenvalue λ F with p F (λ F ) = 0, which is naturally associated to F (z). We use the term 'eigenvalue' to denote the root of a characteristic polynomial. Our first result is the following. ] be k-Mahler functions convergent in the unit disc for which the eigenvalues λ F1 , . . . , λ F d exist, and let M denote the set of meromorphic functions. If k, λ F1 , . . . , λ F d are multiplicatively independent, then In particular, the functions F 1 (z), . . . , F d (z) are algebraically independent over C(z). As the title of this paper suggests, our results are obtained by an asymptotic argument. Indeed, the ability to include meromorphic functions in Theorem 1 is a by-product of our method being analytic and not heavily dependent on algebra. Though our result adds general meromorphic functions in the context of algebraic independence, the comparison of Mahler functions with meromorphic functions is not new. Bézivin [5] showed that a Mahler function that satisfies a homogeneous linear differential equation with polynomial coefficients is necessarily rational. Taking this further, in his thesis (and unpublished otherwise), Randé [40] proved that a Mahler function is either rational or has a natural boundary; see our paper with Bell and Rowland [4] for a more recent proof of this result. While Theorem 1 is quite general, if we focus on a certain subclass of Mahler functions, the nonnegative k-regular functions, we can remove the existence assumption on the eigenvalues λ Fi for i = 1, . . . , d. An integer-valued sequence {f (n)} n 0 is called k-regular provided there exist a positive integer d, a finite set of matrices {A 0 , . . . , A k−1 } ⊆ Z d×d , and vectors v, w ∈ Z d such that 0 is the base-k expansion of n. The notion 1 of k-regularity is due to Allouche and Shallit [1], and is a direct generalisation of automaticity; in fact, a k-regular sequence that takes finitely many values can be output by a deterministic finite automaton. We call the generating function F (z) = n 0 f (n)z n of a kregular sequence {f (n)} n 0 , a k-regular function (or just regular, when the k is understood). Establishing the relationship to Mahler functions, Becker [2] proved that a k-regular function is also a k-Mahler function. In order to prove an algebraic independence result for regular functions, we prove the following result on the asymptotics of k-regular sequences. Theorem 2. Let k 2 be a integer and {f (n)} n 0 be a nonnegative integer-valued k-regular sequence, which is not eventually zero. Then there is a real number α f 1 and a nonnegative integer m f such that as N → ∞, We stress that Theorem 2 provides the existence of a constant α f , which essentially takes the place of the Mahler eigenvalue for regular functions. We use these asymptotics to give the following result for k-regular functions. In particular, the functions F 1 (z), . . . , F d (z) are algebraically independent over C(z). Theorems 1 and 3 have some interesting corollaries; we list three here. The first concerns the derivatives of regular functions. Corollary 4. Let k 2 be an integer and F (z) be a k-regular function with k and α f multiplicatively independent. 
If n 1 and n 2 are any two distinct nonnegative integers, then The next corollary demonstrates that Theorems 1 and 3 can be used to give results for infinite sets of functions. Corollary 5. Let p be an odd prime, let Φ p (z) be the pth cyclotomic polynomial, let k 2 be an integer, and set F p (z) := n 0 Φ p (z k n ). Then the functions with indices odd primes p coprime to k, are algebraically independent over C(z). The functions considered in Corollary 5 were recently studied by Duke and Nguyen [16]. Our last corollary in this Introduction concerns the algebraic independence of Mahler functions of different degrees. and F (z) be the function of Dilcher and Stolarsky [15], which has 0, 1-coefficients Corollary 6 holds also for any pair of derivatives of S(z) and F (z). We note that the algebraic independence over C(z) of the Dilcher-Stolarsky function F (z) and its derivative F (z) follows from our recent joint work with Brent and Zudilin [6]. The remainder of this paper is organised as follows. In Section 2, we prove Theorem 1. Section 3 contains the proofs of Theorems 2 and 3. In the final section, we present an extended 'illustrative' example as well as a few corollaries and questions. Algebraic independence of Mahler functions In this section, we prove Theorem 1. Our proof relies heavily on the use of the radial asymptotics of Mahler functions as z approaches various roots of unity. In joint work with Bell, we recently provided the initial case of the more general result to follows here (see Theorem 8). We record the special case here as a proposition. [3]). Let F (z) be a k-Mahler function satisfying (1) whose characteristic polynomial p F (λ) has d distinct roots. Then there is an Proposition 7 (Bell and Coons where log k denotes the principal value of the base-k logarithm and C F (z) is a realanalytic nonzero oscillatory term, which on the interval (0, 1) is bounded away from 0 and ∞, and satisfies C F (z) = C F (z k ). For the purposes of transcendence, Proposition 7 is enough; this was the purpose of our joint work with Bell [3]. To gain algebraic independence results, we additionally require the asymptotics as z approaches a general root of unity of degree k n for any n 0. Concerning these asymptotics, we give the following result. Theorem 8. Let F (z) be a k-Mahler function satisfying (1) whose characteristic polynomial p F (λ) has d distinct roots and let ξ be a root of unity of degree k n for some n 0. Then as z → 1 − , there is an integer m ξ and a nonzero number Λ F (ξ) such that Proof. If ξ 1 is a root of unity of degree k, then using the functional equation (1) and Proposition 7, where we have used the fact that as z → 1 − , the rational function in (4) can be written for some nonzero complex number Λ F (ξ 1 ) and some integer m 1 that depends on ξ 1 and the polynomials a i (z), for i = 0, . . . , d. The fact that d j=1 a0(ξ1z) λ −j F = 0 follows from the assumption that the characteristic polynomial p F (λ) has d distinct roots. Note that we can continue this process iteratively for any root of unity ξ of degree k n , as z → ξ radially. The result is now a direct consequence of the above argument with the additional realisation that for roots of unity ξ of degree k n with n large enough, a i (ξ) = 0 for i = 0, . . . , d. For a special case of Theorem 8, see our recent work with Brent and Zudilin [6, Theorem 3 and Lemma 5], in which we used radial asymptotics to extend the work of Bundschuh and Väänänen [9]. 
With these asymptotic results established, we may now prove Theorem 1. Proof of Theorem 1. Towards a contradiction, assume the theorem is false, so that we have an algebraic relation is identically zero. Moreover, without loss of generality, we may suppose that the polynomial m p m (z, w 1 . . . , w s )y m1 Pick a z 0 ∈ (0, 1) and note that as z → 1 − along the sequence {z k m 0 } m 0 , for ξ any root of unity of degree k n , with n large enough, in the notation of Theorem 8, we have where |m| = m 1 + · · · + m d , m is an integer, and C m = 0 depends on the choice of z 0 , but is independent of ξ and z. Let M max ⊆ M be the (nonempty) set of indices m = (m 1 , . . . , m d ) such that the quantity δ := m 1 log k λ F1 + · · · + m d log k λ F d + |m| · m is maximal. We claim that the set M max contains only one element. To see this, suppose that m, m ∈ M max . Then Since the numbers k, λ F1 , . . . , λ F d are multiplicatively independent, the numbers log k λ F1 , . . . , log k λ F d , m are linearly independent. Thus m i = m i for each i ∈ {1, . . . , d}, and we have m = m . Using the uniqueness of the term of index m max with maximal asymptotics, we multiply the algebraic relation (5) by (1 − z) δ and send z → 1 − along the sequence {z k m 0 } m 0 . Then for a root of unity ξ of degree k n , for n large enough and for which G 1 (ξ), . . . , G s (ξ) each exist, we gain the equality This implies that p mmax (ξ, G 1 (ξ) . . . , G s (ξ)) = 0, for each choice of such ξ. Since there are infinitely many such ξ that are dense on the unit circle and p mmax (z, G 1 (z), . . . , G s (z)) is a meromorphic function, it must be that p mmax (z, G 1 (z), . . . , G s (z)) = 0 identically, contradicting our original assumption. Algebraic independence of regular functions In this section, we prove Theorems 2 and 3. As stated in the Introduction, focusing on the subclass of nonnegative regular functions allows us a bit more freedom in the results. While we will still use the fact that regular functions F (z) are Mahler functions, we no longer require the existence of the eigenvalue λ F . For nonnegative regular functions, the role of the eigenvalue will be played by a different constant α f , some properties of which are discussed in what follows. To establish Theorem 2, we require a few preliminary results, the first of which separates out a special linear recurrent subsequence of a regular sequence. Lemma 9. If f is a k-regular sequence, then {f (k )} 0 is linearly recurrent. Proof. Recalling the definition of regular sequences in the Introduction, let {A 0 , . . . , A k−1 }, v, and w be such that f (m) = w T A i0 · · · A is v, where (m) k = i s · · · i 0 is the base-k expansion of m. Then we have which proves the lemma. Though unneeded for our purposes, it is worth noting that one may strengthen the above lemma to show that for any choice of n and r, the sequence {f (k n + r)} 0 is linearly recurrent. We require the following classical result of Allouche and Shallit [1, Theorem 3.1]. Proposition 10 (Allouche and Shallit [1]). Let k 2 be an integer. Then the set of k-regular sequences is closed under (Cauchy) convolution. In particular, if {f (n)} n 0 is k-regular, then so is the sequence {g(n)} n 0 , where By applying Lemma 9 and Proposition 10, we now prove Theorem 2. Proof of Theorem 2. Combining Lemma 9 and Proposition 10, we have that σ f (r) := n k r f (n) is linearly recurrent. Further, since {f (n)} n 0 is nonnegative, the sequence {σ f (r)} r 0 is increasing. 
Thus using the eigenvalue representation of the linear recurrence σ f (r), as r → ∞, we have (1)), for some integer m f 0 and α f 1. The lower bound on α f follows as σ f (r) is increasing and integer-valued. The result of the theorem now follows quite quickly. To see this, let N be large enough. Then N ∈ (k r , k r+1 ] for r = log k N . So for any ε > 0, Using the trivial upper and lower bounds log k N − 1 r < log k N, we then have which finishes the proof of the lemma. While the statement of Theorem 2 is very precise, we use it in a less technical way; we need only the fact that In order to prove Theorem 3, we determine an asymptotic result that mimics Theorem 8 for nonnegative regular functions, making sure to avoid the need of a Mahler eigenvalue. As in the proof of Theorem 2 above, nonnegativity remains an important assumption. Proposition 11. Let k 2 be an integer and {f (n)} n 0 be a nonnegative integervalued k-regular sequence, which is not eventually zero. Let α f 1 be as given by Theorem 2. If F (z) = n 0 f (n)z n , then for any ε > 0, as z → 1 − , Proof. Let k 2 be a integer and {f (n)} n 0 be a nonnegative integer-valued k-regular sequence, which is not eventually zero. Set Let α f 1 and m f 0 be as given in Theorem 2. For z ∈ (0, 1) define the function By Theorem 2 and the fact that the series in the denominator is nonzero and differentiable on (0, 1), we have that on (0, 1) the function C(z) is nonzero, differentiable, and bounded above and below by positive constants. Set We continue by finding asymptotic bounds on the function D(z). To this end, note that for any positive real number r, we have where Γ(z) is the Euler Γ-function. By Sterling's formula, we have that It then follows from a classical result of Césaro (see Pólya and Szegő [39, Problem 85 of Part I], that for any given ε > 0, as z → 1 − , Recall C(z) 1, so we have G(z) D(z), and thus (7) holds with D(z) replaced by G(z). The result now follows since F (z) = (1 − z)G(z). In order to simplify our further exposition, we make the following definition. Definition 12. We call a function H(z) an ε-function, if there is an a > 0 such that H(z) is defined on the interval (1 − a, 1), and as z → 1 − , either H(z) is bounded away from zero and infinity, or the function satisfies Corollary 13. Let k 2 be an integer and {f (n)} n 0 be a nonnegative integervalued k-regular sequence, which is not eventually zero. Let α f 1 be as given by Theorem 2. If F (z) = n 0 f (n)z n , then there is an ε-function L(z) such that We now extend Corollary 13 to include all radial limits as z approaches a root of unity of degree k n for n large enough. In this way, the following result is the analogue of Theorem 8 for nonnegative regular functions. Theorem 14. Let k 2 be an integer and {f (n)} n 0 be a nonnegative integervalued k-regular sequence, which is not eventually zero. Let α f 1 be as given by Theorem 2 and set F (z) = n 0 f (n)z n . If ξ is a root of unity of degree k n , with n large enough, then there is an integer d ξ and an ε-function L ξ (z) such that Proof. By Corollary 13, there is a real number α f 1 and an ε-function L 0 (z) such that as z → 1 − , we have Recall that any k-regular function is a k-Mahler function; using this fact, let us suppose that F (z) satisfies (1). Let ξ 1 be a k-th root of unity. Then as z → 1 − , we have for some ε-function L 1 (z) and some integer d 1 ∈ Z, which depends on the polynomials a i (z) (i = 0, . . . , d), α f , and ξ 1 . 
Thus Continuing in this way, if ξ is a root of unity of degree k n , with n large enough so that a i (ξ) = 0 for each i = 0, . . . , d, then there is an ε-function L ξ (z) and an integer d ξ , such that as z → 1 − , which is the desired result. With our asymptotic results in place, we can now prove Theorem 3. Proof of Theorem 3. This proof follows very close our proof of Theorem 1, but with the ε-functions L(z) given by Theorem 14 in place of the functions C(z) from Theorem 8. To start, as in our proof of Theorem 1, and towards a contradiction, assume the theorem is false, so that we have an algebraic relation where the set M ⊆ Z d 0 is finite and none of the polynomials p m (z, G 1 (z), . . . , G s (z)) in C[z][M] is identically zero. Again, without loss of generality, we may suppose that the polynomial m p m (z, w 1 . . . , w s )y m1 1 · · · y m d d in d + s + 1 variables is irreducible. As z → 1 − , for ξ any root of unity of degree k n , with n large enough, in the notation of Theorem 14, we have where |m| = m 1 + · · · + m d , m is an integer, and is an ε-function. Note that L ξ,m (z) is an ε-function since a product of ε-functions is again an ε-function. Let M max ⊆ M be the (nonempty) set of indices m = (m 1 , . . . , m d ) such that the quantity δ := m 1 log k λ F1 + · · · + m d log k λ F d + |m| · m is maximal. Note here that the asymptotic properties of the function L ξ,m (z) do not effect the maximal asymptotics in a way that changes the set M max . Again, we claim that the set M max contains only one element. To see this, suppose that m, m ∈ M max . Then Since the numbers k, λ F1 , . . . , λ F d are multiplicatively independent, the numbers log k λ F1 , . . . , log k λ F d , m are linearly independent. Thus m i = m i for each i ∈ {1, . . . , d} and m = m . Since there are infinitely many such ξ which are dense on the unit circle, and p mmax (z, G 1 (z), . . . , G s (z)) is a meromorphic function, it must be that p mmax (z, G 1 (z), . . . , G s (z)) = 0 identically, contradicting our original assumption. Concluding remarks In this paper, we presented general algebraic independence results for Mahler functions and nonnegative regular functions. We did this by making use of the arithmetic properties of the eigenvalues of Mahler functions, and by the arithmetic properties of certain constants, which we denoted α f , associated to regular sequences {f (n)} n 0 . Before ending our paper, we provide an extended example that illustrates some of the objects that we have considered as well as a few corollaries and questions that may be of interest. This sequence is 2-regular and is determined by the vectors and matrices Further, the generating function S(z) = n 0 s(n)z n is 2-Mahler and satisfies the functional zS(z) − (1 + z + z 2 )S(z 2 ) = 0. The characteristic polynomial for S(z) is linear; it is p S (λ) = λ − 3. Thus λ S = 3. Of course, since S(z) is regular, the constant α s also exists, and in this case α s = λ S = 3. In fact, this equality holds for all regular functions; that is, if F (z) = n 0 f (n)z n is regular and λ F exists, then λ F = α f . To illustrate the effect of these constants, we use two figures. In Figure 1, we have plotted the values of the Stern sequence in the interval [2 15 , 2 16 ]. As the power of two is increased this picture will fill out with more values, but already at these values it is quite stable. Notice that the Stern sequence, while definitely exhibiting structure is quite erratic as well. 
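Because the recurrences and matrices displayed above are easiest to absorb through a small computation, here is a short Python sketch. It assumes the classical diatomic recurrences s(0) = 0, s(1) = 1, s(2n) = s(n), s(2n+1) = s(n) + s(n+1), which are consistent with the functional equation zS(z) − (1 + z + z²)S(z²) = 0 recorded above, and it tabulates the weighted partial sums N^{−log₂3} Σ_{n≤N} s(n) for N in the interval [2^15, 2^16] considered in the figures; the function name stern and the sample points are of course just illustrative choices.

# Stern's sequence and the weighted partial sums N^(-log_2 3) * sum_{n <= N} s(n).
# Assumption: s is the classical Stern diatomic sequence,
#   s(0) = 0, s(1) = 1, s(2n) = s(n), s(2n+1) = s(n) + s(n+1),
# consistent with the Mahler equation z*S(z) = (1 + z + z^2) * S(z^2).
from math import log

def stern(N):
    """Return the list [s(0), s(1), ..., s(N)]."""
    s = [0] * (N + 1)
    if N >= 1:
        s[1] = 1
    for n in range(1, N // 2 + 1):
        if 2 * n <= N:
            s[2 * n] = s[n]
        if 2 * n + 1 <= N:
            s[2 * n + 1] = s[n] + s[n + 1]
    return s

R = 16
s = stern(2 ** R)
for j in range(17):
    # seventeen sample points N in [2^15, 2^16]
    N = 2 ** (R - 1) + j * 2 ** (R - 5)
    weighted = sum(s[: N + 1]) / N ** log(3, 2)
    print(N, round(weighted, 6))

The printed values remain bounded away from 0 and from infinity as N grows, which is the behaviour guaranteed by Theorem 2 with α_s = 3 and the structure plotted in Figure 2 below.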
But if we consider the weighted partial sums N − log 2 3 n N s(n) for N between two large powers of two, a whole other structure arises; see Figure 2. Many results have been proven regarding the transcendence and algebraic independence of the generating function of the Stern sequence, so that providing interesting corollaries of Theorems 1 and 3 using this function is a bit of a challenge. We proved [14] the transcendence of S(z), and Bundschuh [7] extended that result by showing that the derivatives S(z), S (z), S (2) (z), . . . , S (n) (z), . . . are algebraically independent over C(z). But if we throw a wrench in the works, things get more interesting. We can pair the Stern function with other functions and give results which previous methods could not attack. Corollary 6 stated in the Introduction is a good example of this. We give another example here. The Baum-Sweet sequence is given by the recurrences b 0 = 1, b 4n = b 2n+1 = b n , and b 4n+2 = 0. The sequence {b n } n 0 is 2-automatic (and so also 2-regular) and as a consequence of the above relations, its generating function B(z) = n 0 b n z n satisfies the 2-Mahler equation The function B(z) has the characteristic polynomial p B (λ) = λ 2 − λ − 1 and λ B = (1 + √ 5)/2. We thus have the following corollary to Theorem 1. Partial progress has been made in the case of Mahler functions of degree 1 (see Bundschuh [7,8]), though there is not a single known example of a hypertranscendental Mahler function of degree 2 or greater. Towards this question, we can offer the following modest result, which is a corollary to Theorems 1 and 3. Note that Corollary 17 also holds with the addition of any number of meromorphic functions. In this way, it seems natural to consider as well the hypertranscendence of Mahler functions over polynomials in meromorphic functions. 4.3. A question about e and π and Mahler numbers. We end this paper with one more corollary (more a novelty of our method) as well as a question, which we hope will stimulate further research using possibly a combination of algebraic and asymptotic techniques for algebraic independence. Corollary 18. If F (z) is a k-Mahler function for which λ F exists and k and λ F are multiplicatively independent, then tr. deg. C(z) C(z)(F (z), G(z), ζ(z)) = 3, where G(z) is any D-finite function and ζ(z) is Riemann's zeta function. Corollary 18 can be viewed as a sort of classification, which is saying that Mahler functions, functions that satisfy linear differential equations, and the zeta function are very different sorts of functions. It would be extremely interesting interesting if one could prove an algebraic independence result about the special values of such functions. In particular, one would like to answer the following question.
2017-01-13T20:56:21.000Z
2015-11-24T00:00:00.000
{ "year": 2015, "sha1": "2ffb112a9ad8fd4bdc5a42641ac33cdbce0443a4", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2ffb112a9ad8fd4bdc5a42641ac33cdbce0443a4", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
26345866
pes2o/s2orc
v3-fos-license
On geometric properties of passive random advection We study geometric properties of a random Gaussian short-time correlated velocity field by considering statistics of a passively advected metric tensor. That describes universal properties of fluctuations of tensor objects frozen into the fluid and passively advected by it. The problem of one-point statistics of co- and contravariant tensors is solved exactly, provided the advected fields do not reach dissipative scales, which would break the symmetry of the problem. Asymptotic in time duality of the problem is established, which in the three-dimensional case relates the probabilities of the volume deformations into"tubes"and into"sheets". I. INTRODUCTION A problem of passive advection in a turbulent medium attracts considerable attention as a solvable model of turbulence. Exact solutions can be found in a simplified case, when the velocity field is chosen to be a random, short-time correlated Gaussian process. Statistics of density, concentration, passive vectors advected by such a field were investigated by many authors (see, e. g., [1][2][3][4][5][6]), where intermittent nature of the fluctuations, non-trivial scalings of structure functions and anomalous role of the dissipation were discovered. All these features are very common in the general picture of turbulence and, therefore, the problem of passive advection can serve as a model for developing corresponding analytical tools. In the present paper we consider passive advection (in the Lie sense) of a second-rank covariant tensor in ddimensional space. Though our master equation for the probability density function (PDF) of the tensor (Eq. (4) below) is very general, we concentrate mainly on statistics of a symmetric (metric) tensor g ij . One-point statistics of any tensor object frozen into the fluid can be related to statistics of such a tensor. We do not impose any restrictions (such as incompressibility) on the velocity field, and, therefore, statistics in both Eulerian and Lagrangian frames are studied. Also, we are only interested in the "initial stage" of the advection, when the advected field does not reach dissipative scales. This allows us to explore the symmetries of the problem, which are broken when dissipation is included. We show that the probability-density function of the eigenvalues of the metric is governed by a d-particle Hamiltonian that can be split into two non-interacting parts. Its non-universal part describes the motion of the center of mass (the determinant g of the metric) and can be separated from the motion relative to the center of mass, i. e. dynamics of the metric's eigenvalues normalized to their geometrical mean, λ i /g 1/d . The Hamilto-nian of the latter motion is of the Calogero-Sutherland type, remains the same in both Lagrangian and Eulerian frames of reference, and therefore describes the universal properties of the advection. These properties are dictated by the symmetry of the problem. The exact integrability of the Calogero-Sutherland Hamiltonian is known to be related to SL(d) symmetry: the Hamiltonian can be represented as a quadratic polynomial in terms of the generators of the corresponding algebra [5,7,8]. The eigenfunctions of this Hamiltonian are the so-called Jack polynomials, which are symmetric homogeneous functions of the eigenvalues. This allows us to find exactly all moments T m of any tensorT advected by the fluid. 
Indeed, calculating any such moment reduces to averaging expressions of the type Tr k (ĝ n ), which are symmetric polynomials in terms of the metric's eigenvalues, and can therefore be expanded in Jack polynomials of degree nk. We illustrate this method by calculating exactly all moments of passively advected vectors and covectors, in particular, of the magnetic field in kinematic régime and of the passive-scalar gradient. We also demonstrate how this approach works in the general case of a passively advected tensor of any rank. Calculating the moments requires knowing the statistics of the metricĝ with special initial conditions, g ij (t = 0) = δ ij . However, it is also interesting to consider the evolution of the PDF of the symmetric tensor g ij subject to arbitrary initial conditions. In this context, we show that a beautiful dual picture exists: the timedependent PDF of the tensor becomes asymptotically (t → ∞) invariant under the inversion of the eigenvalues with respect to their geometrical mean. For example, in three dimensions, that means that if a magnetic field advected by ideally conducting fluid develops flux tubes, it must develop magnetic sheets with the same probability. The paper is organized as follows. In Section II, we derive the master equation for the PDF of the metric's eigenvalues, and analyze the symmetry properties of this PDF. In Section III, we present a simple method of transforming the PDF between Eulerian and Lagrangian frames, which is important in the case of a compressible velocity field. Section IV discusses general properties of solutions for the PDF in two-and three-dimensional cases. In Section V, we show how the symmetry of the problem allows to calculate all the moments of passively advected tensors. The paper is written in a self-contained manner, all the necessary definitions and derivations are summarized in the Appendices. II. MASTER EQUATION A covariant second-rank tensor field ϕ ij (t, x) passively advected by the velocity field ξ k (t, x) evolves according to the following equation: where ξ k ,i = ∂ξ k /∂x i , and ϕ ij,k = ∂ϕ ij /∂x k . Let ξ i (t, x) be a Markovian Gaussian field: where a is the compressibility parameter, and κ 2 = 1 for simplicity. Here a can vary between −1/(d + 1) for the incompressible flow and 1 for the fully compressible flow. In order to determine the statistics of the tensor, we follow a standard procedure [9,10] and introduce the characteristic function ofφ(t, x): This function is a Fourier transform of the PDF of the matrix elements ϕ ij . Clearly, Z is independent of x due to spacial homogeneity. We find that Z satisfies where d is the dimensionality of space. This equation was derived by taking the time derivative of Z, using Eq. (1), and splitting Gaussian averages. We obtain the equation for the probability density function ofφ by Fouriertransforming (4): The original equation (1) preserves symmetry properties of the tensor ϕ ij , which means that we may restrict our consideration either to advection of symmetric or antisymmetric tensors. Both reductions can be done in a similar fashion. For our present purposes we only consider fluctuations of a symmetric covariant tensor. The corresponding results for a contravariant tensor are summarized in Appendix C. We will use both (co-and contravariant) pictures when discussing statistics of passive vectors in Section V. In the symmetric case, the PDF (5) can be factorized as follows: whereĝ is the symmetric part ofφ. 
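The advection law (1) and the velocity statistics (2) are referenced above, but their displayed forms did not survive extraction. The following is a hedged reconstruction, not quoted from the paper: it assumes only that the tensor is advected in the Lie sense, that κ² = 1, and that a runs from −1/(d+1) (incompressible) to 1 (fully compressible),

```latex
\[
\partial_t \varphi_{ij} + \xi^{k}\varphi_{ij,k}
  + \xi^{k}_{,i}\,\varphi_{kj} + \xi^{k}_{,j}\,\varphi_{ik} = 0,
\qquad
\langle \xi^{i}_{,k}\,\xi^{j}_{,l}\rangle
 = \delta(t-t')\left[\delta^{ij}\delta_{kl}
   + a\left(\delta^{i}_{\,k}\delta^{j}_{\,l}
   + \delta^{i}_{\,l}\delta^{j}_{\,k}\right)\right].
\]
```

As a consistency check on this guess, contracting the gradient correlator over (i,k) and (j,l) gives d[1 + a(d+1)] δ(t−t'), which vanishes exactly at a = −1/(d+1) and reproduces the factor γ = d[1 + a(d+1)] quoted in Section III below.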
One may think of the tensorĝ as of a metric associated with the medium. Due to spacial isotropy,P depends only on the eigenvalues λ 1 , . . . , λ d ofĝ. After rather cumbersome but essentially simple calculations, we establish the following master equation for the PDF of the eigenvalues of the metric: (from here on the overtildes are dropped). Among the solutions of this equation, those corresponding to the PDF must be non-negative, finite, and normalizable. The normalization is as follows [11]: Clearly, the original stochastic equation (1) preserves the signature of the metric. We will restrict ourselves to the case of all positive λ's. Since there is no means of distinguishing between different orderings of the eigenvalues, the PDF must be a symmetric function with respect to all permutations of λ 1 , . . . , λ d . We should now notice that in logarithmic variables z i = log(λ i ), the master equation (7) describes the dynamics of d pair-wise interacting particles on the line. Furthermore, we can consider these dynamics in the reference frame associated with the center of mass of the particles z = 1 d z i . Denoting the coordinates of the particles in this frame ζ i = z i − z, and noticing that det(ĝ) = g = exp(zd), we find that P now satisfies where the d variables ζ 1 , . . . , ζ d are not independent, ζ i = 0. The Hamiltonian remaining after the dynamics of the center of mass are separated, is translationally invariant, therefore the total momentum of the particles (∂P/∂ζ i ) is conserved. The normalization rule now is: where by ζ we denote the set {ζ 1 , . . . , ζ d }. The operator in the square brackets is a Sutherland HamiltonianH S , which is exactly solvable (see, e.g., [8,12,13]; this Hamiltonian appeared in a similar context in [5,7]). The Hamil-tonianH S is the same for co-and contravariant tensors, and in both Eulerian and Lagrangian frames. It is important thatH S is self-adjoint with respect to the measure (10). Its eigenfunctions are the so-called Jack polynomials, that are homogeneous polynomials in exp(ζ i ) and are symmetric with respect to all permutations of ζ i . Their construction is discussed in Appendix B. We will use particular eigenfunctions of this operator in Sec. V. We see that if P is initially chosen in a factorized form, P = P 1 (g)P 2 (ζ 1 , . . . , ζ d ), it will remain so factorized at all times. Thus, the statistics of g are independent of the statistics of the ζ's at all times if they are initially independent. In particular, this property of Eq. (9) allows us to consider separately the PDFs for the determinant of the metric and for the logarithmic quantities ζ i = log(λ i g −1/d ): An additional symmetry emerges in this context: (7) and (10) remain invariant if the coordinates z i of all particles are simultaneously reflected with respect to their center of mass. Such reflection leaves the center of mass intact and reverses the signs of all ζ i , i. e. transforms all λ i into g 2/d /λ i . The origin of this symmetry can be understood if we notice that the master equations for the PDFs of the dimensionless quantities G ik = g −1/d g ik and G ik = g 1/d g ik are the same, although the initial stochastic equations are different. This symmetry leads to nontrivial results for d ≥ 3, and will be considered in Section IV. III. EULERIAN AND LAGRANGIAN PDF'S The equation for the metric-determinant PDF S(t, g) follows from Eq. (9): where we have rescaled time by the factor of γ = d [1 + a(d+1)]. 
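Before turning to the properties of this factor, it may help to collect in one display the change of variables introduced above; this restates definitions already given in the text, followed by a quick check of the inversion symmetry:

```latex
\[
z_i=\ln\lambda_i,\qquad \bar z=\tfrac{1}{d}\sum_i z_i,\qquad
\zeta_i=z_i-\bar z\ \Big(\textstyle\sum_i\zeta_i=0\Big),\qquad
g=\det\hat g=e^{d\bar z},\qquad \lambda_i=g^{1/d}e^{\zeta_i},
\]
```

with the factorized ansatz P = S(t, g) F(t, ζ₁, …, ζ_d). The reflection ζ_i → −ζ_i is then λ_i → g^{2/d}/λ_i. In d = 3, a degenerate configuration (λ, λ, g/λ²) with λ ≪ g^{1/3} (two eigenvalues much smaller than the third) maps to (g^{2/3}/λ, g^{2/3}/λ, λ²/g^{1/3}) (two eigenvalues much larger than the third), with the determinant unchanged; this is the tube/sheet duality mentioned in the abstract, with the identification of which degenerate shape is called a tube left to the paper's conventions.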
This factor is always non-negative and vanishes if the velocity field is incompressible, a = −1/(d + 1), in which case any time-independent function S(g) is a solution. Note that the right-hand side of Eq. (12) becomes a full derivative when multiplied by the Jacobian g (d−1)/2 . The solution of this equation is a log-normal distribution: where we took the initial distribution in the form S(0, g) = δ(g − 1). This result can be simply understood if we note that the determinant g obeys the same equation as ρ 2 , the squared density of the medium. The density satisfies the continuity equation, which can be written in logarithmic form: Since the time increments of ξ k are independent identically distributed random variables, the Central Limit Theorem implies the normal distribution of log ρ. Indeed, either from Eq. (12) or directly from Eq. (14), one can easily establish that the density PDF R(t, ρ) = 2ρ d S(t, ρ 2 ) satisfies ∂ t R = (γ/2) (ρ 2 R) ′′ . So far, we have worked in the Eulerian frame, considering statistics at an arbitrary fixed point x. Now we show how the one-point joint Eulerian and Lagrangian PDF's are related. Let us assume that initially Lagrangian particles are uniformly distributed in space. We denote the Eulerian PDF P E (ρ, ζ; t, x), the Lagrangian PDF P L (ρ, ζ; t, y), where y is the Lagrangian label (initial coordinate of the Lagrangian particle), and ρ = | det(∂y/∂x)| (the density of the medium). The relation between P E and P L can be established from the following: Since the one-point PDF P E (ρ, ζ; t, x) is independent of position (due to spacial homogeneity), we can integrate (15) with respect to x. Also noting that the onepoint PDF P L (ρ, ζ; t, y) is independent of y, we get: Transformation to the Lagrangian frame can also be performed on the level of the original stochastic equations such as (14) with the aid of the stochastic calculus (see, e. g., [11,14]). In our considerations, if we choose intially S(0, g) ∝ δ(g − ρ 2 0 ), we may substitute ρ = √ g in formula (16). We see therefore that only the PDF of g is affected by the transformation between Eulerian and Lagrangian frames. The Lagrangian version of S(g) is: Analogous results for the contravariant case are presented in Appendix C. The log-normal statistics such as (13) and (17) are a signature of this problem, and they will also be present for fluctuations of the eigenvalue ratios in asymptoticallyfree régimes, i. e. where different ratios do not interact with each other [1][2][3][4][5][6]. IV. PDF'S OF EIGENVALUE RATIOS IN TWO AND THREE DIMENSIONS We saw in the previous section that F (t, ζ), the PDF of the ratios λ i /g 1/d , would remain the same in both Eulerian and Lagrangian frames. In this section we analyse the equations for these PDF's in two-and threedimensional cases. Having in mind numerical simulations, we will write these equations using d − 1 independent variables. In the general case such reduction is done in Appendix A. Let us start with the two-dimensional case. It is now convenient to integrate the δ-function in (10) and work with the logarithm of the eigenvalue ratio as a new variable: x = 1 2 log(λ 1 /λ 2 ) = 1 2 (ζ 1 − ζ 2 ). The equation for F (t, x) then becomes As expected, the rhs of Eq. (18) becomes a full derivative when multiplied by the Jacobian J(x) = 2 sinh(x). Note that the differential operator in the right-hand side of Eq. (18) becomes a Legendre operator under the change of variablesx = cosh(x). 
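Two short verification sketches of statements quoted above may be useful; neither is taken from the paper, and both assume the quoted forms are exact. First, writing u = ln ρ and Q(t, u) = ρ R(t, ρ) turns ∂_t R = (γ/2)(ρ²R)″ into ∂_t Q = (γ/2)(∂²_u Q + ∂_u Q), whose solution for Q(0, u) = δ(u) is

```latex
\[
Q(t,u)=\frac{1}{\sqrt{2\pi\gamma t}}
\exp\!\left[-\frac{\left(u+\gamma t/2\right)^2}{2\gamma t}\right],
\]
```

so ln ρ is Gaussian with mean −γt/2 and variance γt, R is log-normal, and ⟨ρ⟩ = 1 stays constant, as it should for the Eulerian mean density. Second, assuming the two-dimensional operator has the form ∂²_x + coth(x) ∂_x (up to an overall constant), as the advective term F′_x / tanh(x) suggests, the substitution μ = cosh x gives

```latex
\[
\partial_x=\sinh x\,\partial_\mu,\qquad
\partial_x^2F+\coth x\,\partial_xF
=(\mu^2-1)F''(\mu)+2\mu F'(\mu)
=\partial_\mu\!\left[(\mu^2-1)\,\partial_\mu F\right],
\]
```

which is the Legendre operator, with eigenfunctions P_n(μ) and eigenvalues n(n+1).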
This property is a consequence of integrability of the initial Hamiltonian H S (Eq. (9)), and will be of use in Sec. V when we calculate the moments of passive vectors. The nature of the solution can be easily understood if we first consider only the advective term F ′ x / tanh(x). The characteristic of Eq. (18) then satisfiesẋ = 1/ tanh(x), which implies that F is advected to regions where |x| ≫ 1, and, for t → ∞, the asymptotic solution can be found from (18) by approximating tanh(x) ≈ 1. The asymptotic is log-normal as expected. Note that the reflection symmetry x → −x of Eq. (18) is just a consequence of the previously mentioned general symmetry λ 1 ↔ λ 2 , and does not add anything new. The function F must be initially chosen in such symmetric form. This is not so in the three-dimensional case that we now consider in more detail. The symmetry with respect to all permutations of eigenvalues λ 1 , λ 2 , λ 3 , leads to the following two symmetries of the solutions of Eq. (19): x → −x, y → y − x; and x ↔ y. (20) Eq. (19) posesses another (reflection) symmetry as well: x → −x, y → −y, which corresponds to the inversion of λ 1 /λ 3 , λ 2 /λ 3 , and does not follow from (20). Therefore a general initial distribution should contain both symmetric, F s , and antisymmetric, F a , parts with respect to this reflection. The symmetries (20) act as reflections (21) on the points of the plane located on the lines y = 2x, y = x/2, and y = −x; hence the antisymmetric part of the PDF F a must vanish on these lines. Characterictic trajectories of Eq. (19) are presented in Fig. 1. The lines y = ±x, y = 2x, y = x/2, x = 0, and y = 0 are combined in groups that are transformed by the symmetries (20) independently. Those groups correspond to sheet, tube, and strip volume deformations as shown. Let us concentrate our attention on the sector x ≥ 0, y ≤ 0. Due to the symmetries (20) and (21), this allows us to understand the behavior of the PDF in the entire plane (x, y). Considering the characteristic trajectories (they advect F towards the line y = −x from both sides), or the flux of the conserved function F (x, y)|J(x, y)| (calculated on the line y = −x, it is found to be directed from the semisector with positive F a to that with negative F a ), one can show that the antisymmetric part of the PDF decays with time. The symmetry of the solution with respect to the sheet and tube configurations thus emerges asymptotically as t → ∞. Numerical simulations performed for various initial distributions concentrated in the region |x| ≤ 1, |y| ≤ 1, confirm that the PDF becomes symmetrized very fast, at times t ∼ 1. In the region |x| ≫ 1, |y| ≫ 1, far from the lines y = x, y = 0, and x = 0, the long-time (t ≫ 1) asymptotic is log-normal. This asymptotic can be easily obtained from Eq. (19). V. PASSIVE VECTORS In this section we apply the developed formalism to passively advected vectors. Consider the evolution of the coordinates of a particle advected by the fluid: x i = x i (t, y), where y i is the initial position of the particle, i. e. x i (0, y) = y i . An infinitesimal contravariant vector a i changes under such coordinate transformation as follows: a i (t, x) = (∂x i /∂y k ) a k 0 (y). In order to find the mean of any object constructed out of a i , we have to average it with respect to the initial distribution of a i 0 (y) and with respect to all realizations of the random velocity field ξ k . The latter averaging can be done via the PDF's for co-and contravariant (metric) tensors. 
Let us assume that the initial distribution of the vector a i is Gaussian, isotropic, and independent of y: a i 0 a j 0 = δ ij . As an example, consider the moments A n = |a| 2 : whereĝ is the contravariant tensor advected by the fluid and with the initial condition g ij (0, y) = δ ij . The distribution of this tensor can be found in the same way as that of the covariant tensor, and is discussed in Appendix C. Moments of a covariant vector a i can be found using exactly the same formula (22), withĝ now the covariant tensor. To simplify the formula (22), we note that the eigenvalues of the matrixĝ can be expressed as λ i = g 1/d exp(ζ i ). Therefore, for all n, g −n/d Tr (ĝ n ) depend only on ζ i , and are independent of the determinant g. Since the initial distribution of g is S(g) = δ(g−1), we can average powers of g independently and obtain: where the functions f d (n, t) do not depend on the statistics of the determinant, and are, therefore, universal. These functions are the same in the co-and contravariant cases, and in both Eulerian and Lagrangian frames. The only parts of the moments A n that are non-universal are the averages of the determinant. These averages can be calculated exactly using formulas (13), (17), and (C3): The universal functions f d (n, t) can, in fact, be easily calculated directly (cf. [4,5,15,16]), if one starts from the equation for advection of a passive vector a i (t, x): where statistics of ξ i (t, x) are given by (2). However, for methodical purposes, we prefer to rederive this result using the technique of Jack polynomials. While it is also quite simple, it illustrates the general method that can be applied to finding moments of any passively advected tensor. At the end of this section, we show, e. g., how moments of a bilinear form a i b k can be caclulated. Formula (23) can be further simplified if we do the average with respect to the distribution of a i 0 . Introducing the generating function we represent f d as follows: The Gaussian average with respect to the initial distribution of the vector can now be easily done, resulting in where the remaining averaging is with respect to the statistics of ζ i . The PDF of the ζ's is F (ζ)|J(ζ)|δ( ζ i ) with the initial condition δ(ζ 1 ) · · · δ(ζ d ). It is important that the function that is being averaged in (29) is the generating function for a particular class of Jack polynomials, that are eigenfunctions of the self-adjoint Sutherland operator H S in (9). Therefore, all functions (28) can be found exactly in the general case. The appropriate calculation is carried out in Appendix B. The answer is: where we denote: (d/2) n = (d/2)(d/2+1) · · · (d/2+n−1). In the two-dimensional case the corresponding result can be obtained in a rather simple manner, which nevertheless illustrates the main idea of the general derivation. In order to do this, we notice that the generating function Z(β), expressed in the two-dimensional case in terms of x = 1 2 (ζ 1 − ζ 2 ) (see Sec. IV), coincides with the generating function for the Legendre polynomials P n (cosh(x)), and, therefore, f 2 (n, t) = n! P n (cosh(x)) . The average can now be completed with the aid of Eq. (18). Multiplying it by |J(x)| P n (cosh(x)), integrating by parts twice, and using the equation for the Legendre polynomials, P ′′ n (µ)(µ 2 − 1) + 2µP ′ n (µ) = n(n + 1)P n (µ), we get: which is in agreement with (30). As an example, consider moments of a magnetic field advected by the fluid. 
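This example rests on the frozen-in (Cauchy) form of the kinematic induction equation; for reference, and stated here as the standard result rather than as a formula quoted from the paper, it reads

```latex
\[
\frac{B^{i}(t,\mathbf{x})}{\rho(t,\mathbf{x})}
=\frac{\partial x^{i}}{\partial y^{k}}\,
\frac{B^{k}_{0}(\mathbf{y})}{\rho_{0}(\mathbf{y})},
\qquad \mathbf{x}=\mathbf{x}(t,\mathbf{y}),
\]
```

so that B^i/ρ transforms exactly like the infinitesimal contravariant vector a^i introduced above.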
The contravariant vector in this case is B i /ρ, where ρ is the density of the fluid. Let us denote the moments of B i as H n = |B| 2n . Recalling that, in the contravariant case, g = 1/ρ 2 , we get from (23): where for the g average we use the formula (25) in Eulerian frame, or (24) in Lagrangian frame. An analogous derivation can be carried out for a covariant vector, e.g., gradient of a passive scalar ∇θ. For its moments C n = |∇θ| 2n , we find: where for the g average we use formulas (24) or (25) depending on the frame of reference. On passively advected tensors We now briefly demonstrate how one can calculate exactly the moments of a passively advected higher-rank tensorT . Suppose that we are interested in some moment T m . After averaging with respect to the initial distribution ofT , we are left with a combination of Tr k (ĝ n ), which are polynomials of degree nk in the eigenvalues of the metricĝ. But any symmetric polynomial of degree m can be expanded in Jack polynomials of degree m, which can then be averaged exactly. The result will therefore be a linear combination of exponents growing at the rates given by (B10). We are very grateful to Russell Kulsrud and Alexandre Polyakov for many important discussions. We would also like to thank John Krommes for useful comments. This work was supported by the U. S. Department of Energy Contract No. DE-AC02-76-CHO-3073. One of the authors (SAB) was also supported by the Porter Ogden Jacobus Fellowship from Princeton University. APPENDIX A: PDF OF EIGENVALUE RATIOS The δ-function in (10) can be integrated over, and ζ 1 , . . . , ζ d reduced to d − 1 independent variables, viz. the logarithms of the eigenvalue ratios: x n = 1 2 log(λ n /λ d ) = 1 2 (ζ n − ζ d ). In these variables, the equation for F becomes: The last two terms correspond to interactions between different x's and only enter for d ≥ 3. The normalization rule now is: This form of the equation for F is most convenient for numerical solution and for geometric analysis such as that of Sec. IV.
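As a complement to Appendix A, a toy Monte Carlo in the spirit of Sec. V can illustrate the multiplicative stretching statistics discussed there. The sketch below is not the paper's velocity model: the white-in-time gradient matrix is drawn with independent Gaussian entries rather than with the correlator (2), so only the qualitative behaviour (linear-in-time growth of both the mean and the variance of ln|a|², i.e. log-normal one-point statistics) should be expected to carry over.

```python
import numpy as np

# Toy illustration: a passive vector a_i stretched by a short-correlated
# random velocity gradient, discretized as independent Gaussian matrices.
# This is a simplified stand-in for the model of Sec. V, not its exact correlator.
rng = np.random.default_rng(0)
d, dt, n_steps, n_samples = 3, 1e-3, 1000, 2000

log_a2 = np.zeros(n_samples)
for s in range(n_samples):
    a = np.ones(d) / np.sqrt(d)                 # unit initial vector
    for _ in range(n_steps):
        sigma = rng.normal(size=(d, d))         # white-in-time velocity gradient
        a = a + np.sqrt(dt) * sigma @ a         # da_i = sigma_ik a_k dW_t
        norm = np.linalg.norm(a)
        log_a2[s] += 2.0 * np.log(norm)         # accumulate growth of ln|a|^2
        a /= norm                               # renormalize to avoid overflow

print("mean ln|a|^2:", log_a2.mean())           # grows linearly in time
print("var  ln|a|^2:", log_a2.var())            # also linear in time: log-normal |a|^2
```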
2018-04-03T00:42:58.221Z
1999-07-27T00:00:00.000
{ "year": 2000, "sha1": "b37a6b405ac18806921dfba8825564580e217674", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b37a6b405ac18806921dfba8825564580e217674", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
58221123
pes2o/s2orc
v3-fos-license
A cost-effective analysis of various disease modifying anti-rheumatic drugs for patients with Rheumatoid Arthritis

ABSTRACT Rheumatoid Arthritis (RA) is a chronic and usually progressive inflammatory joint disease. Patients suffer from pain, stiffness, impaired function in daily life and at work, increased dependence on family and friends, and decreased participation in leisure activities. RA is associated with morbidity and worsening of quality of life. Treatment involves non-steroidal anti-inflammatory drugs (NSAIDs), glucocorticoids and early use of disease-modifying anti-rheumatic drugs (DMARDs) and biologicals. Biologicals are very expensive and cannot be afforded by the majority of patients. The management of RA has become very expensive due to increased use of out-patient medical services, increased hospitalization and major work disability in the course of the disease. Since a number of treatment options are available for these patients, and as these options not only differ in efficacy but also vary widely in cost, it has become important to determine their cost-effectiveness. A number of economic evaluations have been performed in western countries, whose results may not be extrapolated to the situation in developing countries. Economic analysis therefore needs to be conducted in developing countries so that resources are used at maximum efficiency.

INTRODUCTION Rheumatoid Arthritis (RA) is a chronic and usually progressive inflammatory joint disease. Patients suffer from pain, stiffness, impaired function in daily life and at work, increased dependence on family and friends, and decreased participation in leisure activities. RA is associated with morbidity and worsening of quality of life. 1 Treatment involves non-steroidal anti-inflammatory drugs (NSAIDs), glucocorticoids and early use of disease-modifying anti-rheumatic drugs (DMARDs) and biologicals. 2 Biologicals are very expensive and cannot be afforded by the majority of patients. The management of RA has become very expensive due to increased use of out-patient medical services, increased hospitalization and major work disability in the course of the disease. Since a number of treatment options are available for these patients, and as these options not only differ in efficacy but also vary widely in cost, it has become important to determine their cost-effectiveness. 3 A number of economic evaluations have been performed in western countries, whose results may not be extrapolated to the situation in developing countries. 4 Economic analysis therefore needs to be conducted in developing countries so that resources are used at maximum efficiency. This study was designed to find the most cost-effective therapy among the various treatment alternatives with DMARDs in RA patients.

METHODS The present study was conducted as a prospective, observational study extending over a period of six months (September 2010 to March 2011) to examine, from a patient perspective, the cost-effectiveness of treatment options with DMARDs considered affordable to the study population: Methotrexate alone, Hydroxychloroquine alone, and the Methotrexate-Hydroxychloroquine combination in rheumatoid arthritis. The study was carried out at the Rheumatology clinic of the Medicine Department in a tertiary care teaching hospital in south India and was approved by the Institutional Ethics Committee.
Inclusion criteria • Age more than 18 years • With a history of seropositive rheumatoid arthritis • Has been prescribed with the protocol medications i.e., Methotrexate alone, Hydroxychloroquine alone, Methotrexate-Hydroxychloroquine. • Has been prescribed with similar non-protocol medications like corticosteroids, NSAIDs etc. Exclusion criteria • Patients with co-morbidities like liver failure, interstitial lung diseases • Hospitalized patients • Conditions like pregnancy, lactation All the patients satisfying the inclusion criteria were selected for the study. Three groups of patients were considered for the study: • Patients taking Methotrexate alone • Patients taking Hydroxychloroquine alone • Patients taking Methotrexate and Hydroxychloroquine Patients taking non-protocol medications like NSAIDs or corticosteroids as adjunctive therapy may also be considered. An informed consent was obtained from all the patients. A questionnaire was then administered to the patients satisfying the inclusion criteria and collected data about patients' demographic details, disease activity, functional disability, medications and those concerning costs. To assess the functional disability," Stanford Health Assessment Questionnaire -Disability Index" was also administered. 5,6 The patients were followed up every two months for four months The HAQ-DI at the baseline was compared with that of final follow up. The change in HAQ-DI and the total cost for the follow up period were used to find out the average cost -effective ratio. HAQ-DI consists of 16 questions on different activities grouped into eight domains. The highest score of each domain was summed and divided by eight to yield a continuous score from 0 (able to perform activities without difficulty) to 3 (unable to perform activities). The effect of DMARD treatment was calculated from the difference between the HAQ score at baseline and end of study (ΔHAQ -DI). 7 Costs are elicited from patient perspective. To evaluate the economic consequences, both direct and indirect costs were included. Direct costs are costs that are directly related to the intervention. It involves both medical and non-medical costs. [8][9][10] Direct medical cost involves: 11 • Cost of medications (both protocol and non-protocol medications) • Cost required for laboratory investigation • Cost of toxicity arising due to treatment • Payment to the healthcare professional The cost of commonly prescribed brands of each medication was collected from the nearby community pharmacies as well as from CIMS. The drugs were differentiated according to their strengths and the average cost for a single dose was calculated. For obtaining the daily cost, this average cost was multiplied with the dosing frequency. The average cost for laboratory tests was found out to calculate the ADR monitoring cost. To calculate the cost for healthcare professional's time, the salary of health care professionals was collected from the accounts department. Then mean salary per minute was calculated according to the formula: Mean salary /min. = Annual salary (Hours/week) X (No. of weeks /annum) X 60 Direct non-medical cost comprises of: • Transportation and food costs (average costs of visit per head X no. of persons X no. of visit) • Out-of-pocket expenses for disease related activities and purchases (cost of knee cap, collar bandage etc.) Indirect cost includes loss of productivity due to rheumatoid arthritis related disability. This is calculated by human-capital approach. 
Monetary value of man days lost = No. of man-days lost X Personal daily income. The result of CEA is expressed as average cost effectiveness ratio (ACER). 12 Statistics The SPSS software was used to analyze the statistics. Chi square test was done to check the baseline significance between the groups. ANOVA was done to find the significance level between the groups. In this study, p<0.05 is considered statistically significant. RESULTS A total of 129 RA patients were enrolled in the study. In one patient, the protocol medication was stopped because her disease was in remission. 37 patients were dropped out from the study during follow up due to financial constraints. Data from these patients were excluded from the analysis. Consequently, a total of 91 patients were included in the analysis; 43 patients in combined treatment group -MTX + HCQ; 37 patients in MTX group and 11 patients in HCQ group. Mean age of the study population was 50years. Female patients accounted for 81% of the study population. Mean (SD) disease duration was 5.24 (4.62) years. Most of the patients were already on therapy with DMARDs with mean (SD) treatment duration of 2.92 (3.36) years. Commonly seen co-morbidities were diabetes mellitus (10 patients), hypertension (10 patients) and thyroid disorders (4 patients). Majority of the patients had no co-morbidities (68 patients). Majority of the study population (61.5%) were not employed. 36.3% were working as daily wages. There was no statistically significant difference in the demographic characteristics -age, gender, disease duration, treatment duration, and duration of current therapy-between the three treatment groups under study. There was no statistically significant difference in the baseline disease activity -swollen joint count (SJC), tender joint count (TJC), HAQ-DI score, ESR, duration of morning stiffness and pain between the three treatment groups under study (Table 1). The most commonly seen ADR was gastritis due to NSAIDs. Alopecia, breathlessness and pruritus due to Methotrexate, facial oedema due to steroids were also seen. One patient had to stop Methotrexate due to pruritus. Comparison of swollen joint count (SJC) There is no statistically significant difference in the reduction of swollen joint count between the three treatment alternatives under study. Comparison of tender joint count (TJC) There is no statistically significant difference in the reduction of tender joint count between the three treatment alternatives under study. Comparison of erythrocyte sedimentation rate (ESR) 17 There is no statistically significant difference in the reduction of ESR between the three treatment alternatives under study. Comparison of duration of morning stiffness 18 There is no statistically significant difference in the reduction of morning stiffness between the three treatment alternatives under study. Comparison of HAQ-DI There is an overall significant improvement in HAQ-DI of all the patients (P value 0.000). But there was no significant difference in the HAQ-DI between three groups. Comparison of pain (%) There is an overall significant improvement in pain of all the patients (P value 0.000). But there was no significant difference between the three groups. Comparison of costs between groups 19 Direct costs accounted for 64.1% of the total costs. Of these, direct medical costs represented 58.4% and direct non-medical costs represented 5.8%. Indirect costs comprised 35.8% of the total costs ( Figure 1). 
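A minimal sketch of the cost bookkeeping and the ACER calculation described in the methods above may make the procedure easier to follow. All numbers are hypothetical placeholders, not study data, and the helper names are invented for illustration. Note also that the printed formula for mean salary per minute appears to have lost its division sign; the sketch assumes the intended reading, annual salary divided by (hours per week × weeks per annum × 60).

```python
import statistics

def mean_salary_per_minute(annual_salary, hours_per_week=40, weeks_per_annum=52):
    # Assumed reading of the formula in the text (division, not multiplication).
    return annual_salary / (hours_per_week * weeks_per_annum * 60)

def patient_total_cost(drug_cost_per_day, days, consult_minutes,
                       clinician_annual_salary, visit_cost, persons, visits,
                       man_days_lost, daily_income):
    direct_medical = drug_cost_per_day * days + \
        mean_salary_per_minute(clinician_annual_salary) * consult_minutes
    direct_non_medical = visit_cost * persons * visits       # transport, food
    indirect = man_days_lost * daily_income                  # human-capital approach
    return direct_medical + direct_non_medical + indirect

def acer(total_costs, delta_haq_di):
    # Average cost-effectiveness ratio: mean total cost per mean HAQ-DI improvement.
    return statistics.mean(total_costs) / statistics.mean(delta_haq_di)

# Hypothetical example for one treatment group over a four-month follow-up:
costs = [patient_total_cost(4.0, 120, 45, 600_000, 150, 2, 3, 5, 300)
         for _ in range(3)]
print(round(acer(costs, [0.40, 0.35, 0.45]), 1))   # rupees per unit HAQ-DI gained
```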
Direct medical costs There was statistically significant difference in acquiring the protocol medications -MTX, HCQ (p = 0.000), and folic acid (p = 0.000), and monitoring ADR (p = 0.000) between the various treatment groups. There was no statistically significant difference in acquiring OTC and complementary medications and in total cost required to prevent or treat ADR (p value 0.328 and 0.836 respectively) between the treatment groups. There was no significant difference in total cost required for nonprotocol medications (p = 0.222) between the treatment groups. There was statistically significant difference in total direct medical costs between the three treatment groups (P = 0.000). Figure 1: Distribution of total cost. There was statistically significant difference in the total direct cost (P =0.000), highest value (₹1637) was for the combination group (MTX-HCQ). But there was no significant difference in the total costs (P= 0.376) between the treatment groups. Average cost effectiveness ratio 20 The least ACER (₹ per outcome) was obtained for Hydroxychloroquine (2,544) and highest ACER was obtained for Methotrexate (6,125). But there was no statistically significant difference in ACER between various treatment groups. DISCUSSION Being a chronic disabling disease RA requires long time treatment with drugs. DMARDs are commonly prescribing group. Increase in the healthcare costs and limited healthcare resources, cost effective analysis of drugs are gaining much importance in developing countries like India. Majority of our study population were females. Mean age was 49 years and mean disease duration was around 5 years. The patients should obtain remission or at least a low level of disease activity using the most cost-effective therapy. Most of the patients were already on therapy with DMARDs. 21 The baseline disease activity parameters like swollen joint count, tender joint count, ESR, duration of morning stiffness, pain and HAQ-DI were similar between the groups. Direct costs accounted for 64.2% of the total costs. Of these, direct medical costs represented 58.4% and direct non-medical costs represented 5.8%. Indirect costs comprised 35.8% of the total costs. There was statistically significant difference in the total direct cost; highest value was for the combination group (MTX+HCQ). But there was no significant difference in the total costs between the treatment groups. The least ACER (₹ per outcome) was obtained for Hydroxychloroquine (2,544) and highest ACER was obtained for Methotrexate (6,125). Patients had established RA with mild to moderate disease activity and slightly impaired functional status at study entry. However, approximately baseline HAQ score was greater than 1, which indicated clinically significant disability. 5 Since indirect cost considers the loss of productivity, this result indicates the extent of disability caused by the disease. Most of the patients had to quit their job due to the disease. This contributed highly to loss of productivity. 22 The study revealed that there was an overall significant improvement in the swollen joint count, tender joint count and HAQ-DI, but the differences were not significant between the three treatment groups. This shows that the effects of therapy were almost similar among the groups. 23,24 The study shows that there was a significant difference in direct medical costs i.e., the costs for acquiring the protocol medications, folic acid and monitoring ADRs. 
The combination group (MTX-HCQ) showed higher costs for acquiring the as mentioned category. Moreover, the combination group showed a high value for the total direct cost. But the total cost (direct and indirect costs) showed no significant difference between groups. This may be due to the difference in indirect costs i.e., the combination therapy might have improved the functional status of patients and hence loss of productivity may be minimum for the combination therapy. 25,26 In this study, MTX+HCQ were the mostly prescribed combined DMARDs. Manathip Osiri et al, determined the cost-effectiveness of various DMARDs compared with HCQ for rheumatoid arthritis (RA) treatment. The study concluded that MTX + HCQ was less costly and more effective than HCQ alone. MTX + SSZ and triple therapy (HCQ + MTX + SSZ) were more effective than HCQ with additional costs. 10 Axel Finckh et al, assessed the potential cost-effectiveness of major therapeutic strategies for very early RA. The study concluded that very early intervention with conventional DMARDs is cost-effective but the costeffectiveness of very early intervention with biologics remains uncertain. 4 Considering direct cost MTX is found to be superior among the three. Rheumatoid arthritis is a chronic disabling disease affecting joints causing destruction. Our study was planned for four months follow up due to constraints of time which may not pick up the long term improvements in such patients. Since not many studies are conducted in this aspect in South India, our results would provide basic data for future long duration communitybased pharmacoeconomic studies.
2019-01-16T14:18:21.188Z
2018-05-22T00:00:00.000
{ "year": 2018, "sha1": "0e61eace86d5c37d7a80637f46960390df635b7d", "oa_license": null, "oa_url": "https://www.ijbcp.com/index.php/ijbcp/article/download/2519/1956", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "01d19e0dcfd646ef16cbe53a8872afae0133c934", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
268885607
pes2o/s2orc
v3-fos-license
Speeding Up Time: New Urinary Peptide Clock Associates Greater Air Pollution Exposures with Faster Biological Aging

A new tool using urinary peptide levels showed that biological aging was accelerated relative to chronological age in participants in a Belgian study who had higher exposures to certain air pollutants. Urinary peptide clocks capture biological rather than purely chronological age, suggests Tim Nawrot, a professor of environmental epidemiology at Hasselt University in Belgium and the paper's co-senior author. "That means they can help us understand how air pollutants affect aging and age-related diseases."

The study population of 660 men and women was part of a cohort enrolled between 1985 and 2004 in the prospective Flemish Study on Environment, Genes, and Health Outcomes (FLEMENGHO). 14 The researchers used Belgium's National Register of Natural Persons to record all-cause and cardiovascular mortality outcomes among participants annually until 30 June 2019. The team estimated daily exposure to coarse and fine particulate matter (PM10 and PM2.5, respectively), black carbon, and nitrogen dioxide based on land cover data and the distance from each participant's residential address (between 2010 and 2014) to the nearest monitoring station for ambient air pollution. 15 Due to the study's relatively small geographic area, average daily exposures to each of the four pollutants during the five-year period were highly correlated. 10

Previously described technology 16 identified and quantified urinary peptides, which are metabolic waste products that the kidney helps eliminate from the body. 17 The researchers used 54 age-associated peptides that predicted mortality in a previous FLEMENGHO analysis 16 to define accelerated biological aging. After adjusting for other cardiovascular risk factors, the researchers found that an increase from the 25th to the 75th percentile of PM2.5 exposure was associated with a 1.2-year increase in biological age. Similar differences were observed for the other three pollutants. [10][20]

[Image: Smoggy morning light blurs a historic building wrapped in scaffolding and a new office tower, with steam vapors rising in the foreground. Brussels is shown here during a 2009 smog alert; the capitol building is wrapped in scaffolding. © Benoit Doppagne/Belga/AFP via Getty Images. Science Selection]

Earlier analyses of FLEMENGHO participants' biosamples found that higher levels of a plasma protein called dephosphorylated uncarboxylated Matrix Gla protein (dpucMGP) were associated with greater cardiovascular mortality. 13 Because vitamin K is required to convert dpucMGP to MGP, which is a compound that inhibits arterial calcification, 21 high plasma levels of dpucMGP may indicate low vitamin K levels and reduced arterial health. 22 The new study 10 identified a stronger positive association between air pollutants and biological aging in participants with high dpucMGP levels. For this group, higher pollutant exposure was associated with a 2.2-year increase in biological age.

According to M. Kyla Shea, a scientist in the Jean Mayer USDA Human Nutrition Research Center on Aging at Tufts University, the clinical significance of plasma dpucMGP levels is controversial 23,24 because levels may depend not only on vitamin K but also on other cardiovascular risk factors, such as body mass index and waist circumference. 25
Shea, who was not involved in the study, notes that even when vitamin K supplementation lowered plasma dpucMGP levels in randomized controlled trials, these lower levels did not consistently translate to reduced vascular calcification, 26 calling into question the validity of dpucMGP as a biomarker of vascular health.

Protection from inflammation is an alternative mechanism by which vitamin K may influence cardiovascular health. 27,28 However, "most of the evidence for anti-inflammatory effects of vitamin K has been derived from in vitro studies and animal models and has not yet been well substantiated in human studies," says Shea.

Douglas Walker, an associate professor of environmental health at Emory University who also was not involved in the study, is intrigued by the fact that urinary peptidomics confirmed previously reported associations between air pollutants and accelerated biological aging. "This is cutting-edge research since urinary peptidomic clocks offer unique insights into both the human proteome and protein metabolism," says Walker. "But it will be important to validate this new tool in independent and more ethnically diverse populations."

Silke Schmidt, PhD, writes about science, health, and the environment from Madison, Wisconsin.
2024-04-05T05:10:16.323Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "fd3f528566b2cc2b8fa5f75100ea96735b8299f2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "fd3f528566b2cc2b8fa5f75100ea96735b8299f2", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
265916124
pes2o/s2orc
v3-fos-license
Logics and Enablers of Transformative Innovation Policies The case of the Colombian Appropriation of Science and Technology Policy In this work, we seek to answer the question, what are the main logics and enablers underlying the implementation of TIP policies in countries of the Global South? We address this question using the Path-transformative heuristic (Pinzón-Camargo et al. , 2020; Pinzón-Camargo, 2022). This heuristic combines two approaches, path dependency and institutional entrepreneurship theories, to explain the processes, decisions, and actions carried by actors in building an alternative path and how they face internal and external pressures that could support or damage their processes. Using an illustrative case based on the Colombian Social Appropriation of Science, Technology and Innovation policy, we examine in-depth interviews and secondary data on the underlying logics and enablers of innovation policies with transformative potential. This work allows us to identify six underlying logics in three of the four phases of the Path-transformative heuristic and six enablers extended through all the transformative pathway. Those elements bring a starting point to unfold and better understand TIP in the Global South. INTRODUCTION Recent debates on science, technology and innovation (STI) policy are moving rapidly towards new frames that are concerned with societal and environmental challenges and the needed transformative change in these realms.Particularly, since Schot and Steinmueller (2018) distinguished transformative innovation policy (TIP) as a new frame, it has quickly pervaded policymaking circles. Transformative innovation encompasses a broad set of practices that adopt a direct approach on development (Arocena & Sutz, 2017) and that intend to foster major long-term changes in sociotechnical systems, i.e. transformations in broader institutions, practices, infrastructures, networks, among other elements that sustain those realms where society and technology are embedded (Geels et al., 2004).This means that transformative innovations aim at transforming unsustainable production patterns, but also incentivizing the necessary cultural and behavioural changes (Steward, 2008;Weber & Rohracher, 2012). Under this frame, transformative innovation policy (TIP) is "a set of public actions and instruments, through which governments mediate and mobilise resources towards more sustainable and inclusive sociotechnical systems via the promotion of knowledge and innovation production, diffusion and use with a long-term perspective" (Ordóñez-Matamoros et al., 2021, p. 119).Here, innovations seek to introduce changes at the level of broad societal functions or sociotechnical systems (Geels, Elzen, & Green, 2004;Steward, 2008).This implies new rationales for governmental intervention that go beyond market and systemic failures (Woolthuis et al., 2005) to include transformational failures that governments should address to boost transitions (Weber & Rohracher, 2012;Schot & Steinmueller, 2018). This particular policy frame is acquiring a prominent popularity within scholar and policy circles in the Global South, with an active diffusion and impulse given by global partnerships, e.g. 
the Transformative Innovation Policy Consortium -TIPC-composed by innovation policy agencies from Colombia, Finland, Mexico, Norway, South Africa and Sweden, and coordinated by the Science Research Policy Unit -SPRU-at the University of Sussex in the United Kingdom and its sister project Deep Transitions coordinated by SPRU and the Centre for Global Challenges of University of Utrecht.However, the growing fondness of governments towards the explicit implementation of this approach contrasts with its actual viability, especially in the Global South. For instance, in the case of Colombia, a transformative STI policy approach was adopted by the national STI governmental agency in El Libro Verde 2030 in 2018, a policy document that explicitly orients STI policy towards the achievement of the Sustainable Development Goals.Nevertheless, the implementation of this policy approach in Colombia has not gotten sufficient support, among other things because of political reasons. In this vein, it is possible to identify at least three reasons that arguably explains why an explicit TIP has not been implemented in Colombia. First, El Libro Verde 2030 was launched during the final months of the government 2014-2018, and the last administration 2018-2022 was not clear about this frame in its governmental program.Furthermore, there has not been a visible support to this policy document by governmental agencies in other sectors different from STI.Part of the problem is perhaps that El Libro Verde 2030 depicts a rather normative narrative with no clear implementation plan.A recently elected new government for the period 2022-2026 seems more attuned with the TIP discourse, but it is still too early to conclude about real change. Second, the sort of systemic transformations proposed by this policy frame are difficult to achieve in a country whose economy relies heavily on incumbent regimes based on extractive industries (e.g.mining, monoculture plantations, extensive stock farming), with path-dependence dynamics sustained by political elites that inhibit transformative change at the regime and system level.In other words, pretending to foster systemic transformations on a top-down basis seems to be less viable in political terms than steering bottom-up transformations at the local level.In this respect, although the new government claims it will diversify the economy to be less dependent on extractive sectors based on oil, coal and gas, the real substitution will heavily depend on its ability to mobilise sufficient political support in an adverse context, where the war in Ukraine has led the economy to benefit from raising prices and pressing social demands needing government subsidies funded by such royalties.Even in the context of the new government narratives, knowledge, science and technology is not part of the equation where tourism, another extractive activity, is seen as the chosen sector to substitute the funding necessary currently originating from the mining sector. Finally, the implementation of this approach in Colombia has been limited by the dissonance between explicit and implicit STI policies, i.e. 
when STI policies enacted in formal policy documents, laws, executive decrees, among others, are incoherent with implicit STI policies that express the actual demand of society for knowledge, as well as the role and value that people ascribe to knowledge to address societal challenges (Herrera, 1973).In this particular case, while El Libro Verde 2030 was enacted as an explicit STI policy that intends to implement global development agendas on a systemic basis focusing on societal and environmental transformations based on people's needs at the local level, implicit STI policies focus on economic growth and competitiveness. This example makes us ask about what are the main logics and enablers underlying the implementation of TIP policies in countries in the Global South.This overarching question leads to making a first step to explore those elements based on an illustrative case from Colombia.In this vein, the illustrative case we study aims to bring insights from the Colombian case as building blocks for further discussions about the logic and enablers of TIP policies in the Global South.Therefore, we are not looking to extrapolate results from an illustrative case for all Global South. The study of the possible logic and enablers underlying the TIP policies requires changing the field of analysis.This change means moving from a normative stance towards a positive analysis of STI policies designed intentionally with a transformative ambition unattached to the multilevel perspective (Geels, 2002) and TIP conceptualisations (Schot & Steinmuller, 2018;Gosh et al., 2021).This sort of policy has been thought of with transformative intentions and implemented for a while now to produce the societal and environmental transformations needed by communities at the local level. We analyse the case of the Colombian Social Appropriation of Science, Technology, and Innovation policy (hereinafter SASTI-policy).This policy shows, we claim, a long trajectory and transformation in its objectives and policy instruments, thanks to key roles played by institutional entrepreneurs.Hence, while in the early 1990's it was mainly a policy focussed on fostering scientific knowledge diffusion in a vertical relationship between academia and society, at the beginning of the XXI century, its directionality was changed by key actors and events.This change entailed a different meaning of this policy at the national level and the development of new policy instruments to address societal and environmental challenges at the local level.Examples of these instruments are: i) A Ciencia Cierta and ii) Ideas para el Cambio, a couple of programs implemented in the frame of our main case study analysis, the SASTI-policy. We approach this case using an interpretative heuristic: the Path-transformative heuristic (Pinzón-Camargo, Ordoñez-Matamoros, & Kuhlmann, 2020).It allows us to inquire on the role of institutional entrepreneurs in shaping innovation policies with transformative potential, in a broader context of interactions between innovation policy, theory and practice (Kuhlmann, Smits & Shapira, 2010;Kuhlmann & Ordóñez-Matamoros, 2017).With this, we contribute to the reflection on the third innovation policy frame identified by Schot & Steinmueller (2018), from the perspective of a country from the Global South.This enables us to identify the peculiarities of this type of policies and to forecast their implications in this particular context. 
The reminder of the paper is as follows: after this introduction, section 2 presents the main tenets of the path-transformative heuristic, which offer the conceptual elements to analyse the Social Appropriation of Science, Technology and Innovation Policy (hereinafter SASTI-policy) case.Section 3 defines the methodological and heuristic approach, where the SASTI policy is briefly described, and section 4 presents the results of the analysis, the path-transformative heuristic is used to analyse the SASTI policy case.We discuss these results in section 5, and propose some final reflections in section 6. CONCEPTUAL APPROACH In order to understand the transformative potential of existing STI policies, as mentioned in the previous section, we are going to follow the Path-transformative heuristic developed by Pinzón-Camargo (2022) and Pinzón-Camargo, Ordóñez-Matamoros & Kuhlmann (2020).This heuristic offers a conceptual approach to understanding and unfolding processes of change based on the role of actors, mainly institutional entrepreneurs as agents of change.In this vein, the heuristic, as exploratory strategy (Kuhlmann, Stegmaier, & Konrad, 2019), combines in a layering process two literature branches, Path dependence and Institutional Entrepreneurship. The path dependence theory, as the first heuristic's layer, is understood as a never-ending process of path dependence, path destruction and path creation (Hirsch & Gillespie, 2001;Martin & Sunley, 2006).This understanding of path dependence differs from the canonical comprehension of the concept developed by David (1985) and Arthur (1989), and it includes the interpretation offered by Garud and Karnøe about path creation (Garud & Karnøe, 2001a;Garud & Karnøe, 2001b;Karnøe & Garud, 2012).The second layer in the path-transformative heuristic is provided by the institutional entrepreneurship theory. In this case, based on the works by DiMaggio (1988), Battilana, Leca, & Boxenbaum (2009) it is possible to position institutional entrepreneurs as agents who can explain the process of path creation, path destruction and path dependence.In this vein, these actors provide an endogenous explanation to the building paths processes and therefore, processes of institutional change (Pinzón-Camargo 2022).However, it is worth pointing out that these Institutional Entrepreneurs' agencies are distributed and relational (Garud & Karnøe, 2003;Cabero Tapia, 2019;Pinzón-Camargo, 2022), which means that institutional entrepreneurs are not heroes but are part of actors' constellations which work together performing differently roles.The first activity is to identify the practices that support introducing a change regarding the dominant setting identified in the first phase.The second one is to unveil the possible pressures that could undermine or stock institutional entrepreneurs' efforts to build the path-transformative process.Finally, the last phase tries to capture those endeavours by the institutional entrepreneurs to consolidate the new path, besides possible factors that contribute to or challenge such a consolidation process. 
Based on Figure 1, Pinzón-Camargo (2022) develops a set of crucial concepts to follow the set of phases in the path-transformative heuristic.Table 1 introduces the concepts and their definitions.IEs are "agents who initiate, and actively participate in the implementation of, changes that diverge from existing institutions, independent of whether the initial intent was to change the institutional environment and whether the changes were successfully implemented."(Battilana, J., Leca, B., & Boxenbaum, E;2009 p. 69). These events can be both exogenous but also created by actors.In the case of exogenous events, they can be used by the actors to support their actions. Like critical junctures, the increasing returns can be produced and used strategically by the actors.They also emerge from "contingencies" that actors can manage to reinforce their path creation process. It is a set of actions and behaviours made by IEs to support their vision of change and the introduction of the divergent change or to consolidate their Path-transformative process. It is the set of narratives that combine the past, present, and future to support mobilizing skills and strategies from the IEs and their allies. They include old practices aligned with the new path's institutional logic and new practices.They are part of the niche that the IEs build by implementing their skills and strategies. It is a state of temporary stabilization that allows both positive and negative outcomes based on the process of critical revision and mindful deviation done by the IEs. Table 1.Main concepts to consider in a Path-Transformative Process. MATERIALS AND METHODS As we mentioned in the former section, this study aims to understand the main enablers, and underlying logics of innovation policies with transformative potential.In this vein, we decided to follow the Yin (2018) This case comprises a period between the early 1990s and 2021.In addition, the period was considered regarding the emergence of the idea of the social appropriation of Science, Technology, and Innovation in public policies in Colombia and its last advances. Before describing the policy and the data collected, it is worth pointing out why this country and this policy.Colombia was considered as an illustrative example of a country in the Global South with several complexities.In the first place, it is striving to find new pathtransformative processes after more than 60 years of an internal armed conflict between Colombian state forces, paramilitary and guerrillas.In the second place, this country has been acknowledged as one of the most unequal countries worldwide, with problems of poverty both in urban and rural areas.In third place, Colombia shows a high dependency on extractive and other non-sustainable industries that have caused environmental damage, requiring Innovation Policies with a Transformative Potential that addresses such challenges.Finally, this country faces weak democratic institutions, high levels of corruption and an incipient sense of public good, which characterises many countries in the Global South. These complexities have led to the necessity to find alternatives to transform the RESULTS The use of the Path-transformative heuristic leads to identifying the four phases that comprise a path-transformative process in the Colombian case, in particular in the SASTI policy.The results will be presented according to each of the four phases. 
The Preformation Phase

The Formation Phase

As an institutional entrepreneur, the Division has seven features that strengthen its role. First, it had a distributed leadership among its members (ColciCase-IT1, 2019). This distributed leadership was helpful to deal with the job instability that features the public sector in Colombia. Second, the Division was shaped by members with a background in, or a strong relationship with, Science and Technology Studies (STS). This quality contributed to defining the Division's directionality. Third, the Division was constituted by heterogeneous members. Therefore, it gave them the flexibility to attend to different work areas (ColciCase-IT1, 2019). Two qualities (fourth and fifth) that also distinguish this institutional entrepreneur are its opportunities-tracking and strategic-analysis capabilities. These features allowed the Division to: "take advantage of spaces or opportunities to involve the topic politically and conceptually. For example, in 2015, the linking of social appropriation to the sectorial guide, which is the guide to finance projects from the Science, Technology and Innovation fund" (ColciCase-IT1, 2019). The sixth and seventh qualities of the Division are linked with its recursion of talent and second-order learning (Rip, 1992; Kuhlmann, Shapira, & Smits, 2010).2 These characteristics were reflected in the members' capacity to overcome challenges in working with communities in remote areas of Colombia (ColciCase-IT2, 2019) and in designing and implementing policy tools to develop the Policy and the Strategy of Social Appropriation. The last characteristic of this Division as an Institutional Entrepreneur has been its resilience. This quality, along with its distributed leadership, has contributed to navigating the Colombian political and policy instability and to building trust and credibility with local communities in the country. As mentioned, the Strategy of Social Appropriation entailed the vision of change introduced by the institutional entrepreneur. This vision of change emerged from the discussions around the alternatives to build bridges between innovation and society in the critical juncture of the path-transformative phase. The vision of change was featured by the stream of the "strong" approach to scientific knowledge appropriation. This stream acknowledges innovative capabilities in all of society and not only in the scientific community. In that sense, it considers that knowledge production can emerge from co-production processes between different actors and that those processes could address daily problems (Jasanoff, 2004) (COLCIENCIAS, 2010; ColciCase-IT1, 2019). The Division implemented several strategies and self-reinforcing mechanisms to build a policy niche3 and, therefore, align and develop new practices to support the introduction of divergent change.4 Some of the strategies implemented by the Division are described in Table 2. In this table, the programmes Ideas para el Cambio and A Ciencia Cierta (hereinafter, the programmes) appear repeatedly, showing their centrality in fostering the Path-transformative process.
Besides the strategies described in Table 2, the institutional entrepreneur used self-reinforcing mechanisms1 to strengthen the Path-transformative process. Besides spreading the vision of change through policy documents and official presentations to researchers and policymakers, it was necessary to involve communities from cities and rural areas in the programmes: "So there was an intentional, a very intentional communication process in generating that facility and that confidence in the public so that they wanted to reach this type of experience. Moreover, from that, either it was the failure, or it was the triumph of the two instruments, because we did it badly and we scared them, or we did it well, and we generated what we wanted with those instruments; and so, that was the story of the two. So that is why colours, texts, images, names and everything are special and different." (ColciCase-IT2, 2019). Processes like building an allies' network to support the critical juncture or spreading the Division's vision of change are similar to what has been described in classical European and Latin American STI studies, which relied on activities like these (Callon et al., 1982; Thomas et al., 2019). The opportunities-tracking and strategic-analysis example described below also depicts the Institutional Entrepreneur's efforts: "In this sector, in Colciencias, it is very important to be in the policy documents, because if you are there, then there may be resources, there may be implementation; when you are not, it is an issue that can go unnoticed." (ColciCase-IT1, 2019).

Table 3. Examples of practices identified in this case.

First, legal procedures were aligned by using a traditional instrument in MinCiencias, the public calls, to make agreements with local communities and not with research groups, as used to be the practice. Second, it was necessary to adapt reporting procedures to accept payments made using non-traditional systems of transport. "It was like sitting down with them to explain the nature of the project, to show them how people lived a little and what the realities that were in the territory were like, so that they understood the adjustments we had to make there, internally, right. For example, the legalisation thing was crazy because, in the first version with the World Bank, they asked us to even RUT and invoice the donkey on which we went up. I mean, it was like: "no sir, there is no, I mean, they are indigenous, they do not have a RUT, sometimes they do not have an ID card". So, it was like making them understand those processes, to negotiate, for example, that a cash receipt would be worth me like this, or little things that sometimes became a super problem and that could stop the project or the strengthening process." (ColciCase-IT12, 2019). The institutional entrepreneur introduced the role of innovation in attending directly to local communities' and citizens' needs, including environmental, social, and economic needs, as a policy objective. Local communities and academia learnt how to work together. Academia learnt how to apply its scientific knowledge to co-produce solutions to community needs. The communities discovered in academia a partner to overcome their challenges. Local communities learnt or improved the use of ICT technologies to get in touch with the Ministry and other actors involved in the programmes and to make the reports required by MinCiencias (ColciCase-IT2, 2019). The programmes' public call objectives show the intention of addressing environmental, social, and economic practices.
Videos and actors' testimonies from the programmes' websites show, for example, the reinforcing of agroecological practices. The work by Pinzón-Camargo (2022) studies those practices in depth, based on three cases from the programmes.

The Development Phase

Both advances described in this phase (the organisational transformation of COLCIENCIAS into MinCiencias and the enactment of a new Policy of Social Appropriation of Knowledge in 2021) can be understood as part of the self-reinforcing mechanism of increasing institutional density (Pierson, 2000). To sum up, the above elements indicate that, despite the institutional entrepreneur's efforts to foster its Path-transformative process, it is still far from being a consolidated process.

DISCUSSION

In the following, we show the operating logics and enablers of the transformative pathway in the analysed case, in particular in the preformation and formation phases of our heuristic. In the first place, we identified six logics underlying the path-transformative process studied in the preformation, formation, and creation phases. Those logics are: i) technological determinism; ii) knowledge-dialogue (typically framed in the innovation systems approach, ISA); iii) technological facilitation; iv) mentorship; v) legality; and vi) visual representation and circulation. In the second place, the enablers extend across the whole transformative-pathway heuristic. We identified at least six enablers, namely: i) legitimacy inception of transformation; ii) discursive force; iii) policy-niche inner force; iv) migration of critical policy content; v) public deployment; and vi) sustained vision of change.

Underlying Logics

A remarkable underlying logic of the transformative path deployed by the SASTI-Policy in the preformation phase is that innovation is considered a driver to foster industrial productivity and competitiveness. It affirms the linear mode of production of knowledge, also known as technology-push or market-pull, and reinvents the hierarchical and highly criticised mode of relationship between knowledge producers and knowledge consumers attached to the old-fashioned paradigm of technological determinism (Feenberg, 1992). In the formation phase, three underlying logics emerged based on the role performed by the institutional entrepreneur. Those logics contributed to breaking the dominant technological determinism logic from the previous phase. The three logics are: i) mutual learning between academia, communities and citizens at national and local levels (multi-actor models: Sábato's triangle, Etzkowitz's triple helix model, ISA, etc.), based on the idea of non-hierarchical processes of knowledge-dialogue; ii) in technical terms, ICT technologies play a relevant role in building the path-transformative process, which can be named the technological facilitation logic; and iii) in social terms, the mentorship dynamics built up around the figure of the Godparents, which can be named the mentorship logic. These logics are key to triggering the creation phase of a transformative pathway.
Formation and creation phases share common underlying logics. For instance, the knowledge-dialogue logic based on mutual learning was operating in both phases. In managerial terms, at these phases and at the national level, we also recognise the logic of legality. This means that transformation can acquire momentum by relying on formal state mechanisms such as binding contracts with communities and researchers; without this, hardly any transformation could happen. Finally, visual representation is the last recognisable logic in these two phases. Audio-visual representation on web pages and other communicative pieces, as well as "real-people" testimonies of life transformation, configure a public perception that "things are going well". An innovation policy such as the SASTI-Policy and its implementation require social circulation: transformations in "communities" do not exist if there is no social understanding and appropriation that transformations are ongoing.

Enablers

In the preformation phase there are at least three enablers that trigger transformations: i) when high-level officials focus on STI and convene high-level and prestigious scientists, science gains social and political importance. The 1994 Mision de Sabios' interactions enabled discussion on the role of knowledge and the need to spread scientific knowledge at all levels of society; ii) this gave rise to the notion of Social Appropriation of Science and Technology, a very catchy name, catchy enough to produce a giant snowball that has pervaded a very large number of social and economic sectors up to the present day; and even enough, iii) to remain part of the public agenda since then. These three enablers can be named together as the legitimacy inception of transformation. Additionally, between the preformation and formation phases, new conceptualisations and discussions on the Social Appropriation of Science emerged. Apparently mirroring the old STS debate about the need to deepen the constructive character of the sociology of science exposed by David Bloor (1976), in the 2000s in Colombia the notion of "deficit" in social knowledge circulation appeared as a way to point out the importance of making a "strong appropriation of science" (De Greiff & Maldonado, 2011). That is, to stimulate flatter micro-power dynamics in knowledge production, circulation and use, in particular when scientists have to work together with or for communities. This enabler can be named the discursive force. In the formation phase, the "who" and the "where" are very important as enablers. As explained above, the Division in Colciencias was constituted by people with an STS background or a strong relationship with STSers. A heterogeneous group of officials facilitated work flexibility and the inscription of the idea of making another science: more local, pertinent and critical. This facilitates action in politics; in particular, officials who were very committed to communities tracked opportunities and made strategic analyses to their benefit. This deserves more research, especially to explore the "corpo-politics of knowledge" of the officials who conduct innovation policy in the Global South (Grosfoguel, 2011; Tlostanova, 2019).
Related to the latter, recursion of talent, second-order learning, resilience, and the capacity to act strategically are very significant enablers (Rip, 1992; Kuhlmann, Shapira, & Smits, 2010). The transformative pathway in the formation phase requires a focus on people and on what they interpret about their learnings, how they change their behaviour and how they withstand and face adversities, in particular when working with communities to gain trust and legitimacy. This enabler can be named the policy-niche inner force. Strategising on policy documents is also an important enabler. Anchoring critical elements of one document into others, as well as keeping a low profile for them in the documents' hierarchy, permits the sustainability of the group of officials involved in the Division, which institutionalises the policy niche. This enabler can be named the migration of critical policy content. At the creation phase, as noted in the logics section, the multi-actor interaction producing learnings at the national, regional and local levels is a transformative enabler in itself. However, at this stage, the IE's action, supported by policy instruments implemented over a period of a decade and launched periodically, is the most important enabler of transformation. This enabler can be named public deployment. Finally, at the development phase, the vision of change and the formulation of a policy itself are important enablers. Officials' efforts to sustain a particular vision of change contribute to making a new policy possible. Vision and policy constitute at the same time a self-reinforcing mechanism of increasing institutional density (Pierson, 2000), and are determinant at the last stage of a transformative pathway in TIP in the South. This enabler can be named the sustained vision of change. The positive turn of our analysis shows some logics and enablers based on the Colombian case.

Figure 1. This figure depicts the path-transformative heuristic developed by Pinzón-Camargo (2022). It illustrates a process divided into four phases. Those phases are the Preformation phase, the Formation phase, the Creation phase, and the Development phase. The first one is focused on describing the dominant setting and the contextual conditions where the Institutional Entrepreneurs are embedded; the qualities and features of the Institutional Entrepreneurs; and the conditions that produced the critical juncture that boosts the formation phase. The second phase describes the vision of change championed by the institutional entrepreneurs, and the enabling conditions, strategies, and self-reinforcing mechanisms that support a niche-building process. The creation phase draws attention to two activities.
In this sense, the SASTI-policy was identified as an effort of experimentation to build new development pathways in Colombia. This policy supported the study of the challenges associated with the operationalization of Innovation Policies with Transformative Potential in the Global South. In the second decade of the XXI century, the trajectory of this policy was changed by Institutional Entrepreneurs towards attending directly to the needs of local communities by using Science, Technology, and Innovation and involving different types of knowledge (Andrade-Sastoque & Balanzó, 2017; Balanzó, Andrade-Sastoque et al., 2021; Pinzón-Camargo, 2022). The objective of this policy was operationalized through two programmes implemented since 2012: Ideas para el Cambio and A Ciencia Cierta. These programmes used public calls that invite local communities and researchers to work jointly to address communities' needs or reinforce their path-transformative processes. Applications selected from these public calls receive funding and technical support to implement solutions co-created between the different actors involved (Balanzó, Nupia, & Centeno, 2020). The analysis in this study comprises a set of data constituted by three different sources. First, it includes seventeen interviews conducted in 2019 with current and former policy advisors at the Ministry of Science, Technology and Innovation (before 2020 known as Colciencias) who intervened in the SASTI-policy, and with actors from entities who have been working jointly with the Ministry. Second, it considers policy documents, official reports from the Ministry, proceedings from events, and information from the Ministry's website and from the websites of the programmes Ideas para el Cambio and A Ciencia Cierta. Finally, secondary information such as news from local newspapers, videos, journal articles, book chapters, and dissertations that directly or indirectly discuss the case was studied. The data considered in this study were processed using the software Atlas.ti, following the categories described in the Path-transformative heuristic. Findings from this analysis were discussed among the authors and with other researchers in different forums.
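As a purely illustrative sketch of this coding step, the short Python snippet below shows one way coded interview excerpts could be tallied by heuristic phase once exported from Atlas.ti; the excerpt-to-code pairings are hypothetical and are not taken from the actual dataset.

```python
from collections import Counter

# Hypothetical coded excerpts: (interview_id, code). Codes follow the
# heuristic's phases and concepts; the real coding was done in Atlas.ti.
coded_excerpts = [
    ("ColciCase-IT1", "preformation/critical_juncture"),
    ("ColciCase-IT1", "formation/vision_of_change"),
    ("ColciCase-IT2", "formation/skills_and_strategies"),
    ("ColciCase-IT12", "creation/practices"),
    ("ColciCase-IT10", "development/temporary_stabilisation"),
]

# Tally how many excerpts support each phase of the path-transformative heuristic.
phase_counts = Counter(code.split("/")[0] for _, code in coded_excerpts)
for phase, n in phase_counts.most_common():
    print(f"{phase}: {n} coded excerpt(s)")
```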
The preformation phase began in 1994 in the frame of the Science, Education and Development Mission (better known in Colombia as the Mision de Sabios). It was a meeting called by the President of Colombia to discuss the role that knowledge, science and education could play in the country's development process. Well-known researchers from different fields took part in such a meeting and delivered a blueprint about the role that the topics that convened the meeting should play (Daza-Caicedo & Lozano-Borda, 2013). In this frame, the notion of Social Appropriation of Science and Technology emerged in the public agenda (ColciCase-IT1, 2019; Daza-Caicedo & Lozano-Borda, 2013). This notion was developed to address the need to diffuse or spread scientific and technical knowledge to society (Aldana Valdes et al., 1996; Daza-Caicedo & Lozano-Borda, 2013). However, it is worth mentioning that efforts in science divulgation were already present at that moment (COLCIENCIAS, 2005). The Social Appropriation of Science and Technology was embedded in a dominant setting featured by four elements. The first one was a market liberalisation process triggered by the President between 1990 and 1994. Second, the role of Science, Technology, and Innovation (hereinafter Innovation) was understood under an indirect approach to development (Arocena & Sutz, 2017). Third, Innovation was understood under a linear mode of production. It also entailed that Innovation was considered a driver to foster industrial productivity and, therefore, to increase economic competitiveness. Finally, this period was featured by a vertical relationship between the knowledge producers (Academia, Industry and Government) and the knowledge consumers (Citizens). These four elements supported a process of path dependency in the role assigned to Innovation in Colombia (Pinzón-Camargo & Ordóñez-Matamoros, 2021). Discussions about the meaning and scope of the notion of Social Appropriation of Science and Technology increased in intensity between 2005 and 2010. The critical juncture was bounded by the enacting of two critical policy documents. The first one was the draft of the Policy of Social Appropriation of Innovation in 2005, which was never formally issued but acquired a certain legitimacy. Five years later, the second document was enacted, the National Strategy of Social Appropriation of Innovation. This critical juncture was built by the Science, Communication and Culture Division in Colciencias (hereinafter, the Division). This Division had been in charge of science divulgation activities since the early 1990s, and it was boosted after the Mision de Sabios. In this case, this Division embodied the role of institutional entrepreneur. The critical juncture was featured by intense discussions fostered by the Division around relationships between Innovation and society. Those discussions were boosted by several activities promoted by the Division in the period of the critical juncture (Daza-Caicedo & Lozano-Borda, 2013). They revolved around two approaches that aimed to address the relationship between Innovation and society. The first approach was represented by deficit models of innovation divulgation, and the second one was shaped by "strong" approaches to scientific knowledge appropriation (De Greiff & Maldonado, 2011).
1 Some examples of those self-reinforcing mechanisms were institutional density (Pierson, 2000) and financial investments. The first set of mechanisms can be illustrated by the enacting of policy documents and the anchoring of critical elements of some documents into others. From a managerial perspective, the anchoring strategy produced financial and political arrangements between entities that sustained the vision. For example, some of the public calls from the programmes have received funding from other national entities like the Ministry of Information and Communication Technologies or the National Service of Learning (SENA, by its acronym in Spanish). The second set of mechanisms has emerged from significant agreements between COLCIENCIAS and multilateral banks (ColciCase-IT1, 2019; ColciCase-IT2, 2019).

3 Policy niches are very similar to socio-technical niches, but in the context of policy formulation and implementation. They are protected spaces where the vision of change leads practices that divert the trajectory of mainstream policy. These niches provide conditions to experiment inside the public sector and in policy implementation spaces, which can lead to deep socio-technical transformations. Examples of these policy niches are known as "public policy pilots".

The implementation of this set of strategies and self-reinforcing mechanisms by the Institutional Entrepreneur produced two results. The first result was the possibility to develop and spread its vision of change. The second result was the building and shielding of the policy niche where the practices that nurtured the change were developed or aligned to the vision of change. The analysis of this last set of results draws attention to the creation phase in the Path-transformative heuristic. The institutional entrepreneur decided to name the Strategy of Social Appropriation of Innovation a "Strategy" in order to have a smooth and fast enacting process in 2010. Content-wise, the Strategy looks like a Policy. However, junctural situations like the beginning of a new presidential period and the traditional change of all persons in strategic positions, besides the complexity of the negotiations with other entities that a policy entails, explain this strategic decision (ColciCase-IT10, 2019).
A further effort was to involve other areas and instruments inside MinCiencias. The process of involving other areas inside MinCiencias has required periodical meetings to explain what the Division does (ColciCase-IT3, 2019; ColciCase-IT12, 2019), in a sort of continuous pedagogic process. The results of the programmes are published on the two websites designed for those programmes.1,2 Besides texts describing the projects supported by MinCiencias, those websites include videos with the communities' testimonies. The institutional entrepreneur used these results to motivate other entities to follow their path (ColciCase-IT1, 2019; ColciCase-IT5(Part1), 2019). Besides using the results from the programmes to motivate other actors, the visibility and exposure that they brought for Colciencias contributed to the standing of the Strategy (Pinzón-Camargo, 2019). To spread the vision of change and build the policy niche, the Institutional Entrepreneur anchored the Policy and Strategy of Social Appropriation of Innovation to the Innovation Law of 2009; to critical methodologies for the Innovation sector, like the model for measuring research teams (ColciCase-IT1, 2019); to policy documents like the National Development Plans (ColciCase-IT1, 2019); and to international studies from entities like the OECD (OECD, 2017). In general, the Division was aware of the need to anchor the Policy and Strategy in strategic documents to sustain the Path-transformative process. The Division also had to learn to work with local communities and citizens. It meant developing communicative skills and changing administrative procedures to attend to the needs of these communities. It also entailed the process of involving non-research partners to deploy the public calls at the local level. They had to develop methodologies and policy devices to support the programmes technically and organizationally. One of these devices was the figure of the Godparents. This figure is the name assigned to researchers who decided to support the projects without financial compensation, following a set of principles for interacting with the communities defined by MinCiencias (COLCIENCIAS, 2015). Over time, the Godparents figure became a recurrent practice in all the public calls. Besides, they introduced experimental approaches as part of improving the public calls (ColciCase-IT5(Part2), 2020). Financial and legal procedures were developed and aligned inside MinCiencias to report the financial payments and to legally bind agreements with local communities and researchers. Two examples can illustrate these practices.
The last phase in the Path-transformative heuristic depicts a situation where the process fostered by the Institutional Entrepreneurs arrives at the consolidation stage. To achieve the consolidation of the Path-transformative process, the institutional entrepreneur has continued implementing its strategies. The following are some examples of those strategies.
• The Institutional Entrepreneur continues to show results based on the programmes;
• It is looking for new allies like SPRU;
• It is anchoring the Strategy of Social Appropriation to critical sectorial documents like the Green Book (COLCIENCIAS, 2018);
• It is using its discursive capability to adapt its interests to appealing narratives like social innovation, public innovation, or transformative innovation.
In the last two years, these efforts to sustain its vision of change have produced two remarkable advances. The first one emerged from the new organisational transformation of COLCIENCIAS. This entity became the Ministry of Science, Technology and Innovation (MinCiencias) in 2019. In that transformation, the Vice-Ministry of Talent and Social Appropriation was established. Second, a new Policy of Social Appropriation of Knowledge was enacted in 2021. Those elements constitute a starting point to explain how a set of public actions can be established for mobilising resources towards more sustainable sociotechnical systems via the governmental promotion of knowledge (Ordóñez-Matamoros et al., 2021), with or without an underlying transition or mission-oriented ambition for TIP in the Global South. Policy experiments for localising the SDGs (Boni et al., 2021) and transformative outcomes (Gosh et al., 2020) are both normative and explicit, action-research-based ways of inquiring into TIP in the South. We call for complementing this type of TIP research in the South, in particular to understand what lies beyond the last stage of our heuristic, that is, the vision of change as an enabler of transformation. This case study methodology was used to build an illustrative case that brings insights into the transformative potential of innovation policies in the Global South. Following that purpose, the Social Appropriation of Science, Technology and Innovation Policy was chosen. It is a policy led by the Ministry of Science, Technology and Innovation of Colombia. This case was studied in previous work by the authors (Ordóñez-Matamoros, G. et al., 2021) as an illustrative example to understand what a Transformative Innovation Policy (TIP) could look like in practice. Although the case is the same, the analysis in this exercise takes distance from the first one in two senses. First, the amount of data (interviews and secondary information) is richer and more deeply studied. Second, and more relevant, this work studies the case to understand in depth the underlying logics and enablers of innovation policies with transformative potential.

Table 2. Strategies implemented by the Division to foster the Path-transformative process.
Examples include: forums (COLCIENCIAS, Universidad EAFIT, 2011; ColciCase-IT1, 2019); agreements with international entities like the Interamerican Development Bank and the World Bank (ColciCase-IT1, 2019); and alliances with actors involved in each of the activities launched by the Division, like the sponsors, beneficiaries and other actors involved in the programmes (ColciCase-IT1, 2019). "This group has been characterised by looking for others to work with, not designing from here only, but seeking alliances with others who are already working, entities, and to be able to work better." (ColciCase-IT1, 2019).
2023-12-07T16:08:13.287Z
2023-12-04T00:00:00.000
{ "year": 2023, "sha1": "f3f7a50e8a8e0dce9a2241c3f874f904a556858a", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.5380/nocsi.v0i5.93602", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "f785364c755328b84d8c266e056eedc8415579e5", "s2fieldsofstudy": [ "Political Science", "Economics", "Environmental Science" ], "extfieldsofstudy": [] }
81484672
pes2o/s2orc
v3-fos-license
Role of Candida in Catheter Associated Urinary Tract Infection Background: UTI in hospitalised patients due to Candida spp. is becoming increasingly common in the ICU setting. There is always a dilemma as to how to differentiate colonisation from true infection and whether to treat candiduria or not. The choice of antifungal is also controversial due to the low urinary concentrations of many antifungal drugs. Objective: This study was conducted to assess the significance of Candida spp. as the causative agent of symptomatic CAUTI in medical ICU patients and to perform microbiological characterisation of the Candida isolates and their antifungal susceptibility pattern. Methods: A total of 100 patients admitted to the medical ICU and put on a Foley's catheter were included in the study and followed up for the development of symptomatic CAUTI. The urine samples from the catheter were collected on day 1 and then on days 3, 5, 7, 10 and 14, and weekly thereafter until the patient was discharged, expired, had the catheter removed or developed bacteriuria or candiduria. The samples positive for Candida spp. were identified and processed as per standard guidelines. Results: In this study, it was found that 23% (6/26) of the symptomatic CAUTI cases were caused by Candida spp. Candida species comprised 15% of the causative organisms. Among the Candida species, non-albicans Candida spp. contributed 83.3% of the isolates and only 16.7% of isolates were Candida albicans. All Candida isolates were sensitive to fluconazole, voriconazole, amphotericin B and itraconazole. Conclusion: Symptomatic catheter associated urinary tract infection with Candida spp. is becoming increasingly common. Among Candida spp., non-albicans Candida is emerging as the predominant pathogen causing CAUTI.

INTRODUCTION

Catheter associated urinary tract infection (CAUTI) is the most common hospital acquired infection, accounting for more than 80% of nosocomial urinary tract infections (UTIs) [1]. The risk factors associated with CAUTI in adults mainly include intensive care unit (ICU) admission, broad-spectrum antibiotics, diabetes mellitus, increased age, and female sex [2,3]. The microorganisms causing CAUTI range from Gram-negative bacteria to Gram-positive cocci to Candida. UTI in hospitalised patients due to Candida spp. is becoming increasingly common in the ICU setting [27]. There is always a dilemma as to how to differentiate colonisation from true infection and whether to treat candiduria or not [28]. Symptomatic CAUTI is considered when symptoms/signs consistent with UTI exist along with candiduria in a catheterized patient [2]. The signs and symptoms are either localized to the urinary tract or can include otherwise unexplained systemic manifestations, such as fever [2]. The accepted threshold for bacteriuria/candiduria varies from 10³ colony forming units per millilitre (CFU/mL) to 10⁵ CFU/mL [2]. The choice of antifungals is also controversial due to the low urinary concentrations of many antifungal drugs [28]. This study was conducted to assess the significance of Candida spp. as the causative agent of symptomatic CAUTI in medical ICU patients and to perform microbiological characterisation of the Candida isolates and their antifungal susceptibility pattern.

MATERIALS AND METHODS

Approval of the Institutional Ethics Committee was obtained before starting the study. Informed written consent was taken from all the patients included in the study.
This was a cross-sectional study conducted at the Institute of Microbiology, Madras Medical College, in association with the Medical ICU, Rajiv Gandhi Government General Hospital, Chennai. It was of one year duration, from October 2014 to September 2015, and included a total of 100 patients admitted to the medical ICU. Those who were 18 years and above and put on a Foley's catheter were included in the study. The exclusion criteria included patients less than 18 years of age, those catheterised prior to admission to the ICU, those confirmed to have UTI on the 1st day, and those whose Foley's catheter was removed or who were discharged before the 3rd day of catheterisation. Data were collected from the patients using a preformed structured questionnaire. Physical examination findings and details of the clinical diagnosis were also noted. Daily examination of the patients was done to look for any evidence of urinary tract infection. The patients were followed till they developed bacteriuria/candiduria or were discharged, expired or had the catheter removed. Patients who were shifted to a different ward were followed for up to 48 hrs for the development of symptoms of CAUTI [3]. Urine specimens were collected aseptically from the Foley's catheter; a minimum of approximately 3 mL of urine was taken as a sample in a flat-bottomed universal container. The samples were taken to the laboratory within 1 hour of collection. The day 1 sample was taken to rule out the prior presence of UTI. The samples were repeated on the 3rd, 5th, 7th, 10th and 14th days and then weekly until catheter removal, or until the patient developed bacteriuria, or until discharge/death of the patient [1,3]. The patients were diagnosed as having symptomatic CAUTI as per the Centers for Disease Control and Prevention (CDC) guidelines of January 2014, which included the development of UTI caused by Candida spp., with a culture of ≥10³ CFU/mL on a specimen collected at least 48 hrs after hospital admission and a previous Candida spp.-negative culture [2]. Direct Gram's stain of uncentrifuged urine was done to observe for the presence of bacteria or Candida. Detection of nitrites and leucocyte esterase was done on uncentrifuged urine using a dipstick test. Then the urine sample was centrifuged at 3000 rpm for 3-5 minutes. A wet mount of the sediment was done and the number of pus cells per high power field was counted under the 40× objective. More than 5 WBC/hpf was considered significant for diagnosing CAUTI [1,4]. The specimens were cultured by the semi-quantitative method using MacConkey agar and blood agar as culture media. The plates were read after 24 hours of incubation for any growth [1]. Based on colony morphology on 5% sheep blood agar and no growth on MacConkey agar, colonies were suspected to belong to Candida species. Gram-stained smears showed Gram-positive budding yeast cells with pseudohyphae. Candida was speciated based on the germ tube test as Candida albicans or non-albicans Candida [5,6]. The Candida species were identified by the Dalmau plate culture method by the presence of hyphae, blastoconidia and chlamydospores [7,8]. Further speciation of Candida was done by sugar fermentation and sugar assimilation tests [5,6,8]. In the sugar fermentation tests, 2% sugars were used, which included glucose, maltose, sucrose and lactose. For the sugar assimilation test, carbohydrate discs (glucose, maltose, sucrose, lactose, cellobiose, galactose, trehalose, raffinose, xylose, inositol and dulcitol) were placed on yeast nitrogen agar and incubated for 24-48 hours at 25°C. The assimilation of a particular carbohydrate by the yeast was indicated by growth around the disc, and the pattern of assimilation was noted [1].
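To make the case definition described above concrete, the following is a minimal Python sketch of the decision rule as used in this study (≥10³ CFU/mL of Candida spp., specimen collected at least 48 hours after admission, a previously negative culture, and compatible signs or symptoms in a catheterised patient). The function name and inputs are illustrative only; the actual CDC surveillance criteria contain additional detail.

```python
def meets_symptomatic_cauti_definition(cfu_per_ml: float,
                                       hours_since_admission: float,
                                       prior_culture_negative: bool,
                                       has_uti_signs_or_symptoms: bool) -> bool:
    """Simplified check of the study's working definition of symptomatic CAUTI:
    Candida spp. at >= 1e3 CFU/mL in a specimen collected at least 48 h after
    admission, a previously negative culture, and signs or symptoms of UTI."""
    return (cfu_per_ml >= 1e3
            and hours_since_admission >= 48
            and prior_culture_negative
            and has_uti_signs_or_symptoms)

# Example: a day-5 catheter specimen growing 5 x 10^4 CFU/mL with fever and pyuria.
print(meets_symptomatic_cauti_definition(5e4, 120, True, True))  # True
```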
Speciation of Candida spp. was also done using Candida Chrom agar [5,8,9]. Candida spp. were subcultured onto Sabouraud's dextrose agar and then streaked onto a Chrom agar plate. This was incubated for 48 hours at 37°C and the colour and morphology of the colonies were noted. Antifungal susceptibility testing was done by the disc diffusion method and the minimum inhibitory concentration (MIC) method [10,11]. The drugs fluconazole (25 µg) and voriconazole (1 µg) were tested by the Kirby-Bauer disc diffusion method on Mueller-Hinton agar supplemented with 2% glucose and 0.5 µg/mL methylene blue. MIC by microbroth dilution was done for fluconazole, itraconazole and amphotericin B.

RESULTS

In this study, among the 100 patients enrolled, 26 developed symptomatic CAUTI. It was found that 23% (6/26) of the symptomatic CAUTI cases were caused by Candida spp. A total of 40 organisms were isolated. The majority of the organisms isolated belonged to the Enterobacteriaceae (34.5%) and the non-fermenters (32.5%). Candida species comprised 15% of the causative organisms. Among the Candida species, non-albicans Candida spp. contributed 83.3% of the isolates and only 16.7% of isolates were Candida albicans. Among the non-albicans Candida, two patients had Candida tropicalis and one patient each had a Candida krusei, Candida parapsilosis and Candida glabrata isolate. All Candida isolates were sensitive to fluconazole, voriconazole, amphotericin B and itraconazole.

DISCUSSION

Catheter associated urinary tract infection is the commonest device-associated nosocomial infection. The rate of device-associated infections shows variation in India. According to a study conducted by Angshuman Jana et al (2015) [12], the incidence was 31.85%. A study by Priya Datta et al (2014) [13] found the CAUTI rate to be 10.75%, Kamat et al (2009) [14] reported 33.6%, and Al Jebouri et al (2006) [15] reported 28.1%. In this study, out of 100 patients, 26 patients were diagnosed as developing symptomatic CAUTI during their course of hospitalisation. Therefore, the incidence was 26% and the CAUTI rate was calculated as 25.67 per 1000 catheter days. It is thought that candiduria is very common in hospitalised patients [16,17,18,19] and is mainly due to antibiotic usage [20]. In one point prevalence survey done in 228 hospitals from 29 European countries, 9.4% of nosocomial UTIs were found to be caused by Candida spp. The incidence of candiduria varies with the hospital setting and is most common in ICUs [19] and among those in burn units [34]. A study conducted by N. Febre et al (1999) found Candida spp. in 18.6% of urine specimens from patients with indwelling urinary catheters in the ICU. Other studies report that 11 to 30% of nosocomial UTIs are caused by Candida [22,23]. In the present study, 23% of the symptomatic CAUTI in the medical ICU was caused by Candida spp., and Candida comprised 15% of the total causative agents. Antifungal susceptibility in candiduric patients depends largely on the infecting strains. In this study, all Candida isolates were sensitive to fluconazole, voriconazole, amphotericin B and itraconazole.

CONCLUSION

Symptomatic catheter associated urinary tract infection with Candida spp. is becoming increasingly common. It is usually difficult to ascertain the difference between Candida colonization and infection.
Diagnosis mainly depends on the symptoms of UTI along with pyuria and high Candida colony counts in the urine. Among Candida spp., non-albicans Candida is emerging as the predominant pathogen causing CAUTI. Based on the clinical setting, the relevance of candiduria must be determined and an appropriate decision should be taken regarding the need for antifungal therapy. There is a need for further studies to determine treatment regimens for such patients so as to address some of the unanswered questions of when to treat, whom to treat and how long to treat. I acknowledge the immense help received from the scholars whose articles are cited and included in the references of this study. I am also grateful to the authors/editors/publishers of all those articles, journals and books from which the literature for this article has been reviewed and discussed.

Antifungal susceptibility of the Candida isolates (n = 6):
Voriconazole: resistant 0 (0.0%), sensitive 6 (100.0%)
Amphotericin B: resistant 0 (0.0%), sensitive 6 (100.0%)
Itraconazole: resistant 0 (0.0%), sensitive 6 (100.0%)
*: intrinsic resistance for C. krusei
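For readers who want to check the summary figures reported in the Results, the short sketch below recomputes the incidence and the rate per 1000 catheter-days. Because total catheter-days are not reported in the paper, the value used here is back-calculated from the stated rate and is for illustration only.

```python
# Reported figures: 100 catheterised patients, 26 symptomatic CAUTI cases,
# and a rate of 25.67 per 1000 catheter-days.
cauti_cases = 26
patients = 100
reported_rate_per_1000 = 25.67

incidence_percent = cauti_cases / patients * 100                      # 26 %
implied_catheter_days = cauti_cases / reported_rate_per_1000 * 1000   # ~1013 days (assumed)

def cauti_rate_per_1000(cases: int, catheter_days: float) -> float:
    """Standard device-associated infection rate: cases per 1000 device-days."""
    return cases / catheter_days * 1000

print(f"Incidence: {incidence_percent:.0f}%")
print(f"Implied catheter-days: {implied_catheter_days:.0f}")
print(f"Recomputed rate: {cauti_rate_per_1000(cauti_cases, implied_catheter_days):.2f} per 1000 catheter-days")
```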
2019-03-18T14:04:35.975Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "7c677d3841e0c38e32f2ad493c1a7ff6e4391062", "oa_license": null, "oa_url": "https://doi.org/10.31782/ijcrr.2018.10204", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ac6afb2bb2ef79a24b175d8ba224cb97a5b9ce8d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
2020933
pes2o/s2orc
v3-fos-license
Genistein suppresses FLT4 and inhibits human colorectal cancer metastasis. Dietary consumption of genistein, found in soy, has been associated with a potentially protective role in colorectal cancer (CRC) development and progression. Herein we demonstrate that genistein will inhibit human CRC cell invasion and migration, that it does so at non-cytotoxic concentrations, and we demonstrate this in multiple human CRC cell lines. After orthotopic implantation of human CRC tumors into mice, oral genistein did not inhibit tumor growth, but did inhibit distant metastasis formation, and was non-toxic to mice. Using a qPCR array, we screened for genistein-induced changes in gene expression, followed by Western blot confirmation, demonstrating that genistein downregulated matrix metalloproteinase 2 and Fms-Related Tyrosine Kinase 4 (FLT4; vascular endothelial growth factor receptor 3). After demonstrating that genistein suppressed neo-angiogenesis in mouse tumors, we examined FLT4 expression in primary CRC and adjacent normal colonic tissue from 60 human subjects, demonstrating that increased FLT4 significantly correlates with increased stage and decreased survival. In summary, we demonstrate for the first time that genistein inhibits human CRC metastasis at dietary, non-toxic, doses. FLT4 is identified as a marker of metastatic disease, and as a response marker for small molecule therapeutics that inhibit CRC metastasis.

INTRODUCTION

Colorectal cancer (CRC) is the third leading cause of cancer-related death in developed countries [1]. Increased incidence of CRC has also been observed in developing countries, likely due to associated changes in diet and environment. The five-year survival rate exceeds 90% in patients diagnosed with early stage CRC, while it is less than 20% for those with metastatic CRC [2]. The development of metastasis is therefore a major determinant of survival. There are currently no treatments that selectively inhibit processes that drive metastasis. Thus, the discovery and development of a safe and effective drug that is able to inhibit human CRC metastasis remains an important goal. It is widely known that there is a wide variation in cancer rates from country to country. In particular, Asians, who have historically consumed a traditional diet high in soy, have a low incidence of clinical CRC [3]. However, Asians who immigrate to the United States and adopt a Western diet have an increased incidence of CRC [4]. These findings demonstrate that dietary and/or lifestyle factors influence the incidence of CRC, and are consistent with the notion that soy consumption may be in part responsible. Further, there is a large body of epidemiologic evidence to suggest that diets containing high amounts of soy are associated with an overall low rate of CRC mortality [5][6][7]. Genistein (4',5,7-trihydroxyisoflavone), which is present in high amounts in soy products, has been specifically evaluated in many of these studies, and is thought to represent a key bioactive component. It is reported that among the adult Chinese and Japanese populations, the average daily dietary intakes of genistein are 39 and 47 mg, respectively, whereas for those consuming a traditional Western diet, average daily consumption is only 1-2 mg/d [8][9][10]. As the incidence rate of CRC is historically much lower in Japan and China compared to Western countries, one possible explanation for this, at least in part, is that Asians consume much more genistein.
Across a variety of experimental models, genistein has shown anticancer activity, typically in association with the suppression of cell proliferation and/or induction of apoptosis [11]. In CRC, previous studies have shown that genistein can decrease cell proliferation [12], and can induce G2/M phase cell cycle arrest and apoptosis [13]. Other studies implicate genistein's role in carcinogenesis through the epigenetic modulation of DNA, including DNA promoter methylation and histone modification, resulting in altered miRNA expression patterns [14]. However, genistein's effects are concentration dependent, the majority of these effects are observed in conjunction with high concentrations of genistein, i.e., mid-to-high micromolar concentrations, and the plethora of reported effects has raised concerns about specificity. At lower concentrations, overlapping with those achieved in the blood of humans after dietary consumption, genistein has been shown to inhibit cell motility and metastasis of human prostate cancer [15,16]. However, the role of genistein in other cancers in this regard remains to be defined. We therefore conducted the current study designed to determine whether genistein affected human CRC metastasis. Further, given the complexity of metastasis and our current inability to successfully therapeutically target it [16], we sought to use genistein as a probe to analyze the associated underlying molecular mechanisms. Metastasis is a complex, multistep process made up of a cascade of sequential steps involving changes in cellular invasion, migration, adhesion, movement of cancer cells through the circulatory system, and their reimplantation within a separate organ located at a distant site in the body, followed by colonization, tumor growth and the associated formation of new capillaries. In order to successfully metastasize, cancer cells must overcome what have been identified as three major barriers [17,18]. The first relates to cellular attachment to the extracellular matrix, and the basement membrane in particular. Cells must re-program themselves in order to survive when not stably attached. The second requires an increased capacity to produce proteases that are able to induce local degradation of the extracellular matrix. And the third involves the ability of cancer cells to migrate through such a modified matrix. Increased cell migration, coupled to increased extracellular matrix degradation, constitute the major components of the composite process of cellular invasion. Increased cell invasion is an essential characteristic of the metastatic phenotype, is absolutely necessary for cells to successfully traverse the metastatic cascade, and strategies designed to selectively inhibit this process are actively being pursued, but remain elusive. The ability to inhibit initial cell invasion would in essence prevent the development of the series of events downstream from it that together lead to metastasis. Therefore, we began to investigate the effects of genistein on cell invasion and migration at cellular level, and then moved on to test CRC metastasis, using in vivo models. In the current study, we demonstrate for the first time that genistein inhibits CRC cell invasive and migratory ability, and that it does so at concentrations that are not toxic to cancer cells in vitro. Using a clinically relevant orthotopic implantation murine model, we demonstrate that genistein inhibits human CRC cell metastasis. 
Based upon these positive findings, we went on to use genistein as a chemical probe to deepen our understanding of associated regulatory mechanisms. From an upfront screen, we went on to demonstrate that genistein suppresses expression of matrix metalloproteinase 2 (MMP2) and of Fms-Related Tyrosine Kinase 4 (FLT4), and that it does so in cells in vitro and in tumor tissue in vivo. Focusing upon this newly identified role for FLT4, we demonstrate that its overexpression in human CRC tissue is associated with increased stage and early death from the development of metastatic CRC. We hereby identify genistein as an inhibitor of human CRC metastasis and an inhibitor of FLT4 expression. Further, we identify FLT4 as a potential marker for the development of metastatic CRC.

Genistein's effects on CRC cell viability

As induction of cell death can falsely affect measurement of cell movement, our initial investigations sought to characterize the concentration of genistein that was not toxic to cells. We first performed a cell proliferation assay on HCT116, HT29 and SW620 cells, treated with different concentrations of genistein, and measured effects upon cell growth each day, for five days. As shown in Fig. 1A, genistein inhibited cell growth in a concentration- and time-dependent manner. At 10 µmol/L, no significant effects were observed until after 72 hr, and then they were only minor. In contrast, at 25 and 50 µmol/L, genistein inhibited cell growth at 48 hr and decreased it by up to 83% at 5 days. We further corroborated these effects by performing colony formation assays. After one day of pre-treatment with different concentrations of genistein, cells were cultured for another 10 days, and colonies counted. As depicted in Fig. 1B, 10 µmol/L genistein did not decrease colony formation, while both 25 µmol/L and 50 µmol/L genistein significantly decreased colony formation in a concentration-dependent manner, and did so in all three cell lines tested.

Genistein inhibits CRC cell invasion and migration

For cell invasion and migration experiments, cells were treated with 10 µmol/L genistein for a total of 48 hr (24 hr pre-treatment plus 24 hr during invasion or migration). As indicated above, this concentration has no impact upon cell viability at 48 hr, and even with continued treatment at 10 µmol/L, no effects are observed for another 48 hr (i.e., at the 96 hr time point), and are even then only minor. Cell invasion was measured by a Matrigel™ transwell invasion assay. As shown in Fig. 2A, genistein significantly inhibited cell invasion and cell migration in all three cell lines tested, inhibiting cell invasion by 36% to 56%, and inhibiting cell migration by 32% to 39%. We further corroborated these findings in a wound healing assay and in the Cellomics high content cell migration system. The wound assay is shown in Fig. 2B, and demonstrates that genistein significantly inhibits wound closure. With the Cellomics high content system, we tracked and quantified the movement of individual cells across a two-dimensional surface over a 12-hr period of time. As can be seen in Fig. 2C, genistein significantly inhibited the migration of all three cell lines tested, with statistically significant effects detected as early as 180 min.

Figure 1. (A) Cells were treated with 0, 10, 25 or 50 μmol/L genistein, and effects upon cell growth were measured daily over a five-day period. Data are the mean ± SEM OD value from a single experiment run in replicates of N=3; similar results were observed in a separate experiment, also N=3. * denotes p<0.05 comparing the cell viability on the fifth day to the control group (by Student's t test). (B) Effects of genistein on colony formation.
One day after pretreatment at the indicated concentrations of genistein, cells were plated into 6-well plates at 1000 cells per well in the absence of genistein; after 10 days, colonies were counted. Data represent the mean ± SEM from a single experiment run in replicates of N=3, and are expressed as the percent of control (i.e., 0 µmol/L/vehicle only); similar results were observed in a separate experiment, also N=3. One-way ANOVA was used to calculate the significance of differences among groups. SNK analysis was used to compare differences between two groups as indicated in the figure. ns denotes p>0.05, * denotes p<0.05 compared to the control group.

Genistein effects on distant organ metastasis in an orthotopic nude mouse model

The above findings demonstrate that genistein can inhibit the migration and invasion of CRC cells at concentrations below which cell toxicity is seen. Given the central role of cell migration and invasion in driving metastasis, we hypothesized that genistein would inhibit human CRC metastasis. We tested this in an orthotopic implantation model of human CRC cells in mice. This model closely recapitulates human disease in that it requires cells to complete all of the steps in the metastatic cascade, including initial steps, such as neoangiogenesis formation and invasion out of the colon tract, as well as latter steps involving formation of distant metastasis. Specifically, in this experiment, small pieces of tissue from subcutaneous CRC tumors, arising from subcutaneously implanted cells, were transplanted into the cecal wall. Three days after orthotopic transplantation, 25 mg/kg/d or 75 mg/kg/d genistein or sesame oil (as control) was orally administered 5 days a week until the end of the experiment.

Figure 2. (A) A transwell assay was adopted to analyze cell invasion and migration of CRC cells. Cells on the lower side of the transwell membrane were counted. Representative images of different cell lines under different treatment conditions, stained with crystal violet and originally imaged at 200× magnification, are depicted. Data depicted graphically are the mean ± SEM of a single experiment; similar results were obtained in a separate experiment performed at a separate time, each experiment in replicates of N=3. * denotes p<0.05 compared to the vehicle control group by Student's t test. (B) Wound healing assay. Cells were pretreated with 10 µmol/L genistein or vehicle for one day, the wound was created, and wound closure over 48 hr was measured. The wound gap was calculated to reflect the degree of cell migration. The area at 0 hr was set as 100 percent, and the percent changes at the 24-hr and 48-hr time points were recorded. Data represent the mean ± SEM of a single experiment, each in replicates of N=3. Similar results were obtained in a separate experiment performed at a separate time, also in replicates of N=3. * denotes p<0.05 compared to the control group by Student's t test. (C) Cell migration assay. After pretreatment of cells with 10 μmol/L genistein for one day, average cell migration distance was recorded every 45 min over a 12-hr period of time as described in Materials and Methods. The data are presented as one representative mean ± SEM from at least three separate experiments. * denotes P < 0.05 compared to control by Student's t test. GEN: genistein.
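The colony formation data in the Figure 1 legend above are analysed as percent of control followed by one-way ANOVA. Below is a minimal sketch of that workflow, assuming hypothetical colony counts (the actual counts are not reported in the text) and omitting the SNK post-hoc step, since SciPy does not provide it.

```python
import numpy as np
from scipy import stats

# Hypothetical colony counts (replicates of N=3) per genistein concentration (µmol/L).
colonies = {
    0:  np.array([210, 198, 205]),   # vehicle control
    10: np.array([204, 211, 199]),
    25: np.array([150, 142, 158]),
    50: np.array([95, 102, 89]),
}

# Express each group as percent of the vehicle-control mean, as in the figure legend.
control_mean = colonies[0].mean()
percent_of_control = {dose: counts / control_mean * 100 for dose, counts in colonies.items()}

# One-way ANOVA across the four groups (the legend's first analysis step).
f_stat, p_value = stats.f_oneway(*colonies.values())

print({dose: round(vals.mean(), 1) for dose, vals in percent_of_control.items()})
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")
```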
After treatment for 5 weeks, primary tumor size and metastases were measured. Metastases were quantified by measuring bioluminescence in whole lung and liver organs immediately after necropsy (Fig. 3A). As can be seen in Figs. 3B and C, genistein significantly inhibited metastasis to both lung and liver in a dose-dependent manner (ANOVA p < 0.001). With lung, metastases were significantly lower in the 25 and 75 mg/kg/d groups, as compared to control (SNK p < 0.05, respectively). However, the difference between the 25 and 75 mg/kg/d groups was not significant (SNK p = 0.226). With liver, metastases were also significantly lower in the 25 and 75 mg/kg/d groups, as compared to control (SNK p < 0.05, respectively), and metastases in the 75 mg group were significantly lower than those of the 25 mg group (p < 0.05). We did not find ascites in any of our mice, although it was reported in other studies that ascites appeared after orthotopic tumor implantation [19,20]. The existence of primary tumors and metastatic loci in lung and liver was confirmed by H&E staining (Fig. 3D). We also measured primary tumor weight and size for each mouse. As shown in Figs. 3E and F, genistein-treated groups showed a tendency toward reduced tumor weight (ANOVA p value = 0.276) and tumor size (ANOVA p value = 0.354), but the difference was not significant. Together, these findings demonstrate that genistein inhibits human CRC metastasis in a dose-dependent manner. Our previous study with genistein in mice, using doses at and above the current doses, comprehensively evaluated the systemic side effects of genistein and failed to find any toxicity [21]. For the current study, we therefore only measured body weight, in addition to observing animal behavior. Genistein had no effect upon behavior, nor upon body weight (Fig. 3G). These findings indicate that genistein did not exert systemic toxicity in mice at the given doses. It is recognized that the propensity to metastasize increases as tumor mass increases. As we found a tendency for reduced tumor weight by genistein, this raises the possibility that the differences in metastasis observed in the current study might just result from differences in tumor mass. If this were the case, then tumor mass should closely correlate with degree of metastasis. We went on to demonstrate that there was a poor correlation between tumor mass and metastasis at the individual mouse level (Fig. 3H, I). A weak positive correlation trend was noticed between tumor weight and lung metastasis (Pearson R=0.322, 95% CI: 0.020 to 0.570, P=0.038, R²=0.104), whereas the result for liver was not significant (Pearson R=0.119, 95% CI: -0.192 to 0.408, P=0.455, R²=0.014). These findings indicate that the anti-metastatic effect of genistein results from factors other than primary tumor growth.

Genistein's effects on angiogenesis and cell proliferation in vivo

We next examined expression in orthotopic tumor tissues of the cell proliferation marker, Ki67, and the angiogenesis marker, CD34, by immunohistochemistry. Our results showed that although the percentage of Ki67-positive tumor cells showed a decreasing trend after genistein treatments, the decreases were not statistically significant (one-way ANOVA p value = 0.126) (Fig. 4, upper panel).
This was in agreement with our above findings that no significant effect upon tumor weight was observed. Further, both of these findings are consistent with our in vitro results, which demonstrate the ability to achieve anti-motility effects in the absence of cell toxicity. However, genistein did significantly reduce microvessel density, as reflected by decreased CD34 staining in tumor tissues, and it did so in a dose-dependent manner, with the strongest reduction observed in the 75 mg/kg/d group (p=<0.000) (Fig. 4, lower panel). Identification of metastasis-related genes affected by genistein Having demonstrated genistein's ability to inhibit human CRC metastasis, we next sought to use it as a chemical probe to better understand the molecular regulation of CRC metastasis. We approached this by treating HCT116 cells with/without 50 μmol/L genistein for 1day, followed by screening for altered gene expression on an 84-gene human tumor metastasis PCR array platform. Genes with more than twofold changes were considered of interest. Based upon this, 8 out of the 84 genes were downregulated, while 4 were upregulated (Table S1). We then tested these 14 genes by qPCR and confirmed MMP2 and FLT4 RNA levels were decreased by genistein (Fig. 5A). Next, we examined the effect of genistein on MMP2 and FLT4 protein expression in HCT116, HT29, and SW620 cells by Western blot (Fig. 5B). In all three cell lines tested, genistein decreased the expression of both MMP2 and FLT4 proteins. To check if this was the case in vivo, we measured protein expression in tissue sections from tumors of genistein treated or control mice by immunohistochemistry. As seen from Figs. 5C and D, genistein significantly decreased both MMP2 and FLT4 expression in a dose-dependent manner. Taken together, these findings demonstrate that genistein selectively suppresses MMP2 and FLT4 expression, and that it does so both in vitro and in vivo. Prognostic role of FLT4 in human CRC The above findings demonstrate that genistein decreases MMP-2 and FLT4 expression coincident Oncotarget 3230 www.impactjournals.com/oncotarget orthotopically implanted with CRC cells. Images from luminescent imaging by IVIS system at the end of experiment were taken. The photon counts were automatically calculated by the software installed with the instrument. All exposure time and imaging parameters were set equally to generate comparable results. (A) Representative bioluminescent images from first batch of study with 6 nude mice in each cohort were presented. Left, whole mouse, upper right, lung, lower right, liver. The color scale depicts the photon flux (p/s) emitted. (One mouse in the control group died before the end of experiment.) (B, C) Imaging results of whole lung (B) and liver (C) organs were transformed to scatter plot scheme and the horizontal bar represented median value of each group. The Y axis was formalized as Log10 scale. Data from two batches studies, including total 14 mice in each group were presented. One-way ANOVA was used to calculate the significance of difference among groups. SNK analysis was used to compare differences between each two groups. ns denotes p>0.05. * denotes p<0.05 (D) Representative H&E staining of tissues from mice were displayed to show orthotopic tumor and metastatic loci in lung and liver. (E, F) Primary tumors formed by orthotopic implantation were separated and the weight (E) and volume (F) from each tumor were measured. Tumor volume was calculated as 0.52× (width) 2 × (length). 
The mean ± SEM value from each cohort (sesame oil, 25 mg/kg/d genistein and 75 mg/kg/d) was presented as bar chart. One-way ANOVA was used to calculate the significance of difference among groups SNK analysis was used to calculate significance of differences between each two groups. ns denotes p>0.05. (G) Mice weight was recorded every week after surgery. The mean ± SEM value of each cohort was presented. (P>0.05 by one-way ANOVA). (H, I) Correlation between tumor weight and distant organ metastasis was showed. The graph depicts the metastatic image signal at lung (H) or liver (I) plotted against the tumor weight for each mouse. The Pearson R between these two parameters was determined. GEN: genistein. Oncotarget 3231 www.impactjournals.com/oncotarget with inhibition of CRC metastasis. MMP-2 has a wellestablished role in regulating cancer cell invasion and metastasis, in a variety of cancer types (32,33), including CRC. Thus, while of clear importance to cancer, and to invasion and metastasis in particular, its role in this regard is relatively ubiquitous. Further, its function as an extracellular proteinase has been particularly difficult to therapeutically target with any type of selective efficacy. In contrast, FLT4 is an intracellular kinase, and little is known about its role in regulating metastasis, and CRC metastasis in particular. We therefore focused further investigations on FLT4. We utilized a colon human tissue array to examine the relationship between FLT4 expression and clinicopathological characteristics. This array included 60 cases of colon cancer tissues and normal colon mucosal tissues from the same patient diagnosed staining from each cohort were displayed. The percentage of Ki67 in each specimen was calculated and the microvessel density was determined by the average number of positive CD34 staining in five random selected 200× fields. Data represent mean ± SEM for all mice in a given cohort. Scale bar: 50 μm. One-way ANOVA was used to calculate the significance of difference among groups. For CD34 staining, Games-Howell analysis was used to compare differences between two groups. For Ki67 staining, SNK analysis was used to compare differences between two groups. * denotes p<0.05 compared to controls. ns denotes p>0.05 compared to controls. GEN: genistein. Results were presented as a percentage ratio with the control group set as 100%. One-way ANOVA was used to calculate the significance of difference among groups, SNK analysis was used to calculate significance of differences between each two groups. * denotes p<0.05 compared with control group. ns denotes p>0.05 compared with controls. GEN: genistein. www.impactjournals.com/oncotarget with colon adenocarcinoma. In this manner, we found that FLT4 expression level was significantly increased in CRC compared to that in paired non-cancerous tissues (Table S2). Furthermore, increased FLT4 was found to correlate with advanced clinical stage and with the presence of lymph node metastasis, but not with gender and age ( Table 1, Fig. 6A). We then went on to analyze the relationship between FLT4 expression and patient prognosis by the Kaplan-Meier method (Fig. 6B). Patients with higher FLT4 expression had a statistically significant worse prognosis. Specifically, among 60 patients examined, those exhibiting weak, moderate and strong staining had a median survival of 76, 49 and 21 months respectively (p value =0.001). 
These results demonstrate that increased FLT4 is a poor prognostic marker for CRC, and implicate it in CRC progression. DISCUSSION We demonstrate for the first time that genistein can inhibit the invasion, migration and metastasis of CRC cells. Further, we accomplished this with human CRC cells. Importantly, we demonstrate this function in vitro at concentrations that are non-toxic to cells. The nontoxic nature of therapy in association with therapeutic efficacy is further supported by findings in our systemic murine models. Specifically, we observed anti-metastatic efficacy in a dose-responsive fashion, while observing no evidence of systemic toxicity. Further, genistein did not significantly inhibit tumor growth, nor did it inhibit cell growth, as assessed by Ki67 expression. These in vivo findings directly support our in vitro ones, which demonstrate that anti-motility effects can be induced at concentrations that do not induce toxic effects. Finally, we further corroborated the specificity of in vivo findings by going on to demonstrate that there was a poor correlation between tumor size and number of metastasis. It is also of importance to note that antimotility efficacy is observed in vitro at concentrations that approximate those attainable in the blood with Oncotarget 3234 www.impactjournals.com/oncotarget administration of dietary doses to humans. Further, we demonstrate anti-metastatic efficacy in a murine model in which genistein is delivered via the oral route, as it would be through dietary consumption. Finally, we know from prior work by us that the doses we administered to mice provide blood concentrations that directly overlap with those attained by dietary consumption [22][23][24]. The clinical relevance of our findings is further supported by the fact that efficacy was observed at genistein dosages taken daily with Eastern-style diet or Western-style diet supplemented with genistein [10,24]. As our further analysis of primary tumor weight with the occurrence of lung or liver metastasis did not reveal their close correlation, our findings indicate that inhibition of CRC metastasis by genistein is not dependent on primary tumor growth. Together, these findings provide a mechanistic explanation for the lower incidence of clinical, i.e., metastatic, CRC observed in high soy consuming populations. Based upon these findings, it will be important to begin evaluating this potential mechanism in humans, and to compare findings in cohorts who consume high soy versus those who do not. Also, having demonstrated genistein's therapeutic efficacy, this led us to use genistein as a chemical probe. These studies were successful in that we found that genistein decreased expression of MMP-2 and FLT4. The finding that genistein decreased MMP-2 expression served as an important positive control for these studies. This is because MMP-2 has been widely implicated in cancer cell invasion and metastasis in a wide array of cancer types, including CRC (32,33). Further, genistein has been shown to decrease MMP-2 expression in human prostate cancer, coincident with its ability to inhibit human prostate cancer cell invasion [25,26]. Our identification that genistein decreased FLT4 expression was considered of particularly high potential importance by us. FLT4 is also known as vascular endothelial growth factor receptor 3 (VEGFR3), and it has been implicated in cancer related to its role in increasing neo-angiogenesis [27]. 
Previous work on the receptor has produced variable findings, with some studies identifying its expression in tumors but failing to find any correlation between level of expression and clinicopathological parameters [28], whilst several studies do report a positive and significant correlation between the level of FLT4 expression in the tumor and the development of metastasis and poor prognosis [29,30]. Our results point to a significant positive correlation between increased FLT4 expression and an aggressive tumor phenotype and resultant poor survival. Specifically, we demonstrated that elevated FLT4 was associated with lymph-node metastasis, advanced stage, and with early death from CRC. Our findings are in agreement with those drawn from the studies of others involving a larger sample analysis [29,30]. Furthermore, our findings are in agreement with the current biological role of the VEGF family of receptors, inclusive of FLT4, as drivers of neo-angiogenesis, and resultant metastasis. Our identification of FLT4 as an important regulator of cancer metastasis is supported by the work of others who report that the VEGF-C/FLT-4 axis promotes lung cancer cell migration and invasion [31]. Taken together, our findings implicate FLT4 in the regulation of CRC progression to a metastatic phenotype. It will be important for future studies to investigate the specific mechanism by which FLT4 acts to stimulate metastasis. Considering that FLT4 is a protein kinase, it raises the notion that a FLT4 specific kinase inhibitor may have significant anti-metastatic potential in CRC. In this regard, we want to highlight that although genistein is a known protein kinase inhibitor, it in fact decreased FLT4 expression at the transcript level, resulting in decreased protein expression. Therefore, it is unlikely that genisteinmediated inhibition of FLT4 kinase activity is relevant. Of interest, genistein decreased tumor-associated angiogenesis. There are several potential mechanisms that may be responsible for this finding. A likely one involves a direct extension of genistein's anti-motility action. In particular, it may be inhibiting endothelial cell movement. This notion is further supported by the fact that genistein decreases FLT4 expression. Other potential mechanisms include modulation of microenvironmental cytokines, altering epithelial to mesenchymal cell transition, as well as others. While our findings were the first to demonstrate that genistein inhibited angiogenesis in CRC, it has previously been shown to do so in other cancer types, including bladder and hepatocellular carcinoma [32,33]. Importantly, our identification of genistein-mediated suppression of FLT4 serves to provide a mechanistic explanation for its antiangiogenic action across several different tumor types. The process of neoangiogenesis has been recognized as a vital factor for sustaining tumor growth [34]. Our finding that genistein effectively suppressed formation of neo-angiogenesis has significant implications for the therapeutic use of genistein in humans. Anti-angiogenesis drugs, such as bevacizumab (anti-VEGF monoclonal antibody), have already been successfully approved for clinically treating many malignant tumors, including CRC. Nevertheless, the side effects associated with bevacizumab is a real concern clinically [35,36]. Thus, the identification of novel antiangiogenic agents with less side effects, such as genistein, is urgently needed, and should be further pursued in this regard in future studies. 
Altogether, in this study, we demonstrate for the first time that genistein is able to selectively inhibit human CRC cell motility and metastasis. We also demonstrate that genistein exerts such inhibitory effects at concentrations that approximate those attained with dietary intake. As such, our findings provide a solid mechanistic rational for epidemiologic studies which associate soy consumption with decreased metastasis. We demonstrate that genistein suppresses MMP2 and FLT4, coincident with inhibiting cell motility and metastasis. Suppression www.impactjournals.com/oncotarget of FLT4 is accompanied by inhibition of primary tumor neo-angiogenesis. Our findings provide a strong rationale for pursuing the clinical application of genistein to inhibit CRC metastasis. Moreover, based on our results, MMP2, FLT4 and CD34 could be used as biomarkers to monitor genistein efficacy in clinic. Cells HCT116, SW620 and HT 29 colon cancer cell lines were purchased from The Cell Bank of Type Culture Collection of Chinese Academy of Sciences (Shanghai, China). All cells were cultured in DMEM supplemented with 10% fetal bovine serum (FBS). HCT 116-LUC cells (stable luciferase expression) were used for orthotopic transplantation, and were established by lentivirus infection, followed by puromycin selection. Cell proliferation assay The CCK8 assay kit (Beyotime Institute of Biotechnology, Shanghai, China) was used to determine cell viability after genistein treatment. Briefly, cells were seeded in 96-well plate (1000 cells/well) overnight before complete medium containing different concentrations (0, 10, 25, and 50 µmol/L) of genistein was added. Then the cells were cultured for up to 5 days. At the end of each day, cell viability was measured by adding CCK8 agent at a 10 µL/100µL medium ratio. Cells were incubated for 2 hr before recording absorbance at 450nm using a Varioskan™ Flash Multimode Reader (Thermo Fisher Scientific, MA, US). Colony formation assay The procedures were performed as previously described by us [37]. Briefly, cells were pretreated with genistein or DMSO at the indicated dose for 24 hr before seeding on 6-well plate at a density of 1000 cells per well. After 10 days culture, the colonies were visualized by 0.3% crystal violet staining for 15 minutes. Excess crystal violet was removed by rinsing the plate with PBS. The visible colonies were counted and the colony formation rate of each group was calculated with the control group set as 100%. Transwell assays Transwell cell invasion and cell migration assays were used to evaluate both cell migration and invasion, as previously described by us [38]. Briefly, for invasion assays, 50,000 cells were pretreated with 10 μmol/L genistein or DMSO for one day before seeding in media without FBS into the upper chamber of each transwell, which was precoated with Matrigel. Media with 20% FBS was placed in the lower chamber and served as chemoattractant. Cells were allowed to invade for 24 hr with/without genistein treatment. After that, noninvasive cells on the upper surface of the membrane were removed by a cotton swab and cells on the lower surface of the membrane were fixed and stained with crystal violet. Invasive cells attached to the lower surface of the filter were visualized and photographed at 200× magnification using an Olymus BX51 microscope. Photographs of 3 random fields from 3 replicate wells were recorded, and the number of cells was counted. 
For migration assay, all procedures were the same as in invasion assays except that 25,000 cells were seeded in each chamber and there was no Matrigel coated on the membrane of the transwell chamber. Wound healing assay Cells were pretreated with 10 μmol/L genistein or DMSO one day before assay. Then, the confluent monolayer of cells was wounded gently by scratching with a 200-μL tip along the diameter of the well followed by PBS rinsing to remove debris. Fresh media, containing either genistein or DMSO, were then added. For each well, at least 3 pictures were taken with a microscope at a magnification of 100× at 3 time points (0, 24, and 48 hrs) after scratching. The degree of wound healing was represented by the percentage of the non-covered wound area. Single cell migration assay As described previously with modification [39], cells in log phase were seeded in 96 well plates (5000 cells/well) and incubated at 37°C to allow adhesion. After treating with 10 μmol/L genistein or DMSO for one day, cells were rinsed with serum free medium twice and stained with Hoechst 33342 for 15min at room temperature (RT). After rinsing, cells were imaged every 45 min over a 12-hr-period using Cellomics ArrayScan® VTI HCS Reader (Thermo Fisher Scientific), with an additional incubator to maintain and image cells at 37°C. Specifically, the instrument randomly selected 49 evenly distributed fields in each well and automatically calculated the average migration distance of each cell. Every group contained 5 separate wells, i.e., N=5 replicates. www.impactjournals.com/oncotarget Orthotopic murine model of colorectal cancer metastasis All the procedures involving animals were reviewed and protocols were approved by Xijing Hospital Animal Care and Use Committee. Six-to-eight week old, athymic, Balb/c mice were obtained from Vital River Laboratories (Beijing, China). To obtain subcutaneous tumor used for tissue transplantation, 3×10 6 HCT116-LUC cells were subcutaneously injected on each flank and allowed to form tumors. Two weeks after injection, subcutaneous tumors were isolated, cut into 2 mm pieces and kept briefly on ice until orthotopic implantation. For orthotopic transplantation, mice were anesthetized with pentobarbital before sterilizing. A 2-cm left abdominal flank incision was made and the cecum was isolated and fixed. A partial thickness cut in the cecal wall was made by fine needle, and a tumor piece was sutured into the incision. After re-inserting the cecum, the abdomen was closed with single layer suture [40]. Mice were randomly separated into three groups including, vehicle control (sesame oil), high-dose genistein (75 mg/kg/d) and low dose genistein (25 mg/kg/d). Three days after surgery, mice were weighted, and therapy begun and was administered daily, 5 days per week (Monday through Friday), for 5 weeks. Throughout the experiment, mice were weighed weekly. After 5 weeks treatment, mice were injected intraperitoneally with D-Luciferin (Caliper Life Sciences, Hopkinton, MA, US), allowed to move about freely for 3 minutes to promote absorption of substrate, were then anesthetized by isoflourane and underwent whole body imaging using the IVIS imaging system (Caliper Life Sciences, Hopkinton, MA, US). As the strong signals from the orthotopic tumor masked the much weaker metastatic signals, mice were then immediately necropsied, lung and liver harvested as whole organs, and their bioluminescence signals separately captured by IVIS imaging. 
Organs, and tumor tissues, were fixed in formalin, paraffin embedded, sectioned and stained with hematoxylin-and-eosin (H&E), and used for immunohistochemical analysis, as indicated. The weight of orthotopic tumor was measured, and its volume was calculated as 0.52× (width) 2 × (length) with measures taken in two perpendicular dimensions, as previously described [21]. Immunohistochemical staining All procedures were performed as previously described by us [21,38,41]. Briefly, formalin-fixed paraffin-embedded tissues were prepared in 4-μm sections. After performing dewaxing, rehydration, blocking endogenous peroxidase activity and antigen retrieval steps, sections were blocked with 10% normal goat serum at RT for 15 min and incubated with primary antibodies against MMP2, FLT4, Ki67 and CD34 (Abcam, Cambridge, MA, US) at 4°C overnight. Corresponding secondary antibodies, conjugated to HorseRadish Peroxidase (DAKO, Carpinteria, CA), were incubated at RT for 30 min. Staining achieved with a DAB kit (Zhongshan Golden Bridge Biotechnology Company, Beijing, China), per manufacturer's instructions. Microvessel density was determined on CD34 stained slides by counting five representative fields in each specimen under 200X magnification, as described [42]. Ki67 expression was determined as the percentage of positive cells in the representative areas examined in each specimen, as described by us [21]. The expression level of MMP2 and FLT4 was calculated by considering the ratio and intensity of staining [31]. The ratio score was determined as follows: 1 for <25%, 2 for 26-50%, 3 for 51-75%, 4 for >75%. And the intensity score was determined as follows: 1 for weak staining, 2 for moderate staining and 3 for a high level of staining. A final composite score ranging from 0-12 was calculated by multiplying the ratio score to the intensity score. The tissue array of human colon cancer and normal colon tissue was purchased from Xi'an Alena Biotechnology Company. Tissue arrays were constructed from tissue that had already been collected from patients undergoing standard-of-care treatment. Further, all tissue was de-identified, and no links back to the patient are available, nor will be attempted. Also, clinical data associated with each patient was provided by Alena Biotechnology. Here too, no patient identifier information is provided, all data has been de-identified, and no links back to patients are available, nor attempted. For FLT4 staining in the tissue array, the final result was recorded as negative staining=0 (score 0~2), weak staining=1 (score 3~5), moderate staining=2 (score 6~8) and strong staining=3 (score 9~12). All tissue was scored by a single person, and in a blinded fashion. PCR array Human Tumor Metastasis PCR Array (APHS-028A; Super Array Inc.) provided by Kangcheng Gene Chip Company (Shanghai, China) was used. Total RNA was extracted from genistein treated HCT116 cells or DMSO treated control cells using TRIZOL® reagent (Invitrogen, Carlsbad, CA, US), followed by synthesis of cDNA using SuperScript. III Reverse Transcriptase, 10mM dNTPs Mix, oligo (dT) 18 (Invitrogen) and RNase Inhibitor (Epicentre, Madison, WI, US), all per manufacturer's protocol. The mixture of cDNA template and a 2×Super Array PCR master mix was added to the wells of the PCR Array plate (384-well) containing the gene-specific primer sets before real-time PCR was performed. The PCR cycling conditions were as follows: 40 cycles of 95°C for 15 s, 60°C for 1min, and 72°C for 30s. 
Five housekeeping genes (ACTB/NM_001101, B2M/ NM_004048, GAPDH/NM_002046, HPRT1/NM_000194 and RPLP0/NM_001002) were used as internal controls. www.impactjournals.com/oncotarget The ΔCt value of each metastasis-related gene in each group was calculated. The differential expression of each gene was measured according to the comparative Ct method (ΔΔCt) [43], and the fold-change in difference between the genistein treated and DMSO treated control groups were compared. A fold-change greater than 2 was regarded as up-regulation, and a fold-change less than 0.5 as down-regulation. RNA extraction and qRT-PCR Total RNA from the colon cancer cells was extracted using TRIZOL® reagent (Invitrogen) according to manufacturer's instructions, and 500 ng RNA of each sample was subjected to cDNA synthesis using TaKaRa PrimeScript RT reagent kit (TaKaRa Biotechnology, Dalian, China). A Roche Light Cycler 480 PCR machine and SYBR®Premix Ex Taq™ Green I (TaKaRa) were used for the real-time PCR. The PCR program consisted of 40 cycles of the following steps: 95°C for 5s and 60°C for 30s. The 18S mRNA was set as the internal control and the final expression level of each gene was normalized to the control group. Primer sequences used in real-time PCR were listed as follows: FLT4 forward: GCCATGTACAAGTGTGTGGTCTC, FLT4 reverse: ACTTGTAGCTGTCGGCTTGG; MMP2 forward: CTCATCGCAGATGCCTGGAA, MMP2 reverse: TTCAGGTAATAGGCACCCTTGAAGA; 18S forward: CGGCTACCACATCCAAGGAA, 18S reverse: GCTGGAATTACCGCGGCT. Western blotting Protein isolation was performed as described by us [25]. Briefly, protein samples were prepared using RIPA lysis buffer (25 mmol/L Tris-HCl, pH7.5, 150 mmol/L NaCl, 1 mmol/L EDTA, 1% TritonX-100) containing protease inhibitor cocktail tablet (Roche Applied Science, Mannheim Germany). Proteins were separated by 12% Sodium dodecyl sulfate-poly-acrylamide gel electrophoresis and were transferred to a nitrocellulose membrane. After blocking with Tris-buffered saline containing 5% non-fat milk powder and 0.1% Tween-20 for 1 hr at RT, the membrane was incubated with anti-FLT4 (Cell Signaling Technology, Danvers, MA, US), anti-MMP2 ((Santa Cruz Biotechnology, Delaware, CA, USA) or anti-actin (Sigma-Aldrich, St. Louis, MO, US) at 4°C overnight. Goat anti-mouse secondary antibody (Boster, Wuhan, Hubei, China) was used to incubate the membrane for 1h at RT and enhanced chemiluminescence was then used to visualize protein bands in BIO-RAD ChemiDoc XRS Imaging system. Statistical analysis Results were presented as the mean ± SEM and analyzed by two-sided Student's t test or oneway ANOVA for continuous variables, as indicated. The bioluminescence values reflecting liver and lung metastasis were transformed to logarithm with base 10 before subjecting to statistical analysis. Student-Newman-Keuls (SNK) or Games-Howell approach was used to perform two-group comparison after one-way ANOVA depending on the homogeneity of variances. To evaluate the association between tumor weight and metastatic burden, the Spearman correlation coefficient was used. The relation between FLT4 expression and clinicopathological parameters was analyzed by Pearson Chi-square test. The overall survival was calculated using Kaplan-Meier method. SPSS 19.0 software (SPSS Inc., Chicago, IL, US) was used for statistical analysis. Statistical significance was considered present for P-values less than 0.05.
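For readers less familiar with the comparative Ct method used above for the PCR array screen, the following minimal Python sketch illustrates the fold-change computation and the two-fold / 0.5-fold calls; the gene and Ct values are invented for illustration and are not taken from the study.

def fold_change(ct_gene_treated, ct_ref_treated, ct_gene_control, ct_ref_control):
    # delta Ct = Ct(gene) - Ct(reference genes) within each group
    d_ct_treated = ct_gene_treated - ct_ref_treated
    d_ct_control = ct_gene_control - ct_ref_control
    # delta-delta Ct, then fold change = 2^(-delta-delta Ct)
    return 2.0 ** (-(d_ct_treated - d_ct_control))

fc = fold_change(26.1, 18.0, 24.3, 18.1)  # hypothetical Ct values for one gene
if fc > 2.0:
    call = "up-regulated"
elif fc < 0.5:
    call = "down-regulated"
else:
    call = "unchanged"
print(round(fc, 2), call)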
2017-04-01T17:12:11.625Z
2014-12-18T00:00:00.000
{ "year": 2015, "sha1": "2c645e5ecb25a7dfa80da4dadb9291acb1f24770", "oa_license": "CCBY", "oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=3064&path[]=5885", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "2c645e5ecb25a7dfa80da4dadb9291acb1f24770", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
119517724
pes2o/s2orc
v3-fos-license
CN-Stream: Open-source library for nonlinear regular waves using stream function theory CN-Stream is a library for the computation of nonlinear regular ocean waves. The library is developed in order to be easily integrated with wave generation models in CFD solvers. It is based on the stream function theory and provides significant improvements regarding the applicability of the method for waves close to breaking (in deep or shallow water) compared to the classical implementation of Rienecker and Fenton [26]. The complete description of the wave field is available, including the free-surface evolution and the wave kinematics in the fluid domain. It is released as open-source, developed and distributed under the terms of GPL v3. Introduction The simulation of water waves is an old topic of investigation in naval and offshore hydrodynamics. The knowledge of the incident wave field acting on a structure is important in the computation of loads. The linear description of waves is not sufficient for most realistic cases. An overview of some methods for the solution of regular waves in different conditions is presented in [28]. Regular waves are usually described either by the Stokes theory [29] or the stream function theory, for instance the one presented by Rienecker and Fenton in [26]. The important physical parameters in the context of wave propagation are the linear steepness kH/2 and the relative water depth kh, where k = 2π/λ is the wavenumber, λ is the wavelength, H is the wave height and h is the water depth. A combination of these two parameters H/h or the Ursell number U r = Hλ 2 h 3 = 4π 2 kH (kh) 3 can also be used, especially in the context of reduced water depth. In ocean engineering, the limits of applicability of the different wave theories are typically taken following Le Mehauté's diagram [19] presented in Fig.1. In this figure, d stands for water depth, L for wave length, T for wave period and U R for Ursell number. However, it is well established that the Stokes wave theory is not accurate for very steep waves or for shallow water depths. This perturbation method is not able to provide convergent high-order Fourier coefficients [27,7]. The solution that is usually chosen is consequently to replace the perturbation expansions by a numerical evaluation, solving a nonlinear set of equations. This is assumed to be a more suitable approach for waves close to the wave breaking limit [16]. This enhanced accuracy is particularly important for the detailed physical analysis of such phenomena but also when looking for a reference solution for waves in nonlinear potential flow formalism. For instance, it is necessary to achieve such level of accuracy when propagating waves over a long time (e.g. during 1000 waves periods as presented in [1,10]) or when estimating the accuracy of a numerical model (see [12]). The original works [3,8,2,26] present different numerical solutions of the problem. The most widespread one in the ocean engineering community is probably the one described in details in [26] and simplified in [15]. In [15], the method is described and a Fortran program is provided. It uses a finite Fourier series to reduce the free surface conditions to a set of nonlinear algebraic equations, and then used Newton's iteration method to solve these nonlinear equations. This is the one taken as basis in this work. 
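As a practical note on the physical parameters introduced above, the short Python sketch below estimates the wave number from the linear dispersion relation and evaluates the steepness kH, the relative depth kh and the Ursell number. It is an illustration only, not part of CN-Stream, and the input values are merely an example.

import numpy as np

def linear_wavenumber(T, h, g=9.81, n_iter=200):
    # Solve the linear dispersion relation omega^2 = g k tanh(k h) by fixed-point iteration
    omega = 2.0 * np.pi / T
    k = omega**2 / g               # deep-water initial guess
    for _ in range(n_iter):
        k = omega**2 / (g * np.tanh(k * h))
    return k

T, H, h = 8.0, 14.0, 37.0          # example values (finite water depth)
k = linear_wavenumber(T, h)
kH, kh = k * H, k * h
Ur = 4.0 * np.pi**2 * kH / kh**3   # Ursell number, equal to H * lambda^2 / h^3
print(f"k = {k:.4f} rad/m, kH = {kH:.2f}, kh = {kh:.2f}, Ur = {Ur:.2f}")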
Note that other formulations and approaches exist with the main objective of increasing the accuracy for steep waves close to the wave breaking limit in arbitrary constant water depth. We can cite e.g. [30], the recent work of [32], which presents a numerical method free of any kind of approximation technique, or [6], which provides an efficient algorithm for computing steady surface gravity waves for all wavelength-over-depth ratios. CN-Stream is an open-source stream function model developed at Ecole Centrale Nantes, LHEEA Res. Dept. (ECN and CNRS). The software is available for download and contribution on the GitHub platform [9]. The code is developed and redistributed under the terms of the licence GPL v3. Documentation describing the compilation and execution of the source files is provided along with the source code. This code is one of the open-source wave models developed at Centrale Nantes. Other wave generation codes available on the GitHub platform are HOS-ocean [12] and HOS-NWT [11], respectively for 2D and 3D nonlinear wave generation in open water and in a wave basin, and Grid2Grid [4], which serves for their coupling with CFD (Computational Fluid Dynamics) methods. In the following sections, the stream function theory and the corresponding numerical procedure are briefly presented, together with the improvements proposed and implemented in CN-Stream. Sections describing how to compile and use the code as a library are also provided. Finally, different study cases are presented as typical applications of the numerical model.
Figure 1: Le Méhauté's diagram [19]; d stands for water depth, L for wave length, T for wave period and U_R for the Ursell number.
One of the purposes of this code is to encourage other researchers to use this library in the context of coupling with CFD software for wave-structure interaction modeling. Indeed, in many nonlinear potential flow solvers [12,13] or CFD software packages (SPH [23], WCCH [21], ICARE [24], OpenFOAM [18]), the incident waves are obtained from the stream function theory. Some examples of the reconstructed volume fields used in CFD models are provided.
Stream function method
In this section, the formulation of the problem to obtain the nonlinear solution is presented in a simplified manner. More details can be found in the original work of [26] or [15], taken as basis for the numerical model CN-Stream. Some improvements of the original numerical method are then detailed.
Coordinate system
The wave propagation is solved in a fixed reference frame (O, X, Z) with the origin O taken on the free surface at rest: the horizontal axis X is oriented in the direction of the waves, and the Z axis is vertical upward. The wave solution of the problem is assumed to be periodic both in space and time. The free surface profile is of permanent shape and the wave propagates with a constant phase velocity c. The solution becomes stationary in a moving reference frame denoted as (x, z). The horizontal axis x is oriented in the direction of wave propagation and the vertical axis z is upward with the origin at the free surface at rest. Note that in the original article of [26], the origin of the vertical axis was located on the sea bed; this induces a few changes with respect to this initial work. The exact definition of the different variables (R, Q, b_0) is presented in the following sections.
Equations
In the case of a two-dimensional, isovolume (incompressible) flow, the stream function ψ(x, z) allows for the representation of the velocity field V = (u, w) = (∂ψ/∂z, −∂ψ/∂x).
Furthermore, if the motion is irrotational, ψ satisfies the Laplace's equation in the fluid domain: The free surface elevation is defined as z = η(x) and the different boundary conditions are: • the dynamic free surface boundary condition: with R the so-called Bernoulli constant, • a free-slip condition on the free surface z = η(x) (also known as kinematic free surface boundary condition) • and a free-slip condition on the bottom z = −h. 4 The free-slip boundary conditions are easily written with the stream function, which has the following properties: • iso-lines represent the streamlines, • the variation of the stream function between two streamlines is equal to the flow rate between those lines. The bottom of the domain and the free surface being streamlines when considering the moving reference frame at phase velocity (and consequently permanent elevation), it is chosen to impose at the bottom: As a consequence, the stream function at the free surface is related to the flow rate Q between the bottom and the free surface. This gives: In addition, the free surface presents a zero mean elevation with respect to the definition of the origin of the vertical axis. This is written as: Then, η and ψ can be decomposed with the help of Fourier series in the horizontal plane: with a n and b n the modal amplitudes of the free surface elevation and the stream function respectively. In the moving reference frame (x, z), those are constant for a given wave. This equation satisfies both Eq. (6) and Eq. (8). Equivalently, we can write the horizontal velocity u and the vertical velocity w: and the pressure is defined as: with ρ the water density. Numerical solution 1.3.1. Inputs The numerical solution needs some inputs that will define the wave to be solved. Different choices are possible for its description and the corresponding inputs are: • the wave length λ or the wave period T 5 • the wave height H • the water depth h (finite or infinite) • the value of the current U c that may be of two kinds: i) a Eulerian transport (the reference frame moves with respect to the fixed reference frame) or ii) a fixed mass transport velocity. The inputs can be in dimensional or non-dimensional form. Dimensional wave parameters. In the case of dimensional inputs, the required parameters are given in Tab.1 depending on the water depth and the known wave parameter (T or λ). Non-dimensional wave parameters. In the case of non-dimensional value, we set non-dimensional wave height, water depth and current, denoted respectively H , h and U c . They are defined as follows (Tab. 2), depending if we know/fix as input the period or the wavelength. Note that the linear theory gives the simple following relations between the two sets of non-dimensional parameters for the wave height and the water depth: where k L indicates the wave number obtained from linear dispersion relation, which is consequently slightly different from the exact wave number (see Sec. 2.3). However, for an estimate, we can also set kH 40H and kh 40h . Non-dimensional wave outputs In the case of non-dimensional input values as described in previous section, the outputs are also made non-dimensional. 
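Before moving to the discretization, the period-based non-dimensionalisation mentioned above can be made explicit with a small sketch. The scaling H' = H/(g T^2) and the estimates kH ≈ 4π^2 H' ≈ 40 H' and kh ≈ 40 h' follow from the linear deep-water wave number k_L = 4π^2/(g T^2); the numerical values below are hypothetical and the sketch is not part of CN-Stream.

import numpy as np

g = 9.81
T, H, h = 8.0, 6.0, 100.0                  # hypothetical moderate, deep-water-like case

H_prime = H / (g * T**2)                   # non-dimensional wave height (period given)
h_prime = h / (g * T**2)                   # non-dimensional water depth

kH_estimate = 4.0 * np.pi**2 * H_prime     # ~ 40 * H_prime (linear deep-water estimate)
kh_estimate = 4.0 * np.pi**2 * h_prime     # ~ 40 * h_prime

print(f"H' = {H_prime:.4f}, h' = {h_prime:.4f}")
print(f"kH ~ {kH_estimate:.3f}, kh ~ {kh_estimate:.1f}")

The exact values of kH and kh rely on the wave number actually solved for by CN-Stream, so these linear relations should only be used as first estimates.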
This is dependent on the input: Discretization The free surface elevation can be studied considering its N 2 + 1 values at the collocation points or equivalently by expressing it on N 2 + 1 modes of the Fourier series: For the stream function, its representation in Fourier series is truncated at another number of modes chosen as N 1 + 1: The independent choice of N 1 and N 2 is one of the main difference with the original algorithm [14]. The motivation and implications of this choice will be detailed in Sec. 2. Collocation points x m are defined with respect to the free surface, fixed to a number N 2 + 1 between the crest and the trough of the wave (a vertical symmetry exists on half a wavelength). Previous set of equations is discretized on those collocation points such as Unknowns In all configurations, the unknowns are the modal amplitudes of the stream function b n for n = 0 to N 1 and the free surface elevation η(x m ) for m = 0 to N 2 , the constants R and Q and the phase velocity c. In addition we have: • The wave number k if we specify as input the wave period T , • The wave period T if we specify as input the wave number k, This corresponds to a total number of unknowns of N 1 + N 2 + 6. Equations These unknowns satisfy, at a given accuracy, the following discrete nonlinear equations: • the dynamic free surface boundary condition (Eq. (7)) written at the collocation points, • the kinematic free surface boundary condition (Eq. (9)) written at the collocation points, • the zero-mean free surface elevation, which is written using trapezoidal rule: • the fixed wave height The stream function is built so that b 0 is the mean velocity of the fluid in the reference frame linked to the wave, moving at the phase velocity c. The method allows to take into account the influence of a current of two kinds: • Eulerian transport (the reference frame is moving at a velocity c E with respect to the fixed reference frame). This leads to the following equation: One last equations is needed to close the system, which uses the relationship between k, c, and T : We consequently end up with 2N 2 + 6 equations. Numerical scheme We assume for the numerical solution of the problem that N 2 ≥ N 1 . The system is consequently overdefined with 2N 2 + 6 equations and N 1 + N 2 + 6 unknowns. Initial values for Q and R have to be given to solve the problem. In [26], the particular case N 1 = N 2 is solved iteratively with a Newton-Raphson method, while least square method is used in the present implementation. The different equations to solve are expressed under the form f (η(x m ), b n , c, R, Q, T or k) = 0. The system is linearized at each iteration i to obtain an equation of the form: where A is the Jacobian matrix formed with the derivative of the equations with respect to the different variables, Z i the solution vector and F i an error vector. If absolute errors are retained, the convergence of the solution is determined with thresholds set on If the convergence is controlled with relative errors, those are , where the function "Scale" ensure that the first modes are the one giving the magnitude of the solution. Initial solution The first order Stokes solution was used in [26] as the initial solution. Here we choose to impose the second-order Stokes solution, which gives the free surface elevation as: with σ = tanh kh. The stream function at the free surface is defined as: Q is set to Q = 0 and R to R = −c 2 /2. 
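To make the structure of the iterative solution above concrete, the following toy Python sketch performs the same kind of linearised least-squares update A·ΔZ = −F on a small over-determined nonlinear system. It uses a numerical Jacobian for brevity, whereas CN-Stream forms the derivatives of the actual free-surface equations, and none of the names below belong to the CN-Stream code.

import numpy as np

def gauss_newton(residual, z0, tol=1e-12, max_iter=50, eps=1e-7):
    # Linearise F(Z) = 0 around the current guess, solve A dZ = -F in the
    # least-squares sense (more equations than unknowns is allowed),
    # update Z and iterate until the residual is small enough.
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        f = residual(z)
        A = np.empty((f.size, z.size))
        for j in range(z.size):
            dz = np.zeros_like(z)
            dz[j] = eps
            A[:, j] = (residual(z + dz) - f) / eps   # finite-difference Jacobian
        step, *_ = np.linalg.lstsq(A, -f, rcond=None)
        z = z + step
        if np.max(np.abs(residual(z))) < tol:
            break
    return z

def toy_residual(z):
    # Three equations, two unknowns; the exact solution is (x, y) = (1, 2)
    x, y = z
    return np.array([x**2 + y - 3.0, x + y**2 - 5.0, x * y - 2.0])

print(gauss_newton(toy_residual, [1.0, 1.0]))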
From stream function to velocity potential From the definition of the velocity potential and the stream function, we have the following equalities: and 8 The velocity potential φ is thus defined in the moving reference frame (x, z) as: When going back to the fixed reference frame (O,X,Z), the problem becomes non-stationary. We remind that capital letters refer to the fixed reference frame, while small letters refer to the moving one, with the following change of coordinates: In the fixed grid the elevation η and the velocity potential φ are thus defined as: And the slope used in section 2.3.2 is simply defined as: The horizontal velocity U and vertical velocity W are thus written as: and the pressure P: Using: it comes: Remarks For some applications (see e.g. [12]), the dynamic free surface boundary condition Eq. (7) is written in terms of the velocity potentialφ(X, Z, t)) under the following form: This equation differs from Eq. (7) in terms of the gauge condition imposed to uniquely define the velocity potential, see [5]. The velocity potential φ(X, Z, t) does not satisfy the new dynamic boundary condition Eq.(44), leading to the definition of another velocity potential, namelyφ. The latter has to satisfy the following equation Note that to keep the spatial periodicity of the potential in the x-direction, the following condition needs to be satisfied: This condition is satisfied if the Eulerian velocity c E is taken equal to zero. Increments in wave height When considering waves very close to the wave breaking limit, it appears that the numerical procedure may have some difficulty to converge toward a proper solution. In order to overcome this issue, the solution is looked for as an iterative process on the target wave height H. The idea is to increase gradually the height of the non-linear wave, toward the final target one. At the end of one iteration, the non-linear solution for a given wave height is taken as the intial solution for the next iteration (i.e. a higher wave height). This allows to find an accurate solution for non-linear waves very close to the wave breaking limit, as detailed in Sec. 2.2. The necessity of such procedure is actually related to the fact that: i) the choice of the number of modes N 1 and N 2 should be adequate to the simulated wave and ii) the second order solution is not accurate enough for highly non-linear wave. The solution procedure needs an initial guess close enough to the fully non-linear solution to be convergent. Then, the user can specify as input the number of steps in the wave height (variable n H of the input file), together with an increment type for these wave heights, which is either linear or exponential. As a summary, the different successive wave heights are defined as follows with H t the target wave height, n H + 1 the number of steps and i H ∈ [1, n H + 1] the index of the iteration: • Exponential increment: Automatic evaluation of N 1 and N 2 When solving the problem, an automatic evaluation of the optimal number of collocation points (or equivalently of the number of modes) is performed. Together with the independent choice of the number of modes for the descritpion of the stream function (or eq. velocity potential) N 1 and free-surface elevation N 2 , these represent the main enhancements of the present numerical solution compared to the original one of [26]. This routine is called after the solution of the linear system (achieved with a least square method) which uses specific numbers of modes N 1 and N 2 (see Sec. 1.3.6). 
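Before detailing the automatic adjustment of the numbers of modes, the wave-height stepping described above can be sketched as follows. The exact linear and exponential increment formulas used by CN-Stream are not reproduced here, so the two schedules below (a linear ramp and a geometric progression from an arbitrary starting fraction of the target height) are assumed forms for illustration only.

def height_schedule(H_target, n_H, kind="linear", start_fraction=0.1):
    # Returns the successive heights H_1 .. H_(n_H+1), with H_(n_H+1) = H_target (n_H >= 1 assumed)
    steps = range(1, n_H + 2)
    if kind == "linear":
        return [H_target * i / (n_H + 1) for i in steps]
    # "exponential": assumed geometric progression from start_fraction * H_target
    H_start = start_fraction * H_target
    ratio = (H_target / H_start) ** (1.0 / n_H)
    return [H_start * ratio ** (i - 1) for i in steps]

for H in height_schedule(15.0, 4, kind="exponential"):
    print(round(H, 3))   # each height is solved in turn, restarting from the previous solution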
Then, the number of modes is adjusted with the procedure described hereafter, leading to a new linear system (solved as in the previous step) until the convergence criteria on the choice of the number of modes is reached. The algorithm consists in adapting the value of the number of modes for the description of the stream function (or eq. velocity potential) N 1 automatically so that the amplitude of the last mode is smaller than the target accuracy provided by the user, denoted N1 . The procedure is depicted in Fig. 2 and follows the main steps: • Solve the problem with an initial set of values for N 1 and N 2 • Look at the modal amplitudes a n deducing the efficient number of modes N 1 ef f satisfying abs(a N1 ef f ) < N1 . Then, three configurations possible: • If N 1 changed, the number of modes N 2 of the elevation is deduced from N 1 by an empirical formula calibrated in section 2.3.3: The procedure is stopped when the solution of the problem is achieved at the target accuracy (iterative solution of the linear system) and when the number of modes is unchanged in the previous algorithm, meaning it is optimal for the current configuration. Results This section presents different results obtained with the CN-Stream code. The objective of this part is to detail the numerical properties of the method and especially to demonstrate the relevance of the enhancements proposed. The highest waves accessible with the current method are also provided explicitly as a matter of completeness. In addition, different applications of the CN-Stream model to the study of non-linear regular waves are presented. In the text some references are done to the parameters names in the input files, which are further described in section 3.3.1. Some examples: Modal description of quantities In this paragraph three different wave conditions are simulated corresponding respectively to infinite, finite and shallow water depths. The corresponding wave parameters are given in Tab.3. For each wave condition, the elevation and the slope are presented as a function of the phase kx, as well as the modal amplitudes of the elevation and velocity potential. Then, the maximal steepnesses available for different water depths are presented. Table 3: Wave parameters for the three studied conditions. Infinite water depth In Fig.3, an example of a wave propagating over an infinite water depth with a wave period T = 8s and a wave height H = 15m is presented. The wave surface elevation and the slope are shown as well as the modal amplitudes of the free surface elevation η and the velocity potential φ. For such high steepness kH = 0.80, the well-known non-linear features of the free surface elevation are recovered, namely a strong asymmetry between the crest and the trough, together with large value of the local steepness. It is also clear from the modal description that the necessary number of modes is different for η and φ due to a different convergence rate of the modal amplitudes. Thanks to the proposed enhanced algorithm, one can reach an accuracy on the amplitude of the mode of N1 = 10 −12 (defined as relative error). As a matter of comparison to the original stream function model [26], the same algorithm is applied, fixing the same number of modes for the two quantities (i.e. N 1 = N 2 ). The results are depicted in Fig. 4. 
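The adaptive choice of N_1 and N_2 outlined above can be summarised by the following structural sketch. Here solve_wave stands in for the least-squares solution of the nonlinear system and is replaced by a fake decaying spectrum, and the ratio r linking N_2 to N_1 is a placeholder for the empirical formula calibrated in Sec. 2.3.3.

def adapt_modes(solve_wave, N1=10, r=3, eps_N1=1e-12, max_outer=20):
    # Grow or shrink N1 until the last retained modal amplitude falls just below eps_N1
    # (monotonically decaying amplitudes are assumed), and deduce N2 from N1.
    for _ in range(max_outer):
        N2 = r * N1
        amplitudes = solve_wave(N1, N2)          # modal amplitudes of the converged solution
        N1_eff = next((n for n, a in enumerate(amplitudes) if abs(a) < eps_N1), None)
        if N1_eff is None:                       # last mode still too energetic: enrich
            N1 += max(1, N1 // 2)
        elif N1_eff < N1:                        # more modes than needed: shrink
            N1 = max(N1_eff, 1)
        else:                                    # converged
            return N1, N2
    return N1, r * N1

# Demo with a fake spectrum decaying by a factor 2 per mode
fake_solver = lambda N1, N2: [0.5**n for n in range(N1 + 1)]
print(adapt_modes(fake_solver))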
The free surface looks the same than previously in the spatial domain, but even if the modal description of η and φ are still convergent, the level of accuracy is reduced compared to the enhanced stream function model. The results of Fig. 4 are actually the highest accuracy (i.e. smallest amplitude of highest mode) one can possibly reach when using N 1 = N 2 . The amplitude of the smallest mode for the decription of the free surface elevation is now = 2 10 −5 to compare with N1 = 10 −12 in the previous configuration. The accuracy is actually limited by the fact that if one increases the number of modes for the description of η, the consequent increase in the description of φ may create some numerical instabilities. As an example, Fig. 5 depicts the initiation of such process for a regular wave in infinite depth with a smaller wave height (T = 8s and wave height H = 11m). For this wave steepness, one can reach a relative amplitude of the smallest mode = 5 10 −11 . It is clearly seen that the decrease of the modal amplitudes of the velocity potential reach a plateau after the mode number 16 − 17. These highest modes, which do not decrease in amplitude any more are responsible of the enhanced behaviour observed of CN-Stream compared to original implementation of [26]. This comes from the involved spatial derivatives of the quantities, corresponding to a multiplication by k in the modal space that will induce a non convergent Fourier description of the corresponding quantity. In Fig.6, an example of a non-linear regular wave propagating over a finite water depth (h = 37m) with a wave period T = 8s and a wave height H = 14m is presented. The wave surface elevation and the slope are shown as well as the modal amplitudes of η and φ. The last example in this part deals with a wave propagating over a the same water depth than previous one (h = 37m) but with a significantly longer wave period T = 25s. This corresponds to a shallow water wave configuration (kh = 0.48) and a wave height H = 10m. The wave surface elevation and the slope are shown as well as the modal amplitudes of η and φ in Fig.7. The physical effects associated to the shallowness of the water depth are now clear in this configuration with very clear assymetries between crest and trough in the horizontal and the vertical directions. In terms of modal representation, it is interesting to note that when going to shallower water depth, the decrease rate of the modal amplitudes of η and φ becomes closer one with the other. As a consequence, the necessary number of points to reach the target accuracy N1 = 10 −12 is now N 1 = 25 and N 2 = 49. Limiting waves It appears interesting for the user to have an idea of the waves that can be computed with CN-Stream. We remind that the important physical parameters are the steepness kH and the relative water depth kh (or a combination of these two parameters such as height to depth ratio H/h or the Ursell number U r = Hλ 2 h 3 = 4π 2 kH (kh) 3 ). The results are dependent of the numerical parameters. The following parameters are used in the present section (in brackets the corresponding input file option, see section 3 for details): • n H = 100 (option: n_H) Limits to the existence of waves have been first parametrized by [22]. He proposed a simple formula for the maximal steepness that can be computed given by: for a large range of depths h. 
This equation (51) has been validated thanks to experimental and numerical data and takes now the following form: It should be noted that in very shallow water depths (kh → 0), this equation (52) This formula presents the advantages to accurately treat the following limiting cases: • infinite depth and kH lim = 0.885, • solitary wave in very shallow water depth H lim /h = 0.833 (see for instance [17]) The two preceding equations (52) & (53) use non-dimensional quantities with respect to the wavelength λ. It can also be useful to non-dimensionalize the quantities by the period, as presented in Le Méhauté's diagram (see Fig. 1, Fig. 9 and [19]). This diagram presents the limit in terms of wave height H lim /(gT 2 ) as function of the relative water depth h/(gT 2 ). Figure 8: The region in which solutions for steady waves can be obtained with CN-Stream (dots representing the highest wave accessible H lim for given input parameters). Comparison to the theoretical formulas of [22] and [16]. Input is the wavelength. As a summary, with the chosen high level of accuracy, those results demonstrate that the CN-Stream code allows the simulation of non-linear regular waves up to waves close to the breaking limit. If one intends to simulate even higher waves, the acceptable level of error needs to be reduced. Le Mehaute CN_Stream Figure 9: The region in which solutions for steady waves can be obtained with CN-Stream (dots representing the highest wave accessible H lim for given input parameters). Comparison to the theoretical formulas of [19]. Input is the period. Nonlinear effects This section is dedicated to the study of some of the non-linear features associated to regular water waves. These are useful in the definition of some properties for the numerical solution. Influence on the wavelength Infinite water depth. In infinite water depth, the only non-dimensional parameter characterizing the wave is the steepness. Figure 10 (left) shows the evolution of the wavelength with the "real" slope (measured as the maximum of the slope |∂η/∂x| over the wavelength). A good agreement is found with the third-order formula: with ka = kH 2 = max |∂η/∂x|, until ka 0.3. The evolution of the two different definitions for the steepness (kH/2 and maximum slope) as a function of the non-dimensional height H = H/gT 2 is also provided in Fig. 10. It appears that the steepness defined as the maximum slope as an almost linear dependence with H over the whole range of existence of the wave (except for the most extreme ones), while kH/2 exhibits a more complex evolution, which is linear only for waves with moderate steepness. All water depths. Then, Fig.11 shows the non-linear evolution of the wavelength as a function of the maximal wave slope. The whole range of depths is covered from shallow water depths to infinite water depths, as shown in Tab. 6. For small slopes, the increase of the wavelength is more important for small relative water depths. For larger slopes, the modification of the wavelength does not exhibit a specific trend with the relative water depth anymore, even if the shallower water depth seems to always exhibit the largest increase in non-linear wave length. Note that depending on the relative water depth, the maximum slope observed for the steepest wave com-puted is varying in the range max |∂η/∂x| ∈ [0.40; 0.48]. Similarly, we observe that the maximal modification in wave length is in a small range [13%; 16%]. The non-linear modification of the wavelength is consequently moderate. 
There is thus no explicit need to use the non-linear wavelength when computing the non-dimensional parameters such as kh and kH. Figure 12 presents the wave elevation obtained for the maximal slope at various water depths. As expected, the crest-trough asymmetry is enhanced when reducing the relative water depth (both in terms of amplitude and relative length). The numerical solution of CN-Stream in small water depth recovers the cnoidal wave features.
Maximal slope
Infinite water depth. It is interesting to compare the various definitions of the steepness (the linear steepness kH/2 and the maximal slope) as a function of the relative wave height H' = H/(g T^2). The following relationship is expected: max |∂η/∂x| ≈ kH/2 = 2π^2 H', with 2π^2 ≈ 19.7. From Fig. 10 (right) we observe that max |∂η/∂x| ≈ 19 H' for all H'. It means that H' is a very good measurement of the wave slope non-linearity for the infinite water depth case. The linear steepness kH/2 moves away from the maximal wave slope as soon as H' > 0.015.
All water depths. The evolution between the slope and H' is presented in Fig. 13 for different water depths. One can observe that the relative water depth k_L h = 3 (h' = 0.08) already corresponds to the infinite water depth: results for larger water depths are superimposed on those obtained at k_L h = 3. This corresponds to the usual definition of waves considered as deep-water when h/λ > 0.5. For an infinite water depth (see paragraph above), we observed that H' was proportional to the maximal wave slope for all wave steepnesses. Here, Fig. 13 shows that for a shallow water depth, the relationship between H' and the steepness is linear only for small slopes. Thus the parameter H' is not a good measurement of the wave non-linearity in shallow water. Indeed, if the value of H' is multiplied by 2, the maximal wave slope is multiplied by a factor larger than 2, showing that the wave non-linearity increases faster than the wave height in shallow water depths. It is also observed in Fig. 13 that when varying the wave height, the value of the maximal steepness varies between 0.43 and 0.6, whatever the depth h'. The wave steepness is thus a good indicator of the non-linearities, even if the maximal slope is the most relevant one, as noticed in Fig. 11.
N_1 and N_2
As previously, the whole range of relative water depths was covered from shallow to deep water, as presented in Tab. 6. In this section, the following numerical parameters have been used: In order to achieve the convergence on the amplitude of the modes (input parameter option: eps_N1), the numbers of modes N_1 and N_2 are plotted as a function of the slope in Fig. 14. It can be observed, as expected, that an increased number of modes is necessary when the slope increases. This is associated with the need for a larger number of modes in shallow water depth than in infinite depth at a given slope. For instance, for the maximal slope achievable, 50 (200) modes for φ (η) are necessary in shallow water depth and 20 (60) in infinite depth. As a matter of simplification of the numerical procedure, Fig. 15 shows the evolution of the ratio N_2/N_1 as a function of the slope. We observe that this ratio N_2/N_1 is almost constant for any water depth. This allows us to extract the following relationship between these two numbers of modes: This reduces to only one parameter the procedure for an automatic choice of the number of modes, as described in 1.5.2.
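Coming back to the two steepness measures compared above for infinite depth, the relations kH/2 = 2π^2 H' and max |∂η/∂x| ≈ 19 H' are easy to evaluate; the small sketch below does so for an arbitrary deep-water example and is purely illustrative.

import numpy as np

g = 9.81
T, H = 8.0, 11.0                              # deep-water example (also used for Fig. 5)
H_prime = H / (g * T**2)

linear_steepness = 2.0 * np.pi**2 * H_prime   # kH/2 with the linear deep-water wave number
max_slope_fit = 19.0 * H_prime                # empirical fit of Fig. 10 (right)

print(f"H' = {H_prime:.4f}")
print(f"linear steepness kH/2 ~ {linear_steepness:.3f}")
print(f"fitted maximum slope  ~ {max_slope_fit:.3f}")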
Kinematics and pressure inside the domain This final section presents some examples of velocity and pressure fields obtained with CN-Stream. This illustrates the ability of the numerical model to provide information about the incident wave field in view of possible coupling with CFD software for wave-structure interaction modeling. Finite water depth The finite water depth case presented in Tab. 3, along with the option parameters used in Section 2.3.3, is computed and a reconstruction of the volume fields is performed, as presented in Fig. 16. The horizontal velocity appears highly non-linear with large differences between the values in the crests and in the troughs (max(U) ≈ 9 m/s and min(U) ≈ −3 m/s). Similarly, the dynamic pressure field exhibits larger absolute values in the crests than in the troughs (the difference is around 80 %). Shallow water depth The shallow water depth case presented in Tab. 3, along with the option parameters used in Section 2.3.3, is computed and a reconstruction of the volume fields is performed, as presented in Fig. 17. The non-linear features observed previously at a larger relative water depth are further enhanced with the reduced water depth. The strong asymmetry in the free surface profile (both in horizontal and vertical directions) is also observed in both the velocity and the pressure field. The necessary use of fully non-linear potential wave theories in the context of highly non-linear waves, close to the wave breaking limit, is clearly demonstrated. The wave kinematics and induced pressure fields are strongly influenced by the wave non-linearity. Program documentation CN-Stream is a computational program written in the Fortran language. It can be compiled as an executable file for the study of specific wave problems, with inputs and dedicated outputs detailed in the following sections. It can also be used as a static library, which can easily be linked to other numerical models with the objective of, for instance, solving the problem of wave-structure interactions. RF_solve_auto manages the automatic calculation of the numerical parameters in use in CN-Stream. This contains the specific enhancements proposed in the code as detailed in Sec. 1.5. RF_solve_iterate manages the iterations in the solution procedure and the corresponding stopping criteria relative to the errors/tolerances (minimum amplitude of the modes, inversion of the system, etc.) as well as the maximum number of iterations. RF_solve is the effective solution of the linear system of equations described previously: it is the core of the original stream function procedure. Note that for internal communications and library use in other Fortran programs, CN-Stream uses Fortran types to reduce the number of passed arguments. An example is also provided to link the library with a C++ program (in particular OpenFOAM); in this case the library is interrogated to provide flow quantities at a certain position and time. Project organisation and dependency The main folder consists of: • CMakeLists.txt • example Folder with examples of using the library through the communication module from Fortran and C++ • src Folder with source files (including also the sources of libFyMc) • input Folder with input file example • output Default output folder The code uses the library libFyMc to read the "dictionary" input file. The library is provided with the sources.
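To give a rough idea of the kind of reconstruction performed when the library is interrogated for flow quantities (as in the C++/OpenFOAM coupling mentioned above), the sketch below evaluates the free surface elevation and the horizontal velocity from a truncated set of modal amplitudes. It uses a generic finite-depth Fourier representation with placeholder amplitudes; the actual normalization and phase conventions of CN-Stream's a_n and b_n may differ.

import numpy as np

def free_surface(x, b, k):
    # Generic Fourier reconstruction of the free surface elevation from modal amplitudes b_n.
    n = np.arange(1, len(b) + 1)
    return np.sum(b[:, None] * np.cos(np.outer(n, k * x)), axis=0)

def horizontal_velocity(x, z, a, k, h, c=0.0):
    # Generic finite-depth reconstruction of u(x, z) from potential modes a_n, using
    # cosh(nk(z+h))/cosh(nkh) vertical profiles; the normalization is purely illustrative.
    u = np.full_like(x, c, dtype=float)
    for n, a_n in enumerate(a, start=1):
        u += a_n * n * k * np.cosh(n * k * (z + h)) / np.cosh(n * k * h) * np.cos(n * k * x)
    return u

k, h = 2 * np.pi / 100.0, 5.0       # illustrative wavenumber and water depth
a = np.array([1.0, 0.1, 0.01])      # placeholder modal amplitudes (not CN-Stream output)
b = np.array([0.8, 0.15, 0.03])
x = np.linspace(0.0, 100.0, 201)
eta = free_surface(x, b, k)
u_bottom = horizontal_velocity(x, -h, a, k, h)
print(float(eta.max()), float(u_bottom.max()))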
CN-Stream -variables and types The different Fortran types (RF_type, option_type, output_type) are defined explicitly in variables_CN_Stream.h and variables_output_CN_Stream.h and are included when needed in CN-Stream. This allows the user to include them easily not only in CN-Stream but also in another code which makes use of the CN-Stream library. In more detail, those types include: • RF_type -definition of the parameters of the wave, corresponding to the input parameters specified in the input file as detailed in Sec. 3.3.1 -Modal amplitudes of the free surface elevation η and of the velocity potential φ (or equivalently the stream function ψ). -If needed, free surface elevation and slope in the spatial domain. • option_type: all options relative to the solution method, specified in the input file as detailed in Sec. 3.3.1. This type also includes the optimal numbers of points N1 and N2 resulting from the procedure detailed in Sec. 1.5.2. • output_type: defines for one location (X, Y, Z) the free surface elevation, the pressure and velocity components together with the necessary time and/or spatial derivatives of those components. The possible existence of a Y-component is associated with the definition of an angle of propagation as input, referenced as θ. CN-Stream -main program The set of Fortran files needed in order to compile CN-Stream is listed in Tab. 7 with a brief description of the purpose of each of the source files. CN-Stream -library For the possible use of CN-Stream as a library in another program, the set of Fortran files is similar to the one described in the previous subsection. There are two ways to use CN-Stream as a library. • Use the declarations of the variables of variables_CN_Stream.h and variables_output_CN_Stream.h and call the functions described in the source file lib_CN_Stream.f90. • Use the subroutines indicated in the communication module mod_CN_Stream.f90. Compilation The code can be compiled on any computer architecture. One only needs a Fortran compiler (for instance gfortran, the GNU Fortran compiler, part of GCC). A makefile is provided but the recommended procedure is to use cmake. The following commands can be executed in the root folder where CMakeLists.txt is located, to compile the dependency, the executable and the shared library: • cmake -H. -Bbuild • cmake --build build Compilation has been tested with gfortran on different Unix/Linux platforms as well as in a Windows environment. For the Windows environment, compilation using Intel Visual Studio has also been tested. The program is provided with the corresponding project file CN_Stream.vfproj allowing a straightforward compilation of the code. Running CN-Stream CN-Stream has been developed for command-line use with an input file located in the input folder containing all the specifications needed. All output files will be created in the directory output, but other specifications can be given. Details of inputs and outputs are provided hereafter. The executable can be run with the command ./mainCNS. The name of the dictionary can be specified as an argument. Inputs CN-Stream needs as input the characteristics of the wave, together with some information relative to the numerical solution of the problem (target accuracy, etc.). The wave can be described in dimensional or non-dimensional form. As a matter of clarity, the wave parameters to provide as input are detailed in the next paragraphs depending on the needs of the user.
Note that those parameters are provided within an input file whose content is also detailed below. In CN-Stream, the non-linear regular water wave is characterized by: • the water depth h, possibly infinite, • the wavelength λ or the wave period T, • the wave height H (distance from crest to trough), • the constant current superimposed on the wave (in the form of an Eulerian current or a given transport of mass). Input file. The input file is assumed to be named CN_Stream_input.dict. Table 8 describes the different parameters accessible in this input file. The Options_solver parameters are useful for an advanced user, in order to obtain solutions with a controlled accuracy and/or to look for waves close to the wave breaking limit. Output files Depending on the choices made in the input file (see Tab. 8), different output files are created. They are located at the root of the folder. The input file also defines whether outputs are dimensional or non-dimensional quantities. The following files may be created: • waverf.cof gives the main parameters of the simulation, namely λ, H, k, T, c, c_S, c_E, N1 + 1, N2 + 1, R, h (in dimensional or non-dimensional form depending on the value of the input GeneralDimension in the input file) as well as the modal amplitudes a_n and b_n, • waverf.dat gives the modal amplitudes a_n and b_n. In addition, different subroutines may be called to write the necessary outputs needed by the user. They are available inside the source files and a simple call in the main program will enable the corresponding outputs: • WriteOutput: this subroutine creates the file resultsOutput.txt containing, at a given location and time, all spatial quantities computed by CN-Stream (free surface elevation, velocities, pressures, derivatives, etc.). • TecplotOutput_Modes: this subroutine creates the file Modes_CN_Stream.dat containing the modal description of the free surface elevation and velocity potential, for use with Tecplot. • TecplotOutput_VelocityPressure: this subroutine creates the file VP_card_fitted.dat containing the velocity and pressure field under the simulated wave, for use with Tecplot. • TecplotOutput_FreeSurface: this subroutine creates the file FreeSurface_CN_Stream.dat, which provides the free surface elevation and slope. (In the example input file, the waveInput entry carries the label waveStream and defines the characteristics of the simulated wave.) Conclusions CN-Stream has been developed to compute non-linear regular ocean waves with a high level of accuracy. The model is limited to arbitrary constant water depth and non-breaking waves. CN-Stream is an open-source code, redistributed under the terms of the GNU GPL v3 License as published by the Free Software Foundation. It is available through the GitHub platform [9]. Along with the source code, a Wiki documentation is available, which makes the compilation of the source files and the execution easy. It can be used as a stand-alone binary or as a library to be included in another program. The code is based on the stream function theory and the original works of [26] and [15] have been taken as a basis. Some enhancements are proposed in the current implementation, namely: i) a possibly different number of modes to represent the free surface elevation and the stream function (or equivalently the velocity potential) and ii) an automatic calculation of the optimal number of collocation points (or equivalently of the number of modes) to reach a target accuracy.
It has been demonstrated that these allow an increased accuracy of the numerical solution, together with an extended domain of application with respect to the maximum wave height accessible. Different examples of applications of the model are provided, which demonstrate the importance of the non-linear effects in the description of regular waves. The free surface profiles are analyzed over a wide range of steepnesses and relative water depths, together with the kinematics and pressure fields.
2019-01-29T22:00:20.000Z
2019-01-29T00:00:00.000
{ "year": 2019, "sha1": "145b384ce86ecf13b03fc63b466020ead8df9448", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "145b384ce86ecf13b03fc63b466020ead8df9448", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
52961031
pes2o/s2orc
v3-fos-license
Five-year effectiveness of short messaging service (SMS) for pre-diabetes Objective An observational post-randomized controlled trial (RCT) design was adopted to evaluate the long-term sustainability and maintenance of improved glycemic control, lipid profile, and reduced progression to diabetes at 3 years following a 2-year short messaging service (SMS) intervention. We performed a naturalistic follow-up of the 104 participants of the SMS intervention, a 2-year randomized controlled trial comparing SMS to non-SMS care for pre-diabetes. All participants were scheduled for diabetes screening at the 5-year assessment. The primary outcome of this post-RCT study was the cumulative incidence of diabetes, whereas secondary outcomes were the changes in biometric data over a 5-year period. Results After a mean 57-month follow-up, 19 (18.3%) were lost to follow-up after the RCT period. Progression to diabetes occurred in 20 and 16 patients in the intervention and control groups respectively, with no significant between-group difference (8.06 and 7.31 cases per 100 person-years, respectively; hazard ratio in the intervention group, 1.184; 95% confidence interval, 0.612 to 2.288; p-value = 0.616). No significant effect of SMS on the reduction of diabetes was observed overall or in pre-defined subgroups. The SMS intervention preserved the clinical benefits within the trial period but failed to transform from treatment efficacy to long-term effectiveness beyond 2 years after intervention. Trial registration ClinicalTrials.gov Identifier NCT01556880, retrospectively registered on March 16, 2012 Introduction Diabetes mellitus (DM) is a global epidemic issue with an age-specific prevalence of 8.3% [1]; it remains undiagnosed in 45.8% of all DM cases [2] and is likely to result in both cardiovascular and non-cardiovascular morbidity and mortality. Prediabetes is a precursor stage before DM, in which abnormal glucose regulation, including impaired glucose tolerance (IGT) and/or impaired fasting glucose (IFG), is observed. It was reported that approximately 5-10% of people with prediabetes progress to diabetes annually [3]. The identification of efficient and effective interventions for DM prevention is imperative for reducing the disease and economic burden attributable to DM and its complications. Thus, interventions are targeted to halt the progression of prediabetes to diabetes. Different forms of treatment modalities are available for DM prevention among patients with pre-diabetes [4]. Effective interventions aiming at preventing DM include pharmacological interventions with oral antidiabetic drugs [5,6], non-pharmacological lifestyle interventions with intensive training in diet and physical exercise [7], and more aggressive surgical interventions such as bariatric surgery [8]. Despite American Diabetes Association (ADA) [9] and National Institute for Health and Care Excellence (NICE) [10] guidelines advocating intensive lifestyle modification programs for high-risk individuals with pre-diabetes, such programs have not been routinely implemented in many clinical practice settings. Landmark multicenter randomized controlled studies (RCTs) such as the Diabetes Prevention Program (DPP) [11] and the Diabetes Prevention Program Outcomes Study (DPPOS) [12] in the US, the Da Qing Diabetes Prevention Program in China [13,14], and the Finnish Diabetes Prevention Study [15][16][17] demonstrated the long-term effectiveness of lifestyle modification interventions on diabetes prevention among IGT patients.
Collective evidence from systematic reviews [18,19] illustrated that lifestyle interventions, compared to placebo or control groups, were associated with a significant reduction in the relative risk of DM, despite the heterogeneity in lifestyle programs and in the duration of study and follow-up in those trials. More importantly, based on the long-term post-trial data from the DPP and DPPOS, participants in the lifestyle intervention reduced diabetes incidence significantly more than those on metformin [12]. Therefore, non-pharmacological lifestyle modification intervention is recognized as a first-line treatment modality for DM prevention. Lifestyle interventions in those with IGT were not only effective but also highly cost-effective in the long term [20,21]. The evolution of technology overcomes challenges and barriers to delivering the core contents of lifestyle modification through cellular phones and other electronic devices [9]. Ranging from reminder systems via short messaging service to tele-consultation, telemedicine strategies are helpful and useful, especially for patients who have difficulties in traveling to health care facilities due to long distances or disabilities [22]. The effectiveness of telemedicine in the management of diabetes has already been confirmed by two systematic reviews [22,23], both of which found that telemedicine interventions significantly reduced haemoglobin A1c (HbA1c) of diabetic patients compared with usual care. Short messaging services (SMS) via cellular phones serve as a mode of knowledge delivery and an effective means of enhancing lifestyle modification. Our within-trial report [24] indicated that the SMS intervention had beneficial effects on diabetes prevention at 12 months, but the protective effects were attenuated at 24 months. With regard to its cost-effectiveness, the SMS intervention was considered cost-saving when compared to the control group [25]. The objectives of this post-trial report were to observe glycemic control, blood pressure, waist circumference, weight, and body mass index (BMI) levels after cessation of the SMS trial, to determine the long-term impact of the SMS intervention on the diabetes outcome, and to evaluate the effectiveness of SMS for patients with pre-diabetes at 5 years. Main text Study design and protocol have previously been described elsewhere [24]. In brief, 104 participants with pre-diabetes (i.e., IFG and/or IGT) who were accessible and received Chinese text messages by mobile phone were recruited from a project to screen for pre-diabetes and undiagnosed DM in Hong Kong. IFG was defined as a fasting plasma glucose level of 5.6-5.9 mmol/L. IGT was defined as a fasting plasma glucose level of < 7.0 mmol/L and a 2-h post-load plasma glucose (2HPPG) of 7.8-11.0 mmol/L after a 75-g glucose load according to World Health Organization criteria [26]. Subjects were excluded if they had a history of DM, were on medicines known to alter glucose tolerance, were unable to read Chinese characters, or refused to take part in the study. All 104 participants were randomized either to the 2-year SMS intervention or to usual care without SMS reminders delivered by our research team, and were given booklets with information on pre-diabetes and diabetes by the research nurse. In the intervention group, text messages were sent three times a week, once per week, and once per month within the first 3 months, the second 3 months, and the subsequent 18 months, respectively. At the trial end, diabetes onset in the SMS group was reduced by 38% when compared with the control group [24].
After a mean 57-month follow-up (range 10-82 months), all participants were approached for consent to take part in this post-trial study from September 2015 to April 2016. Electronic medical records were retrieved to obtain event diagnoses and anthropometric and blood measurements for those who had clinical readings and detailed events recorded within 1 year of the assessment. For those whose records fell beyond one year of the assessment, the research team arranged health examinations to obtain anthropometric and blood measurements. Ethics approval for this post-trial study was obtained from the Institutional Review Board of the Hong Kong East Cluster of the Hospital Authority. Statistical analysis Descriptive statistics were used to show the distribution of socio-demographic characteristics, occupational profile, lifestyle, and clinical history, and to summarize the biometric data (weight, BMI, waist circumference, blood pressure, lipid profile) of the SMS intervention and control patients. Differences between the two groups were compared by Chi-square tests for categorical variables and independent t-tests for continuous variables. Biometric data were analysed according to the intention-to-treat principle. Missing values for subjects who were lost to follow-up (i.e., defaulted or withdrew) were imputed with the last observed value carried forward. Sensitivity analysis was performed on complete cases for biometric data. Repeated measures analysis of variance was conducted to assess differences in biometric data over time and their interactions between groups. The primary outcome of this observational study was the DM incidence. Kaplan-Meier estimates were used to calculate the cumulative proportion of patients who had a DM event (i.e., fasting glucose level ≥ 7.0 and/or 2HPPG ≥ 11.1 mmol/L). The hazard ratio (HR) of the SMS intervention was estimated by Cox regression using all 104 patients. Repeated analyses considering 14 prespecified subgroups (based on age, gender, shift work, regular exercise, family history of DM, history of high blood pressure, and BMI at baseline) were done to assess the heterogeneity of treatment effects. The incidence rates of DM in the overall sample and the pre-specified subgroups were reported. All statistical analyses were performed using STATA Version 13.0 (StataCorp LP, College Station, TX). All significance tests were two-tailed and findings with a p-value less than 0.05 were considered statistically significant. Results At the trial randomization stage, 104 subjects were randomly assigned to either the SMS group or the control group (Fig. 1). The majority of subjects in both groups were male (90.7% and 96.0%). At the 60-month follow-up, 86 subjects (65 who had completed the 24-month follow-up and 21 who had withdrawn at previous follow-ups) completed the assessments, whereas 21 subjects (14 in the SMS and 7 in the control group) had developed DM. The number of subjects progressing from pre-diabetes to DM was 36 (34.6%) over the 60 months. Table 1 shows the effect of the SMS intervention on the change in the level of biometric data. No significant interactions between treatment group and time were found for any biometric factors. There were significant mean differences in weight, BMI, waist circumference, total cholesterol (TC), high density lipoprotein-cholesterol (HDL-C) and low density lipoprotein-cholesterol (LDL-C) over time (p-value = 0.005; 0.007; 0.006; < 0.001; 0.014; < 0.001) in the intention-to-treat analysis.
However, the mean changes in systolic blood pressure (SBP), diastolic blood pressure (DBP), triglyceride (TG) and DM risk score were not significantly different between groups over time. The interaction effect between groups and time on those changes was not significant. No significant effect of the SMS intervention on the cumulative incidence of DM was observed in the overall sample (Table 2). In addition, there were no significant interactions among the pre-specified subgroups. The HRs in the subgroups aged 65 or above and female were not applicable, as the occurrence of DM events in the intervention and control subjects aged 65 or above was equal (1 vs. 1), while there was no occurrence of DM events in the control subjects (= 0) in the female subgroup. Discussion This post-RCT report examined the long-term effectiveness of a 2-year cellular phone-based SMS intervention in patients with pre-diabetes. One of the principal findings was that the immediate outcomes and the diabetes outcome were highly comparable at the end of follow-up. Although the SMS intervention was effective in reducing DM events during the 24-month trial period, the reduction in DM events by the SMS intervention was attenuated at 3 years after the cessation of the trial. The small difference in cumulative DM incidence averted by the SMS intervention failed to result in long-term benefits in DM prevention at 60 months of follow-up. The phenomenon of a 'legacy effect' of the SMS intervention on the DM outcome was not observed for the current intervention for pre-diabetes. Unlike the present study, pragmatic RCTs such as the DPP [12], the Da Qing Diabetes Prevention Program in China [13,14], and the Finnish Diabetes Prevention Study [17] demonstrated a legacy effect of lifestyle modification on the prevention of DM in pre-diabetes over a decade after the intervention. The durability of the protective effects of the SMS intervention was one of the key determinants of relative effectiveness, which was influenced by the frequency of text messaging and the duration of the intervention. However, whether an intensification of the intervention, such as increased messaging frequencies or an extended duration of the intervention, could achieve a long-term clinical benefit remained uncertain. Furthermore, advanced two-way interactive platforms such as internet-driven social networks are alternative means of delivering lifestyle modification contents [9]. Whether those electronic platforms are effective and cost-effective vehicles for delivering lifestyle modification materials for DM prevention in comparison to SMS and traditional face-to-face approaches warrants further exploration. Based on the post-RCT data, the SMS intervention preserved the clinical benefits within the trial period, but it failed to transform from treatment efficacy to long-term effectiveness beyond 2 years after the intervention and was not associated with a significant reduction in diabetes incidence over 5 years. Possible reasons for the non-significant effectiveness of SMS compared to regular care in this study include the sample size, the representativeness of the sample, and the durability of the SMS effect. Hence, further research on whether increasing the sample size or messaging frequencies, or extending the duration of SMS for pre-diabetes, could achieve a long-term clinical benefit is needed. Limitations Although this analysis was conducted using randomized controlled trial data, there were several limitations requiring cautious interpretation of the study findings. First, no routine yearly assessment was undertaken in the post-trial period (from year 3 to 5), which may result in a lagged onset date of DM recorded from electronic medical records.
For those diagnosed with DM, annual assessments would keep track of the DM outcome on a regular basis and thus preclude potential overestimation of the follow-up duration and the number of patients at risk of DM. Second, the missing data were handled by carrying the last observed value forward. This method may not be appropriate, as it can induce errors and thus affect the accuracy of the long-term effectiveness estimates. Third, this paper did not report hypoglycaemic events, HbA1c, or fasting plasma glucose values at baseline and follow-ups and thus failed to compare the effectiveness of the two interventions on changes in glycaemia-related biometrics [22,23]. In addition, the sample size of this study was relatively small and almost all of the participants were men, so this study may not be able to give good and comprehensive estimates for the effectiveness analysis.
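For readers who wish to reproduce the type of survival analysis described in the Statistical analysis section (Kaplan-Meier estimates and a Cox proportional-hazards model), a minimal sketch is given below. The column names and the toy data are purely hypothetical, and the original analysis was performed in STATA 13.0; the sketch only illustrates the workflow.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical toy data: follow-up time (months), DM event indicator, group (1 = SMS, 0 = control).
df = pd.DataFrame({
    "time_months": [12, 24, 36, 48, 60, 18, 30, 42, 54, 60],
    "dm_event":    [1, 0, 1, 0, 0, 1, 0, 1, 0, 0],
    "sms_group":   [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
})

# Kaplan-Meier estimate of DM-free survival in the SMS group.
kmf = KaplanMeierFitter()
sms = df[df["sms_group"] == 1]
kmf.fit(durations=sms["time_months"], event_observed=sms["dm_event"], label="SMS")

# Cox regression: hazard ratio of the SMS intervention for progression to DM.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="dm_event")
print(cph.hazard_ratios_)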
2018-10-18T06:43:11.630Z
2018-10-10T00:00:00.000
{ "year": 2018, "sha1": "9051866148a0f07d1ef9df2bbcb251a5de8c5ca7", "oa_license": "CCBY", "oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-018-3810-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d4de87514a1110503cd489b91aec2372f9b44edf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
55365810
pes2o/s2orc
v3-fos-license
Entropy in Multiple Equilibria, Systems with Two Different Sites † The influence of entropy in multiple chemical equilibria is investigated for systems with two different types of sites under Langmuir's condition, which means that the binding enthalpy of the species is the same for each type of site and independent of those that are already bonded, and that this holds for both types of sites independently. The analysis makes use of the particle distribution theory, which holds for each type of site separately. We provide physical insight by discussing an Xm{AB}Xn system with m = 0, 1, ..., M and n = 0, 1, ..., N in detail. The procedure and results are exemplified for an Xm{AB}Xn system with M = 3 and N = 2. A satisfactory consequence of the results is that the eleven equilibrium constants needed to describe such a system can be expressed as a function of two constants only. This is generally valid for any Xm{AB}Xn system, where the [(M + 1)(N + 1) − 1] equilibrium constants can be expressed as a function of 2 constants only. This also has implications for quantum-theoretical studies in the sense that it is sufficient to model only two reactions instead of many in order to describe the system. We have observed that it is sufficient to have two different sites in a multiple equilibrium in order to observe a characteristic of isotherms that cannot be described by Langmuir's equation. This is a result that may be useful for explaining experimental data which otherwise have not been explained satisfactorily so far. Instead of inventing adsorption models, it might often make sense to describe the system in terms of multiple equilibria. Introduction We explained the influence of entropy in multiple chemical equilibria by studying the particle distribution for the conditions that the binding enthalpy of the species is the same for all sites and that it is independent of those that are already bonded [1]. Consequences were discussed for the insertion of guests into the one-dimensional channels of a host, for dicarboxylic acids, and for cation exchange of zeolites. The validity of the results is independent of the nature and the strength of the binding. The quantitative link between the description of multiple equilibria and Langmuir's isotherm [2][3][4] was found to provide new insight. Multiple equilibria of objects with several equivalent binding, docking, coupling, or adsorption sites for neutral or charged species play an important role in all fields of chemistry. We now investigate systems with two different types of sites, which we name Xm{AB}Xn, for the condition that the binding enthalpy of the species is the same for each type of site and independent of those that are already bonded, and that this holds for both types of sites independently. The analysis makes use of the particle distribution theory as described in ref. [1], which holds for each type of site separately. The condition that the binding enthalpy of the species is the same for all sites and that it is independent of those that are already bonded is equivalent to the condition I. Langmuir used one hundred years ago to derive the Langmuir isotherm [2,3]. We therefore name it Langmuir's condition.
Results and Discussion The number of distinguishable chemical objects of an Xm{AB}Xn (m = 0, 1, …, M and n = 0, 1, …, N) system is equal to (M + 1)(N + 1). From this it follows that the number of equilibria with X is [(M + 1)(N + 1) − 1], which is also the number of equilibrium constants. We show that Langmuir's condition, in connection with the particle distribution function, makes it possible to express the (M + 1)(N + 1) − 1 equilibrium constants as a function of two different constants only. This is a simplification which allows systems to be studied quantitatively by experimental and theoretical means which otherwise might be difficult to handle. A numerical analysis of experimental data for a system with 5 different types of sites has been carried out based on this reasoning and has allowed earlier reports on the reaction entropy of silver zeolite A to be corrected [16]. We improve the physical insight by discussing a simple Xm{AB}Xn system in detail. The notation Xm{AB}Xn represents individual particles, a grid consisting of many sites, microporous objects, or other chemical systems. The procedure and results are exemplified for m = 0, 1, 2, 3 and n = 0, 1, 2. The 11 equilibria and the corresponding equilibrium constants Ki are collected in Table 1. We apply the stoichiometry-matrix expression for evaluating these equilibria [9,10]. Details of this procedure are reported in the appendix. The result is given in Table 2. It is convenient to use the following notations to write the concentrations of the individual objects, namely Ci and also [Xm{AB}Xn], but only [X] for the concentration of X. We have 11 equations available for expressing the 13 concentrations: Ci, i = 1, 2, …, 12 and [X]. An additional equation is available from the fact that in a closed system the total concentration of the Xm{AB}Xn species, which we name A0, is constant, as expressed in Equation (1). The concentration C12 = [{ }] can, hence, be determined using Equation (1). The concentration [X] of the ligand X that can bind to the { } is the free variable. We need to know 11 equilibrium constants in order to describe the evolution of the concentrations Ci of the twelve species as a function of the variable [X]. This is a difficult situation and may in many cases have as a consequence that a system cannot be handled in a satisfactory way. A very important simplification arises if Langmuir's condition applies. This may often be the case to a good approximation. Langmuir's condition implies in our example that K1, K4, and K7 are equal. The same holds for K2, K5, and K8 and also for K3, K6, and K9. From this follows a further simplification from the application of the particle distribution function f(n,r) [1,16,30], where n is the total number of equilibria of a set and r counts the individual equilibria in a set; r = 0, 1, …, n − 1: The particle distribution function describes the entropy decrease in the corresponding reaction sets, as we have discussed in detail [1]. Applying Langmuir's condition and the particle distribution function we find the results reported in Table 3.
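The explicit form of f(n,r) is given in ref. [1] and is not repeated here. For orientation, the well-known statistical relation between successive stepwise constants for n identical, independent sites, which plays exactly this role and is consistent with the intrinsic constants K1/3 and K10/2 used below for the comparison with Langmuir's isotherm, reads:

\[ K^{(r+1)} = k\,\frac{n-r}{r+1}, \qquad r = 0, 1, \dots, n-1, \]

so that, for example, the set of three equivalent sites gives K1 = 3k, i.e. k = K1/3, and the set of two equivalent sites gives K10 = 2k', i.e. k' = K10/2. The specific numerical relations of Table 3 are not reproduced here.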
The very satisfactory consequence of the result shown in Table 3 is that the eleven equilibrium constants can be expressed as a function of two constants only, namely K1 and K10. Inserting this into the equations shown in Table 2, we find Equations (3) and (4), where C12(X) is the concentration of { } expressed as a function of the concentration of X. This is a nice and very useful result. It allows us to study the concentrations of the twelve species Ci, i = 1, 2, …, 12 as a function of the concentration of X by considering only 2 parameters, namely K1 and K10, instead of eleven. This also has implications for quantum-theoretical studies in the sense that it is sufficient to model only two reactions instead of eleven in order to describe the system. Table 3. Relation between the equilibrium constants defined in Table 1 as a consequence of Langmuir's condition and the particle distribution function. We see, e.g., in Figure 1A that the X{AB}Xn appear only at the beginning, for small values of [X], and, even more, that only X{AB}X temporarily shows a value larger than 0.05, while X{AB}X2 always stays very small. We note that {AB} vanishes soon and that the X3{AB}Xn become dominant. [{AB}X2] and [{AB}X] always remain small. The situation changes very much in Figure 1B. The symmetry of the plot of the concentrations Ci versus the total concentration [X]tot that we observed in Figure 4B of ref. [1] has completely disappeared, however, in both cases, as seen in Figures 1A' and 1B'. We also observe that out of the 12 species Xm{AB}Xn only a few manage to evolve significant concentrations. An example with different values of K1 and K10 is reported in the appendix. The fractional coverage expressed as a function of the concentration [X] is of special interest, also because it can often be determined experimentally relatively easily. We show this in Figure 2 ((A',B'): [X] between 0 and 20; solid lines: amount of the objects Xm{AB}Xn; red: m = 1,2,3, all n, divided by 3; blue: {AB}Xn, n = 1,2, divided by 2; the rectangles and the circles correspond to Langmuir's Eqs. (29) and (29A) of ref. [1] with KL = K1/3 and KL = K10/2, respectively; green: all Xm{AB}Xn except {AB}, divided by 5; black, dashed: isotherm calculated using Langmuir's Eqs. (29) and (29A) of ref. [1] with optimized values for KL; orange, dash-dot: sum of the red and the blue curves weighted by an optimized factor). It is interesting but not surprising that the amount of the objects Xm{AB}Xn (m = 1,2,3, all n; divided by 3) can be perfectly described by Langmuir's isotherm equation. We observe the same for the concentration of {AB}Xn (n = 1,2; divided by 2). The sum of all objects Xm{AB}Xn (m = 0,1,2,3, all n), however, cannot be described by the Langmuir isotherm equation. This behaviour seems to be of general validity, as I have numerically tested for a number of representative examples. It should be possible to prove this analytically but such a proof is not yet known. If the numerical values of K1 and K10 are equal, the system simplifies to the situation we have discussed in ref. [1]. In the other extreme, when K1 and K10 differ by orders of magnitude, the system decomposes into separate parts.
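To make the two-parameter description concrete, a small numerical sketch is given below. It evaluates the species concentrations directly from the independent-sites (Langmuir) condition, using the intrinsic constants k = K1/3 and k' = K10/2 that appear in the Langmuir comparison above; the chosen values of K1 and K10 are illustrative and are not taken from the figures of this work.

from math import comb

def concentrations(x, K1, K10, A0=1.0, M=3, N=2):
    # Concentrations of the X_m{AB}X_n species under Langmuir's condition
    # (independent, equivalent sites within each set); k and kp are the intrinsic constants.
    k, kp = K1 / M, K10 / N
    c_empty = A0 / ((1 + k * x) ** M * (1 + kp * x) ** N)
    return {(m, n): c_empty * comb(M, m) * (k * x) ** m * comb(N, n) * (kp * x) ** n
            for m in range(M + 1) for n in range(N + 1)}

K1, K10 = 3.0, 0.4   # illustrative values only
for x in (0.1, 1.0, 10.0):
    C = concentrations(x, K1, K10)
    theta_A = sum(m * c for (m, n), c in C.items()) / 3.0   # coverage of the three equivalent sites
    theta_B = sum(n * c for (m, n), c in C.items()) / 2.0   # coverage of the two equivalent sites
    print(f"[X] = {x:5.1f}   theta_A = {theta_A:.3f}   theta_B = {theta_B:.3f}")

As expected from the discussion above, theta_A and theta_B each follow a Langmuir isotherm with KL = K1/3 and KL = K10/2, respectively, while their weighted sum in general does not.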
Different types of explanations for isotherms that deviate from Langmuir isotherms have been developed. They are in many cases satisfactory because they have been linked to a microscopic phenomenon, but they seem to be arbitrary in other situations [6,11,18,24]. We find that it is sufficient to have two different sites in a multiple equilibrium in order to observe a characteristic that differs from Langmuir's equation, despite the fact that the latter applies to the individual parts. Writing multiple chemical equilibria could therefore be useful for explaining experimental data and for making predictions. Instead of inventing adsorption models, it might make sense to describe a system in such terms. The system may consist of one set of equivalent sites [1], of two sets, as reported here, or even of several sets of equivalent sites [16]. Table 2. Concentrations Ci, calculated based on the equilibria in Table 1 and Equation (1); see appendix.
2018-12-11T13:29:21.751Z
2017-11-20T00:00:00.000
{ "year": 2017, "sha1": "8d63cc31371d84195cf227a075f365930dc16951", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2504-3900/2/4/168/pdf?version=1524471763", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "8d63cc31371d84195cf227a075f365930dc16951", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
244909862
pes2o/s2orc
v3-fos-license
Hybrid Machine Learning Techniques and Computational Mechanics: Estimating the Dynamic Behavior of Oxide Precipitation Hardened Steel A new generation of Oxide Dispersion Strengthened (ODS) alloys, called Oxide Precipitation Hardened (OPH) alloys, has recently been developed by the authors. Their excellent mechanical properties can be improved by optimizing the chemical composition in combination with heat treatment. However, the behavior of such materials requires the consideration of a large number of variables, nonlinearities, and uncertainties in the analyses, and the modeling of such alloys by analytical methods is not accurate enough. Therefore, artificial intelligence (AI) methods, such as machine learning (ML), can be beneficial to alleviate the problems associated with the complexity of these alloys. In this work, three different hybrid ML techniques have been employed to estimate the ultimate tensile strength (UTS) and elongation in these special alloys. The proposed methods include a feedforward artificial neural network (FF-ANN) trained using particle swarm optimization (PSO) and two adaptive neuro-fuzzy inference system (ANFIS) methods trained using fuzzy C-means (FCM) clustering and subtractive clustering (SC), respectively. Since OPH alloys are mainly produced via mechanical alloying (MA) of a mixture of powder components followed by consolidation and hot rolling, a series of standard tensile tests were performed on the different variants of the OPH alloy. In this way, some critical parameters such as UTS and elongation could be extracted from the experimental results. The main contribution of the present study is to estimate these important parameters based on the material composition, including the aluminum (Al), molybdenum (Mo), iron (Fe), chromium (Cr), tantalum (Ta), yttrium (Y) and oxygen (O) contents, as well as the MA and heat treatment conditions. The results show that the proposed strategies are not only able to accurately determine the complex behavior of the OPH alloy with an accuracy of about 95%, but can also help the designer to benefit from these powerful tools to design new versions of such materials without analytical calculations. ODS steels are known for their excellent high-temperature strength, corrosion resistance, and toughness [2]-[5]. Based on the importance of the oxide nanoparticles, they have been widely studied concerning their morphology, composition, crystallographic structure, and interface relationships with the matrix [6]-[8]. However, further improvement of ODS steels' mechanical properties needs appropriate composition designs, which have become a hot topic for researchers. Y2O3 is one of the typical oxides usually used to develop ODS as well as OPH steels. However, its strengthening effect is not ideal due to its growth at high temperatures [9]-[12]. Reactive elements, such as Cr, Ti, and Zr, could be added to the Al-free ODS steels to reduce the size of oxide dispersoids and produce stable oxide dispersoids [13]-[15]. To extend the maximum temperature capability of superalloys, such as chromium- or iron-aluminum-based OPH alloys, the mechanical alloying (MA) of powder feedstock, followed by Hot Rolling (HR) and Heat Treatment (HT), was studied [14], [16]. This new design expresses the leading idea in OPH steel processing: dissolve a required amount of O in the matrix during mechanical alloying and let a fine dispersion of oxides precipitate during hot consolidation.
Such a microstructure evolution depends on the initial chemical components and the entire thermomechanical processing history through all processing operations, which still needs optimization [13], [17]. Nowadays, the complexity of engineering problems has increased, and modeling and simulation methods have emerged as essential computational tools that can explore and reveal insights into the investigated processes [18]-[22]. Many theoretical methodologies have been applied to survey the physical parameters of materials [23]-[28]. In this regard, the use of Machine Learning (ML) techniques is considered a powerful way of overcoming the problems associated with conventional methods [29]-[31]. The literature reveals that ML algorithms like the artificial neural network (ANN) and the Adaptive Neuro-Fuzzy Inference System (ANFIS) display superior performance in terms of high accuracy and low error compared to conventional statistical methods [32]. ANFIS can combine the least-squares and the back-propagation gradient descent methods to identify the effective parameters of Sugeno-type fuzzy inference systems. It has the benefits of both neural networks and fuzzy logic principles [30]. Fuzzy C-means clustering (FCM) is a data clustering technique in which each data point belongs to two or more clusters. FCM has a smaller number of rules, higher speed, and better results [33]. These properties make FCM-ANFIS more efficient for data simulation [34]. Moreover, the ANFIS-Subtractive Clustering (SC) method is another helpful technique reported by several authors [35]. Researchers have been attracted to AI and ML, which originated from artificial neural networks (ANNs), to find the potential relationships between inputs and outputs in complex functions and systems. This theory imitates human neural network structures, including data processing, decision-making, and learning [36]. However, this methodology has essential weaknesses, such as getting trapped in local minima when many parameters are used, and a slow convergence rate. Therefore, many attempts have been made to solve these problems. One effective way of dealing with these issues is the hybridization of ANNs with intelligent algorithms, i.e., training the neural network using powerful optimization techniques [23]-[28]. Particle Swarm Optimization (PSO) is the most widely used optimization model for robust ANNs and is defined as a population-based algorithm [37]. The PSO algorithm is beneficial due to its easy implementation, few parameters, and high convergence speed. The role of PSO is to optimize the weights and biases of the ANN to get the highest performance capacity of the hybrid intelligent systems [38]. In recent years, researchers have shown great interest in modeling various alloys' mechanical properties using ML techniques [26], [27]. Table 1 gives useful information about some very recent publications with a focus on applications of AI-based methods for studying and analyzing materials. For instance, Stanev et al. used AI for the search and discovery of quantum materials [39]. In that field of materials, the rise of new experimental and computational techniques has increased the volume and the speed with which data are collected, and AI is used to impact the exploration of new materials such as superconductors, spin liquids, and topological insulators [39].
They outlined how the use of data-driven approaches is changing the landscape of quantum materials research, with the result that artificial intelligence is already well on its way to becoming the lynchpin in the search and discovery of quantum materials [39]. Wang et al. developed a surrogate model via an artificial intelligence method for accelerating materials screening and performance prediction [40]. They used deep learning models, which have been verified as an effective and efficient method for handling computer vision and natural language problems [40]. Using a deep learning surrogate model (DLS) for predicting the maximum stress value under complex working conditions reproduced the finite element analysis results with 98.79% accuracy [40]. They outlined that deep learning has great potential as a new approach for material screening in practical engineering [40]. Guo et al. investigated artificial intelligence and machine learning in the design of mechanical materials [41]. They show that the performance of an ML-based materials design approach relies on the collection or generation of a large dataset that is properly preprocessed using the domain knowledge of materials science underlying chemical and physical concepts, and on a suitable selection of the applied ML model [41]. Recent breakthroughs in ML techniques have created vast opportunities not only for overcoming long-standing mechanics problems but also for developing unprecedented materials design strategies [41]. Eser et al. used artificial-intelligence-based surface roughness estimation modelling for the milling of AA6061 alloy [42]. The cutting speed, depth of cut, and feed rate were evaluated as input parameters for their experimental design [42]. The results show that the depth of cut is the most effective parameter for surface roughness [42]. Prediction models developed using ANN and RSM were compared in terms of prediction accuracy R2, MEP, and RMSE [42]. The data estimated from ANN and RSM were found to be very close to the data acquired from experimental studies [42]. The R2 value of the RSM model was higher than that of the ANN model, which demonstrated the stability and sturdiness of the RSM method [42]. Kabaldin et al. evaluated the mechanism of the destruction of metals based on approaches of artificial intelligence and fractal analysis [43]. They showed that a relationship exists between the fractal dimension of the fracture surfaces of impact-tested specimens and the impact strength KCV [43]. With an increase in toughness, a decrease in the fractal dimension of the sample fracture is observed [43]. Also, it has been shown that when recognizing a viscous component in fractures of steel 45 using an INS, the recognition error does not exceed 8% [43]. The appropriate features of OPH alloys make them attractive candidates for different applications. Since modeling the behavior of these alloys always involves a number of uncertainties and nonlinearities, the application of an efficient model is essential for developing and studying such alloys. Accordingly, in this research, some hybrid ML-empowered methods are employed to address the complex behavior of these materials. The hybrid ML methods include two neuro-fuzzy methods based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) [44] and an FF-ANN to estimate the UTS and strain in OPH steels. These parameters play a crucial role in characterizing the properties of OPH alloys.
Therefore, understanding the relationship between these critical parameters and other structural parameters can lead to great improvements in the accuracy and speed of designing OPH alloys. Moreover, since experimental data are used to determine the behavior of the above parameters, the resulting models are more reliable and accurate than mathematical models, which do not cover many nonlinearities or complexities. The ANFIS techniques include SC and FCM to generate a fuzzy inference system. Also, a hybrid ANN-PSO optimization method is used to model the mechanical properties. The rest of this paper includes four sections. Section II describes the experimental procedure and the special properties of the new OPH alloy based on metal powders. Subsequently, the proposed adaptive neuro-fuzzy inference system (ANFIS) trained by FCM clustering and SC, in addition to an ANN method trained by PSO, is deployed for estimating the mechanical characteristics in Section III. Following that, the results of estimating the UTS and elongation in the OPH steel are discussed in Section IV. Finally, Section V summarizes the conclusions. II. EXPERIMENTAL PROCEDURE The new OPH alloy is produced from metal powders using powder metallurgy [45]. The main powders (Fe and Al) and other components (Table 2) are mechanically alloyed in a vacuum low-energy ball mill developed by the authors (Fig. 1). When the MA is completed, the mixture of powders is transferred to a low-alloy steel rolling container with no contact with air, evacuated, and sealed by welding. Afterward, it is rolled in three steps (Fig. 2) below 900 °C to a final thickness of 3.2 mm. An approximately 2.5 mm thick OPH sheet covered on both sides by a 0.3 mm thick scale from the rolling container is produced in this way. The samples are then cut using a waterjet parallel to the rolling direction (Fig. 3), followed by grinding to obtain a final thickness of 2 mm. Using a servo-hydraulic MTS machine (Fig. 4), all tensile tests were carried out with a strain rate of 1 × 10^-3 s^-1. Standard-size specimens with a thickness of 2 mm and a geometry of 53 mm height and 13 mm width, with an active part length of 25 mm, were tested. A central data logger recorded all the measurements while the elongation was measured using a video camera extensometer (Fig. 5). Three samples were tested for each state and the average values of UTS and elongation to failure (A) were statistically calculated. As shown in Fig. 5, the DIC technique was used to measure the elongation of the samples. Speckle patterns were sprayed on two opposite surfaces of the specimen using an airbrush to achieve an optimal speckle size of 3-5 pixels. A professional operator created all the patterns, trying to obtain a coverage factor falling within the range of 42% to 50%, which minimizes the noise. The average speckle size and coverage factor were 4.3 pixels (mean value range: 4.1 to 4.5 pixels) and 49% (range 47% to 50%), respectively. Images were acquired under the best achievable experimental conditions by using the maximum exposure time (56 ms, due to the frame rate set to 15 Hz). The strain was later compared with the readings of the internal measurement system of the hydraulic machine to verify the measurements. III. THE PROCEDURE FOR COMPUTATIONAL ANALYSIS The proposed adaptive neuro-fuzzy inference system (ANFIS) method has been trained by FCM clustering and SC, which increases the capability of the ANFIS algorithm to analyze and model complex functions.
In addition, an ANN method trained by PSO has been deployed for identifying the parameters mentioned earlier. It should be noted that these methods can predict the behavior of dynamic parameters in different materials. Therefore, using such techniques can substantially decrease the costs and time required for designing and producing new alloys. Fig. 6 demonstrates the framework in which each of the parameters was modeled. As observed, in the first stage, the datasets are measured and collected for modeling purposes. Next, the aggregated data need to be prepared. After that, the prepared data are randomized and divided into two sets for training and testing. In the next stage, three different scenarios are considered to complete the process of estimation. As mentioned above, there are three ML-based approaches that have been used to identify the desired characteristics of the studied alloy. A. ANFIS-SUB Subtractive clustering is useful especially when there is no prior indication of how to distribute the data among the centers or of the number of clusters [35]. The algorithm is typically summarized as follows: 1. A set of data points placed in an n-dimensional space is considered. The data point with the highest potential to be the center of the first cluster needs to be chosen. 2. The density index D_i corresponding to the data point x_i is then calculated as in (1): D_i = Σ_j exp(−||x_i − x_j||^2 / (r_a/2)^2), where r_a is defined as the radius within which all points are counted as neighbors. Accordingly, the data point with the highest density measure is selected as the first cluster center, indicated by x_c1 with density D_c1. 3. D, the density measure, is then recomputed for each data point x_i with the use of equation (2): D_i ← D_i − D_c1 exp(−||x_i − x_c1||^2 / (r_b/2)^2), where r_b (> r_a) is the radius over which the density around the new center is reduced. 4. D_i, D and the other parameters are recalculated and the procedure is repeated until adequate cluster centers are produced. B. ANFIS-FCM The second approach is ANFIS-FCM. In this approach, firstly, the number of clusters is chosen based on the system's dynamics. Coefficients for each data point are then determined randomly and the points are placed into the clusters built in the previous stage [46]. In the following step, the algorithm needs to be repeated until the best results are reached. In other words, the center of each cluster (the centroid) is calculated. In addition, the coefficients, which are used for placing the data points in the clusters, are computed again. Generally, the FCM algorithm can be described by the objective function in equation (3): J_m = Σ_i Σ_j u_ij^m ||x_i − c_j||^2, where c_j is a cluster center, u_ij is the degree of membership of x_i in cluster j, m is the fuzziness hyper-parameter, and x_i denotes a data point. C. PSO-ANN In optimization applications, PSO is defined as a computing technique utilized to optimize a complex problem by an iterative method. In this algorithm, considering the required quality, the algorithm can calculate the best possible value for a candidate solution that has the potential to be the best solution [47]. The population of particles, called the swarm, plays a crucial role in this method. In other words, this algorithm tries to move these particles around a certain search-space area using simple formulas for the velocity and position of the particles. In this part, we use this algorithm to train an ANN to estimate the above-mentioned parameters of the alloy. Fig. 7 illustrates a combination of the PSO algorithm and ANN as a hybrid methodology. As observed in this figure, firstly, the size of the population is selected depending on the required accuracy.
Next, the population of particles is generated with the problem variables. Now, the first generation is available to complete the initialization phase. In the proposed technique, firstly, a cost function is considered to be minimized or maximized depending on the optimization problem. After that, a number of particles is provided and distributed throughout the D-dimensional space of the problem. In fact, each particle contains the variables of the problem. Therefore, the fitness function (cost) can be computed for each particle. Consequently, the position and velocity of these elements should be updated based on equations (4) and (5) as follows: v_i^(k+1) = w v_i^k + c_1 r_1 (P_best,i^k − ρ_i^k) + c_2 r_2 (G_best^k − ρ_i^k) (4) and ρ_i^(k+1) = ρ_i^k + v_i^(k+1) Δt (5), where i and k are, respectively, the particle number and the iteration number. Also, v_i = {v_i1, ..., v_ij, ..., v_iD} and ρ_i = {ρ_i1, ..., ρ_ij, ..., ρ_iD} are, respectively, the velocity and position vectors. Moreover, P_best,i^k = {p_i1, p_i2, ..., p_ij, ..., p_iD} and G_best^k = {g_1, g_2, ..., g_D}. Furthermore, c_1 is a cognitive parameter and c_2 is a social parameter. In this regard, w is the inertia weight used to preserve the previous velocity while the optimization process is performed, whilst r_1 and r_2 are two random numbers uniformly distributed between 0 and 1. Δt is the time interval for updating velocity and position and is typically equal to 1. In fact, the process of training an ANN amounts to a minimization problem, which can be solved through metaheuristic or mathematical algorithms [48]. Fig. 8 demonstrates the structure of a conventional multilayer perceptron feed-forward ANN (MLP-FF-ANN). As shown in this figure, there are three important layers, the input layer, the hidden layer and the output layer, which can be described via equation (6): y_j = f(Σ_i w_ij x_i + b_j), where x_i and y_j are the values in the previous and current layers, respectively. Here, b_j and w_ij are the biases and weights of the ANN. In addition, f is an activation function used for computing the value of the ANN. Training is a process in which the biases and weights of the ANN are calculated in order to minimize the error between the outputs of the network and the real values (targets). That is why we face a minimization problem when it comes to training an ANN. In the proposed method, PSO, acting as the training algorithm, helps the ANN to reduce the errors by calculating optimized values of structural parameters such as biases and weights. Hence, the variables of PSO are the weights and biases of the network, and the search space of the problem is defined by their admissible intervals. The fitness function (cost) of the particles is calculated via equation (7): E = (1/(N O)) Σ_k Σ_l (T_kl − P_kl)^2, where P_kl is the predicted output, T_kl is the target output, N is the number of training samples, and O is the number of output neurons. In the proposed network, the parameters of the PSO algorithm are defined as Swarm Size = 200; Max Iteration = 35; C1 = 2; C2 = 4 − C1. This procedure is illustrated in Fig. 9. According to this figure and the above explanations, the following stages can be summarized to demonstrate the mechanism of this method: 1. After determining the number of neurons of the ANN in its hidden layer, a network with initial biases and weights is built. 2. Since D is defined as the total number of weights and biases of the problem, each candidate set of weights and biases is considered as a particle at a specific location in the D-dimensional space of the problem. 3.
Then the output values of the particles in each iteration can be estimated, leading to the computation of the fitness function presented in Equation (7). 4. Finally, the locations of the weights and biases, which are defined as particles, are updated via the PSO algorithm for an indicated number of iterations and population size, until the target value is achieved. The proposed hybrid ML-based methods have been implemented in MATLAB R2020a on an Intel(R) Core(TM) i7-9700 CPU @ 3 GHz with 16 GB of RAM. A. MODELING RESULTS In this section, the results of the estimation of the UTS and elongation in the OPH steel are discussed. The most important advantage of the present method is that it estimates these complex parameters with three ML methods without using mathematical analysis. In this approach, the material is considered to be a black-box model. Fig. 10 depicts a comparison between the output of the ANFIS-SUB model and the actual measurements. This figure is extremely helpful as it shows the nonlinear and complex behavior of the UTS and elongation in the OPH steel. In fact, this nonlinear dynamic behavior is the reason that analytical methods cannot model or identify such materials for design purposes with the highest accuracy, nor provide a reliable estimator for the prediction of UTS and elongation. Moreover, even if mathematical models could consider all uncertainties or nonlinearities, we would face complex equations that cannot be solved. That is why the proposed method can compensate for the weaknesses of analytical strategies. However, the most challenging issue in applying such techniques is to have reliable measurements. In the present work, as mentioned in section 2, we use an appropriate dataset so that the accuracy of the proposed methods is not affected. According to Fig. 10, the ANFIS-SUB method can successfully estimate the studied parameters' values. In this regard, Figs. 10a and 10c depict the training data for UTS and elongation, respectively, whereas Figs. 10b and 10d represent the related test results. Since UTS and elongation can assist the design process, reaching this level of accuracy will help the designers to see the effects of changing any structural parameter on the properties of these alloys. Beyond the excellent numerical results, the figure highlights the capability of AI-based models to predict the complex behavior of different materials, which can be used as a holistic approach for other materials, particularly those that cannot be modeled by simple equations. In this section, two error criteria have been used, the mean squared deviation (MSD) and the root mean squared deviation (RMSD), which are defined later in this section; a short code sketch of such evaluation statistics is also given below. As with the ANFIS-SUB method, the results of the ANFIS-FCM model are presented to evaluate that method's performance. In this respect, Fig. 11 shows the output of the ANFIS-FCM model compared to the actual measurements. As can be seen, although the introduced hybrid method based on ANFIS-FCM is suitable for identifying the parameters, the accuracy of the ANFIS-SUB model seems better. Accordingly, Figs. 11a and 11c compare training data for UTS and elongation, respectively, whereas Figs. 11b and 11d demonstrate test data. Moreover, Figs. 12 and 13 demonstrate the error regression graph and the standard deviation (SD) for ANFIS-SUB and ANFIS-FCM, respectively. Based on these figures, the performance of the proposed hybrid methods based on the ANFIS model can be evaluated appropriately.
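As a hedged illustration of how such evaluation figures can be produced, the short Python sketch below computes the standard deviation of the estimation error, the MSE/RMSE criteria defined later in this section, and a simple regression (correlation) coefficient between estimated and measured values. The arrays are placeholders standing in for model outputs and measurements, not data from this work.

import numpy as np

def evaluation_summary(y_true, y_pred):
    """Error statistics used to judge an estimator against measurements."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    sd = np.std(err, ddof=1)                 # standard deviation of the error
    mse = np.mean(err ** 2)                  # mean squared deviation / error
    rmse = np.sqrt(mse)                      # root mean squared deviation / error
    r = np.corrcoef(y_true, y_pred)[0, 1]    # regression (correlation) coefficient
    return {"SD": sd, "MSE": mse, "RMSE": rmse, "R": r}

# placeholder data standing in for measured and estimated UTS values (MPa)
y_true = np.array([980.0, 1010.0, 1055.0, 1120.0, 1185.0])
y_pred = np.array([985.0, 1002.0, 1060.0, 1114.0, 1190.0])
print(evaluation_summary(y_true, y_pred))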
Accordingly, in Figs. 12a and 12c, the SD of the estimated parameters shows desirable values of about 0.55 and 0.0012 for UTS and elongation, respectively. Moreover, in Figs. 12b and 12d, representing error regression graphs, there are almost no dramatic differences between the estimated data points at a specific time and the actual data points, proving the appropriate performance of the ANFIS-SUB model. However, a different picture emerges for ANFIS-FCM. According to Fig. 13, the ANFIS-FCM method does not show an equally good performance in all the graphs of Fig. 13. After initializing the network, the time-domain simulation is started. The fitness function for each of the generated particles is then evaluated, once their corresponding positions have been determined in the first stage. Consequently, the fitness function should be assessed: if it corresponds to the best solution, the process ends; if not, the procedure is repeated. Fig. 14 compares the output of the ANN-PSO approach with the real data points. As can be seen, this method has an average performance compared to ANFIS. The first criterion is the mean squared deviation (MSD), also called the mean squared error (MSE), which measures the average of the squared errors. In this method, the average squared difference between the real output and the estimated output of the model is computed: MSE = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)^2, where N is defined as the number of data points, ŷ_i is the model's output, and y_i is the real value for data point i. The root mean square deviation (RMSD), or root mean square error (RMSE), is a statistical measure that is nearly the same as the standard deviation of the mean (SD), except that N data points are used instead of N − 1: RMSE = sqrt((1/N) Σ_{i=1}^{N} (y_i − ŷ_i)^2). Table 3 gives information about the actual values and estimated values of the studied parameters. As can be seen, the ANFIS method trained by the subtractive clustering method has the best fitness for identifying the parameters, while the ANFIS model optimized by the FCM algorithm shows the worst results. Moreover, the proposed ANN method trained by the PSO algorithm has an acceptable response for modeling the parameters. One of the most significant contributions of this work compared to our recent research [20] is the set of methodologies that have been used. In fact, in [20] a conventional ANN trained by the Levenberg–Marquardt backpropagation algorithm was used, while in the present work a hybrid ML method trained via particle swarm optimization (PSO) is used. In this research, as for any optimization problem, minimization of the network cost is taken into consideration. This has been achieved by minimizing some form of error function between the desired and the actual network outputs during the training phase. However, conventional algorithms like Levenberg–Marquardt backpropagation are sensitive to the choice of the initial weights and tend to get trapped in local minima. On the other hand, evolutionary algorithms like the proposed method have proved their usefulness in introducing randomness into the optimization procedure, since they follow a global search strategy and drive the network weights toward a globally optimal solution. In this regard, the utilization of an ANN trained by PSO for analyzing the behavior of the alloys leads to the local-best and global-best particle positions as possible solutions for setting the network weights. In summary, the application of PSO for training the ANN is one of the important contributions of the presented manuscript.
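To illustrate how equations (4)–(7) fit together when PSO is used to train a small feed-forward network, the following self-contained Python/NumPy sketch optimizes the weights and biases of a one-hidden-layer MLP on toy data. The network size, PSO coefficients, iteration budget and data below are illustrative assumptions, not the exact configuration used in this work.

import numpy as np

rng = np.random.default_rng(1)

# toy regression data standing in for (composition, processing) -> UTS samples
X = rng.uniform(-1, 1, (60, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))[:, None]

n_in, n_hid, n_out = 3, 6, 1
dim = n_in * n_hid + n_hid + n_hid * n_out + n_out   # D: total number of weights and biases

def unpack(p):
    i = 0
    W1 = p[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = p[i:i + n_hid]; i += n_hid
    W2 = p[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    b2 = p[i:]
    return W1, b1, W2, b2

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)          # equation (6): y_j = f(sum_i w_i x_i + b_j)
    return h @ W2 + b2

def cost(p):
    err = forward(p, X) - y
    return np.sum(err ** 2)           # squared-error fitness as in equation (7)

# PSO settings (illustrative; the text reports SwarmSize=200, MaxIter=35, c1=2, c2=4-c1)
n_particles, n_iter, w, c1, c2 = 40, 100, 0.7, 2.0, 2.0
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    # equations (4)-(5): velocity and position updates of the particles
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("final training cost:", pbest_cost.min())

Because the particles explore the weight space globally, this kind of training is less sensitive to the initial weights than gradient-based schemes such as Levenberg–Marquardt backpropagation, which is the point made above.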
Another important contribution of the present work concerns the target of the modeling. Although both papers used ANFIS for analyzing the alloys, the aims are completely different from each other. For example, in [20] the simulation of the hardness of OPH steels was studied, while in this work we use the ANFIS methods or the ANN-PSO method for analyzing the ultimate tensile strength (UTS) and elongation. UTS and elongation are two main parameters that characterize an alloy's strength and formability. A higher UTS value usually comes with a lower elongation, while a higher elongation increases the plasticity of the alloy. So, it is quite important to balance and optimize these two values to achieve the best performance of the final alloy. In fact, this work aims to estimate the nonlinear behavior of UTS and elongation. The reliability and accuracy of ML-based techniques for estimating the nonlinear behavior of OPH alloys indicate that, for future work, a multi-objective method based on deep learning or reinforcement learning can be considered in order to increase the level of precision. It is also worth mentioning that, regardless of the satisfactory and reliable results derived from all the applied techniques, all ML methods should be utilized with caution, as sometimes the appropriate data for a data-mining operation are not available, which affects the simulation and prediction. Table 4 compares the results of the proposed method with some similar papers, demonstrating the applicability and accuracy of the introduced method in the estimation of UTS. It should be noted that measurement accuracy and input signals can significantly affect the outputs of all data-driven models. V. CONCLUSION In this paper, some hybrid ML techniques have been employed and evaluated for the estimation of the dynamic behavior of OPH steels. OPH alloys are types of material that are typically prepared by mechanical alloying from a mixture of powder components, consolidated by hot rolling and followed by heat treatment. The proposed ML approaches were applied as estimators to estimate the ultimate tensile strength (UTS) and elongation based on actual measurements of the different chemical compositions of the studied alloy (such as Al, Mo, Fe, Cr, Ta, Y, and O), heat treatment conditions, and mechanical alloying conditions. The proposed methods consist of a feedforward artificial neural network (FF-ANN) trained by particle swarm optimization (PSO) and two adaptive neuro-fuzzy inference system (ANFIS) methods trained using both fuzzy C-means (FCM) clustering and subtractive clustering (SC). The results showed that the proposed strategies can model and identify the complex behavior of OPH steels with an approximate accuracy of 95% and can help the designer to characterize and predict these steels with all nonlinearities and uncertainties without using analytical calculations. In addition, the proposed methods provide designers with a tool for finding which chemical composition might have more impact on UTS or elongation in such steels. In the future, we will work on more accurate ML-based techniques to study different materials and adapt the proposed method accordingly. There are a number of parameters for each material that can significantly affect the accuracy and reliability of the estimate. Moreover, some new DL methods such as reinforcement learning and transfer learning can be considered to improve the applicability of the proposed method. Table 5. EHSAN SAEBNOORI received the M.Sc.
degree in materials engineering focusing on electrochemical corrosion from Tarbiat Modares University (TMU), Iran, and the Ph.D. degree in materials engineering focusing on electrochemical corrosion from TMU, in 2012. Since then, he has been working as an Assistant Professor with Islamic Azad University, Najafabad Branch (IAUN). His main research interests include electrochemical-based synthesis and surface modifications, and evaluation of the corrosion behavior of materials. BOHUSLAV MAŠEK received the Ph.D. degree in materials engineering focusing on forming technology from the University of West Bohemia (UWB), in 1993. He became a Professor in 2005. He is the author or coauthor of more than 360 articles, many improvement proposals, 25 Czech Republic patent applications, of which 19 patents have been granted and 21 applications published, three patent licenses sold, 16 patent applications in USA, of which eight patents have been granted and
2021-11-20T16:27:59.788Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "e101bceeed1adec58dd759ceedb0987e69252f59", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1109/access.2021.3129454", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "b4c46e86ea93d90f3cfc3c132822259a90036953", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
222229742
pes2o/s2orc
v3-fos-license
Respiratory dysfunction in Parkinson's disease: a narrative review The presence of respiratory symptoms in Parkinson's disease (PD) has been known since the first description of the disease, even though the prevalence and incidence of these disturbances are not well defined. Several causes have been reported, comprising obstructive and restrictive pulmonary disease and changes in the central ventilatory control, and different pathogenetic mechanisms have been postulated accordingly. In our review, we encompass the current knowledge about respiratory abnormalities in PD, as well as the impact of anti-Parkinsonian drugs as either risk or protective factors. A description of putative pathogenetic mechanisms is also provided, and possible treatments are discussed, focusing on the importance of recognising and treating respiratory symptoms as a key manifestation of the disease itself. A brief description of respiratory dysfunctions in atypical Parkinsonism, especially α-synucleinopathies, is also provided. Introduction Parkinson's disease (PD) is a neurodegenerative disorder due to a progressive loss of striatal dopamine, thus leading to tremor, bradykinesia, rigidity and postural instability. The presence of respiratory abnormalities in PD has been well known for many years, but its prevalence is probably underestimated. In his first description, PARKINSON [1] noted this association describing a man who "fetched his breath rather hard". Since the half of the last century, many studies have discussed respiratory impairment in PD, even in early stages and in asymptomatic patients [2]. Respiratory dysfunctions may be responsible for the mortality and morbidity associated with PD [3]. Although respiratory changes are usually correlated with peripheral motor impairment [4][5][6], several causes have been reported, including obstructive and restrictive patterns, as well as changes in the central ventilatory control. Overall, respiratory dysfunctions in PD seem to correlate with motor scores, but the relationship with pharmacological therapies, disease phenotypes and nonmotor symptoms is not completely understood. In this review, we encompass the current knowledge about respiratory changes in PD, focusing on obstructive and restrictive patterns, as well as on the role of the central respiratory control, highlighting the underlying putative pathogenetic mechanisms. A brief discussion about pneumonia in PD it is also provided; an overview of the impact of anti-Parkinsonian drugs and deep brain stimulation (DBS) is also described. Finally, we briefly discuss the presence of respiratory abnormalities in atypical Parkinsonisms, especially multiple system atrophy (MSA), dementia with Lewy bodies (DLB) and supranuclear palsy (PSP). Materials and methods A literature search was updated from March 2019 to June 2020 and referred to PubMed and Google Scholar, using the terms "Parkinson's disease", "Parkinson", "Parkinson disease" combined with "respiratory failure", "drugs respiratory failure", "pulmonary dysfunction", "respiratory dysfunction" and "ventilatory dysfunction". Another search combined the terms "Parkinson's disease", "Parkinson", "Parkinson disease" with terms "sleep" and "sleep apnea". We included articles in English only. Exclusion criteria included animal studies and other neurological disorders different from PD or Parkinsonisms. Obstructive respiratory dysfunction Several studies have shown obstructive respiratory dysfunction in PD (table 1). 
Many authors have described upper airway obstruction (UAO), with a highly variable prevalence ranging from 6.7% to 67% [8,9]. Dyspnoea could be a manifestation of UAO, even if other common indicators may include hypophonia, shaky voice, stridor or wheeze [17]. Two types of UAO have been described by spirometry and fibre optic endoscopy for the first time by VINCKEN and co-workers [7], and was further confirmed by subsequent studies [8,9,11]. The first type ("respiratory flutter") is characterised by regular consecutive flow decelerations and accelerations superimposed on the general flow-volume loop, with a frequency similar to the hands' tremor (5)(6)(7)(8). In the second type, abrupt and irregular changes in flow (often dropping to zero) are seen on an abnormal flow-volume loop due to irregular and jerky movements of the glottic and supraglottic structures, thus leading to intermittent airways closure. Although the pathophysiology is still debated, both patterns probably reflect dysfunctions in the basal ganglia (figure 1). In PD, α-synuclein deposition starts in the caudal portion of the brainstem, involving the dopaminergic neurons in the substantia nigra (SN) [18]. Loss of dopaminergic stimulation is known to lead to motor disturbances such as tremor, bradykinesia and rigidity and movement abnormalities in the phonatory structures may ultimately resemble those observed in peripheral muscles. These data are consistent with electromyographic abnormalities of laryngeal, rib cage and neck muscles [19,20], even if these changes are not specific and may be present also in other movement disorders [7]. A possible correlation between UAO and tremor has been reported by some authors [21], even if it should be stated that other types of Parkinsonisms were included in the study. SABATÈ et al. [9] have, nevertheless, reported the association of UAO with bradykinesia more than rigidity and tremor. Moreover, an association of UAO with dystonia has been described by JANKOVICH et al. [22]. All these data seem to further suggest a correlation between UAO and peripheral motor disorders, due to basal ganglia dysfunctions. UAO has been correlated also with dorsal column arthrosis, which may be explained by chronic anomalous postures in advanced stages [9]. Lower airway obstruction has been reported and differentiated by UAO in the work of SABATÈ et al. [9], correlating with rigidity, resistance to passive mobilisation of the cervical column and cervical spine arthrosis. Obstructive abnormalities have been described in other papers, although differences between UAO and lower airway obstruction have not been systematically assessed [14][15][16]. Moreover, a small number of obstructive pattern patients in PD was found in these studies, and no further characterisation has been provided to date [14][15][16]. Differences among studies may be related to the characteristic of the cohorts examined such as number of the patients enrolled, disease duration and the timing of pharmacological washout before the spirometric evaluation (table 1). Restrictive respiratory dysfunction Papers describing a restrictive respiratory pattern are summarised in table 1. Restrictive respiratory dysfunction has been described both in symptomatic and asymptomatic patients, with variable prevalence ranging from 28% to 94% [8,9,12,14]. 
Even for restrictive dysfunction, the pathogenesis is controversial; several hypotheses have been postulated, including dysautonomia related to PD and adverse effects referred to ergot-derived drugs [8,9], whereas myopathic weakness of the chest wall seems unlikely (figure 1) [5,23]. No correlation with tremor, bradykinesia or rigidity has been described, while a probable relationship with dorsal spine arthrosis has been postulated [9]. Moreover, some authors reported a correlation with motor features, such as gait freezing and falls, in moderate to severe PD, and a correlation with camptocormia and kyphoscoliosis in more advanced stages [24]. Others reported conflicting data showing a possible correlation with rigidity and bradykinesia, but not with tremor [25]. These data may be, at least in part, in line with other studies. DE PANDIS et al. [12] identified a restrictive pattern in a cohort of advanced parkinsonian patients (mean Hoehn and Yahr stage 4.08) worsened in the "off" condition, probably due to chest wall muscle rigidity and reduction of chest wall range of movements. Similar results were obtained by SATHYAPRABHA et al. [14], showing a high percentage of PD patients with a restrictive pattern worsened during the "off" condition and more pronounced in later stages (Hoehn and Yahr 2). Even in this case, reduction in rigidity may result in improved muscle coordination and facilitation of chest wall movements. A really small number of restrictive pattern patients was found in other studies, but these patterns have not been further characterised (table 1) [15,16]. Central ventilatory control As aforementioned, the deposition of α-synuclein in PD starts in the caudal portion of the brainstem, and structures involved in the respiratory control, as those responsible for coordinating ventilation and detecting peripheral hypoxaemia or hypercapnia, may be directly affected by neurodegeneration at an early stage [18,[26][27][28][29]. These data seem to agree with what has been reported in literature. ONODERA et al. [10] described a reduced central chemosensitivity to hypoxia even in the early stages, without abnormalities in the response to hypercapnia. Other authors, as opposed, found an abnormal ventilatory response to carbon dioxide in patients with normal lung volumes and flows, especially in mild to moderate PD, but not for mild hypoxia (table 1) [15]. Despite these little discrepancies among different studies, according to the Braak hypothesis, the early involvement of the brainstem in PD may lead to dysfunction of the medullary respiratory centres and consequently of the central drive of breathing [30]. Moreover, other mechanisms have been reported to explain the impaired central control (figure 1). Neurodegeneration involves not only dopaminergic neurons, but also astrocytes: losing astrocytes in key regions involved in breathing activity will produce ATP deficiency, which in turn will fail to stimulate breathing [31][32][33]. In addition, an indirect mechanism underlying central ventilator control has been recently proposed in animals, basing on the demonstration of a di-synaptic excitatory pathway from the dopaminergic neurons of the SN to the retrotrapezoid nucleus (RTN), passing through the periaqueductal grey (PAG) [34]. PAG is engaged in a number of physiological functions, comprising nociception, arterial pressure and heart rate, while RTN is critically involved in the chemosensory control of breathing [31,[35][36][37]. 
PAG also coordinates motor output, including respiratory muscles, based on the integration of input arising from limbic, pre-frontal and anterior cingulate cortex regions [38]. Overall, degeneration of SN dopaminergic neurons, as occurs in PD, may lead to a progressive loss of functions along the SN-PAG-RTN pathway. These findings further emphasise the key role of a central breathing control failure in PD respiratory dysfunction, in addition to the loss of dopaminergic stimulation due to the direct basal ganglia involvement. Central breathing dysfunction may explain at least in part the abnormal perception of dyspnoea (POD), as reported in some papers. Many patients with spirometric abnormalities may be asymptomatic, and blunted POD may contribute. The reduced response to hypoxia described by ONODERA et al. [10] may play an important role. This report, however, is not consistent with a more recent study, which demonstrated an increased POD in PD compared to controls [13]. Physiologically, dyspnoea is perceived as respiratory muscle effort, and the degree of perception is linked to the strength of respiratory muscles [39]. The patients examined by WEINER et al. [40] had an abnormal pulmonary test function (restrictive pattern and inspiratory muscle endurance) and a more severe disease compared to the cohort of ONODERA et al. [10] so in this case mechanical factors may have contributed to the increased POD. Mechanical factors, independently or in addition to a central dysfunction, may also explain the exacerbated POD in patients experiencing respiratory dyskinesias [40]. Apnoea in Parkinson's disease The presence of apnoea syndrome has been studied in PD as well. Apnoea syndrome is probably related to a central dysfunction of the brainstem respiratory centres and/or a peripheral airways involvement. However, different studies have produced conflicting results, probably according to the different samples of patients and methods used. Apnoea occurring during sleep could be classified as central (if the airflow drops down due to a failure in activation of respiratory muscles), obstructive (if the occlusion of the upper airways stops the airflow despite respiratory muscle effort) and mixed [41]; nonetheless, these patterns have not been studied systematically in PD and a clear stratification is not available in the current literature. Most studies focused on obstructive apnoea rather than central. Conflicting results have been reported about the prevalence of obstructive apnoea syndrome in PD patients; MARIA et al. [42] identified a higher prevalence of obstructive apnoea in PD populations, whereas others found less occurrence of obstructive apnoea compared to controls [43,44], or even no apnoea or sleep abnormalities [45]. DE COCK et al. [44] tried to explain this phenomenon, postulating a possible protective contribution due to rapid eye movement (REM) sleep behaviour disorder (RBD), in which the physiological muscle atonia during REM sleep is absent and may prevent upper airway closure. Surprisingly, the authors found that patients with abnormal persistence of chin muscle tone still presented obstructive apnoea during REM sleep, and similar findings were described by HUANG et al. [46]. 
It may be reasonable that there is a correlation between motor disability and apnoea, as suggested in some studies [42,44], but the role of PD medications is not clear, and it has to be specified that in these studies, motor disability and apnoea were assessed in the "on" state, so the real contribution of dopaminergic drugs could not be clearly assessed. Continuous positive airway pressure (CPAP) seems to be effective in reducing events, improving oxygen saturation, and deepening sleep in patients with PD and obstructive sleep apnoea [47,48]. The issue of pneumonia in Parkinson's disease Aspiration pneumonia represents a dramatic complication that may explain the acute/subacute onset of fever and respiratory insufficiency in a PD patient. Physiologically, swallowing requires adequate coordination between pharyngeal and respiratory musculature, but this mechanism is frequently impaired in PD [49]. Dysphagia is typical in the advanced stages of disease, on average 10-11 years after motor symptoms onset [50], when bradykinesia, rigidity and dyskinesias are predominant; however, a cough dysfunction in more than 50% of asymptomatic PD patients has been demonstrated [51] and this may also contribute to silent aspiration and increased risk of pneumonia [52]. Moreover, in these patients the cough mechanism becomes weak because of cough reflex impairment and chest wall rigidity, further increasing the risk of aspiration [53]. A blunted urge to cough (UTC), a respiratory sensation that precedes the cough reflex, is also present and correlates with the severity of dysphagia and consequently, with an increased risk of aspiration [54]. The key for adequate management of aspiration pneumonia is prevention. A soft mechanical diet is usually the first step, followed as dysphagia progresses, by liquid thickening. A chin-down posture while swallowing may be helpful, and sometimes a speech or swallowing therapist may be required. The beneficial role of dopaminergic stimulation is controversial; despite the importance of dopaminergic basal ganglia circuits in the swallowing process [55], conflicting results have been reported by different studies [56,57]. Finally, for patients with marked sialorrhea, who may have an increased risk of aspiration, treatment with anticholinergics drugs or botulinum injections in the salivary glands may be indicated. Effects of dopaminergic therapy: risk or protection? Studies have provided controversial results about the therapeutic effects of dopaminergic stimulation, and the role of drugs commonly used in the treatment of PD is still debated, strictly depending both on disease stage and administration modality. Most papers strengthen the role of anti-Parkinsonian drugs as a protective factor against the development of respiratory failure. Levodopa increases inspiratory muscle function in anaesthetised dogs [58], and dopamine improves diaphragm function during acute respiratory failure in patients with COPD [59]. In early stages, the levodopa equivalent daily dose does not correlate with pulmonary functional testing; as the disease progresses, anti-Parkinsonian medications may be responsible for the maintenance of the maximal inspiratory mouth pressure and sniff nasal inspiratory pressure [16]. Accordingly, bedtime controlled-release levodopa (Sinemet CR) is associated with less severe obstructive sleep apnoea in PD [60]. 
Because dopamine is not known to increase muscle strength, it may ameliorate respiratory function by improving muscle coordination by a central activity [16]. Among the side effects of anti-Parkinsonian drugs, we have to consider pleura-pulmonary fibrosis induced by dopamine agonists like bromocriptine [61], and levodopa-induced diaphragmatic dyskinesias, which may present as marked dyspnoea [40,62,63]. The presence of other dyskinesias more commonly seen in PD, such as trunk, face or limb abnormal involuntary movements, should alert the physician to the presence of diaphragmatic dyskinesias in patients complaining of breath shortness. Many authors have investigated the effect of dopaminergic therapy on aforementioned respiratory dysfunction, especially on obstructive and restrictive patterns (table 2). Indirect evidence of the beneficial role of dopaminergic therapy on the UAO has been supported by the acute respiratory failure that may occur after these medications are suspended [65,66], or by the response of UAO to intravenous apomorphine [67,68]. Further evidence about the beneficial effect of dopaminergic therapy on UAO was provided by HERER et al. [11]. In contrast, other authors strengthened a key role of dopaminergic drugs in reversing, at least partially, restrictive changes [12,14,25,64]. However, a recent meta-analysis of four major clinical trials showed no clear effects of dopaminergic stimulations on the obstructive pattern [11,12,14,64], proving some efficacy on restrictive pattern parameters instead [69]. In this view, there are only few data about the effects of dopaminergic agents on brainstem ventilatory control. Interestingly, WEINER et al. [13] demonstrated an attenuated POD after levodopa intake; given that the respiratory muscle strength was not significantly different in the "on" compared to the "off" condition, the authors speculated about a possible central effect of levodopa contributing to the decrease of POD. These discrepancies may be explained at least in part by the different study design and the different characteristics of the cohort such as number of patients, PD duration and severity; differences in the dosage of levodopa administered in the "off" stage and in the duration of pharmacological washout may also play a role. Only one of those studies is considered to have specified a different washout timing for levodopa and dopamine agonist [25], and only one has assessed spirometry after a standardised weight-based levodopa intake [11]. Finally, a growing body of evidence suggests that both a sudden withdrawal and a significant reduction of anti-Parkinsonian drugs are risk factors for the so-called neuroleptic malignant-like syndrome (NMLS), a rare but severe clinical condition, resembling the well-known neuroleptic malignant syndrome, characterised by hyperthermia, impaired consciousness, autonomic dysfunction (e.g. respiratory failure) and elevated serum creatine kinase levels. Independent risk factors for NMLS are the use of cholinesterase inhibitors, a rapid switchover from bromocriptine to pergolide and enteral nutrition, as high protein intake critically impairs the absorption of levodopa [70][71][72]. Finally, only few data have been reported concerning the relationship between the enteral infusion of levodopa and the development of respiratory dysfunctions, except for sporadic cases of pneumonia and pulmonary embolism [73]. 
Correlation between pneumological drugs and PD In this scenario, the effects of drugs commonly used by the pneumologist should also be considered. For instance, some studies recently reviewed by HOPFNER et al. [74] postulated the possible correlation between β-adrenoreceptors (both agonists and antagonists) and PD [75]. Anticholinergic drugs are frequently used for obstructive pulmonary disorders and systemic anticholinergics may play a part in PD [76]. Acetylcholine has a key role in modulating dopaminergic activity in the basal ganglia, and its inhibition may increase central dopaminergic tone [77]. Anticholinergic bronchodilators might have central effects, as reported by some authors [78]. An effect on motor disturbances in PD may be reasonable, even if to our knowledge this has not been investigated in the current literature. However, it should be considered that anticholinergics may be associated with cognitive impairment and delirium [78], and these adverse effects may be even more common in the advanced stage of PD, when dementia is a very common feature. Deep brain stimulation and respiratory failure DBS is an effective strategy for the treatment of advanced PD, thus improving motor fluctuations and bradykinesia. Nonetheless, the classical target of the subthalamic nucleus (STN)-DBS reserves stimulation-induced side effects in the long-term period, comprising gait and speech impairment, as well as a progressively worsening of tremor. In this scenario, only few papers have specifically investigated respiratory failure. In particular, STN-DBS may increase the risk of a fixed epiglottis and modify velopharyngeal control [79]; these effects seem to strictly depend on frequency parameters, with low-frequency stimulation leading to a clinical improvement, whereas higher frequencies are associated with a detrimental effect on velopharyngeal control [80]. In support of this view, HAMMER et al. [81] have recently found that in STN-DBS patients, respiratory changes do not correlate with limb function, but speech-related respiratory and laryngeal control may benefit when the stimulation is delivered at low frequencies (145 Hz) and shorter pulse width (60 µs). In addition to stimulation frequency, other factors may account for these correlations, including variability in localisation of the active DBS electrodes, individual variability in somatotopic organisation of STN, stimulation fields and potential current spread beyond the STN target (e.g. internal capsule). Data on the relationship between respiratory changes and novel DBS targets, such as the pedunculopontine nucleus (PPN), have not been extensively reported so far. PPN has been only recently suggested as a new target for DBS in PD, given its key role in gait control and posture maintenance [82]. PPN surgery may modify central ventilation control, as PPN directly changes sympathetic activity [83]; moreover, PPN could indirectly modulate both breathing regulation, through cholinergic projections to RTN, and expiratory output arising from the parafacial respiratory group in the ventrolateral medulla [84]. 
A recent study has confirmed beneficial effects of low-frequency PPN-DBS on the upper airways function, also showing a significant correlation between the increase of oscillatory α-band activity and forced respiratory manoeuvres [85]; this effect was particularly marked when the rostral PPN was stimulated, as a part of the "mesencephalic locomotor region" (MLR); in animal studies, the MLR has been shown to project directly to a medullary respiratory generator and plays a key role in changes in respiration linked to motion [86]. Respiratory dysfunction in Parkinsonisms As described above, the presence of respiratory dysfunction in PD may be explained, at least in part, by dysregulation in basal ganglia and in other brainstem structures that control the central respiratory drive or peripheral airway muscles. In this scenario, it is reasonable to assume the presence of some kind of dysfunction in other forms of Parkinsonism, either secondary or primary degenerative, in which these structures may be involved [22,87]. Besides this, to the best of our knowledge, systematic studies on degenerative parkinsonians are still lacking, with only few data currently available about MSA and DLB, two degenerative disorders belonging to α-synucleinopathies along with PD. Respiratory dysfunction is considered one of the "red flags" that may help to distinguish PD from MSA [88], and includes nocturnal stridor and obstructive sleep apnoea [89]. In MSA, deposition of synuclein preferentially involves the caudal brainstem and the ventral medullary region, a key area for the vocal cord control and central respiratory drive [90]. Respiratory dysfunction, including sleep disordered breathing as inspiratory stridor, represents a typical feature of MSA and probably reflects degeneration of brainstem respiratory nuclei involved in respiratory rhythmogenesis and chemosensitivity, including the pre-Bötzinger complex, nucleus raphe pallidus and nucleus raphe obscurus; the same nuclei are also impaired in DLB, although less severely than in MSA [91]. In addition to the reduced ventilatory response to hypercapnia, and in line with PD, respiratory dysfunctions in DLB also comprise both impaired cough reflex and UTC responses [92][93][94]. In particular, UTC seems to be controlled by the insula, a region primarily and critically involved during DLB progression [94]. Inspiratory stridor is probably related to vocal cord paralysis or vocal cord and laryngeal dystonia, leading to glottis closure [95,96], and the presence of nocturnal stridor is classically considered an important predictor of sudden death in these patients [97]. No data about the role of dopaminergic therapy or DBS are available in MSA, and some authors proposed an approach with CPAP or botulinum toxin injection into vocal cords [98,99]. Obstructive sleep apnoea has been related to pharyngeal narrowing due to brainstem neurons degeneration [100], and similarly to other forms of obstructive apnoea CPAP is the preferential treatment. Among tauopathies, respiratory dysfunctions have been investigated in PSP, where a critical impairment of voluntary respiratory control has been reported, while automatic and limbic control seem to be preserved; accordingly, nocturnal respiratory abnormalities were not found even in the most severely disabled patients [101,102]; in particular, the conflict between volitional and automatic breathing in PSP may explain the "respiratory ataxia" sometimes described in these patients [102]. 
Practical recommendations for the clinician Neurological and pneumological dysfunction are strictly connected in PD patients. Pneumologists should be aware that breathing problems in this class of patients may be a direct consequence of disease progression and/or of the dopaminergic stimulation, as already mentioned for dyspnoea due to levodopa-induced diaphragmatic dyskinesias. Moreover, pneumologists should consider the spirometric abnormalities that could be found even in the early stages of the disease, and the potential therapeutic role on the airways function exercised by dopaminergic stimulation more than that seen with conventional inhaled drugs. Neurologists, in the same way, should always consider the role of pneumological evaluation in the clinical history of a PD patient and focus on respiratory function as a potential therapeutic target to improve quality of life in a patient complaining of breathing disturbances. Finally, the physician should remember also the potential benefit of pulmonary rehabilitation on functional respiratory tests and exercise tolerance even in the early stages [103], and it is reasonable to consider a respiratory training program in parallel with dopaminergic therapy in patients who report respiratory symptoms. Conclusions PD is frequently associated with respiratory disturbances, even in pre-motor stages and these should be considered as a part of the disease itself rather than a different problem. In this view, the presence of breathing symptoms should alert the physician of a PD not well controlled or in progression. Even if the role of anti-Parkinsonian drugs is still controversial, it should be considered that they may have a potential role in ameliorating pulmonary function as well as the possible negative contribution to muscle incoordination and worsening of shortness of breath in patients experiencing dyskinesias. DBS may be considered for PD, and stimulation of the STN does not significantly impair respiratory drive, when delivered at low frequencies and short pulse width, even if no data are currently available on novel DBS targets and the development of respiratory alterations. In the near future, new targets such as the PPN may induce a better control of axial motor symptoms, potentially avoiding respiratory changes at the same time. Finally, the presence of respiratory symptoms should be considered in patients with other form of Parkinsonism, even if more systematic studied are needed to investigate this topic, as well as needing more proof of the exact impact of a dopaminergic beneficial role in respiratory dysfunctions.
2020-10-10T05:03:58.814Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "ee6fc6ce5b257ec531189442ea1719bcb65bf493", "oa_license": "CCBYNC", "oa_url": "https://openres.ersjournals.com/content/erjor/6/4/00165-2020.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ee6fc6ce5b257ec531189442ea1719bcb65bf493", "s2fieldsofstudy": [ "Medicine", "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
225291460
pes2o/s2orc
v3-fos-license
Underwater square-root cubature attitude estimator by use of quaternion-vector switching and geomagnetic field tensor This paper presents a kind of attitude estimation algorithm based on quaternion-vector switching and square-root cubature Kalman filter for autonomous underwater vehicle (AUV). The filter formulation is based on geomagnetic field tensor measurement dependent on the attitude and a gyro-based model for attitude propagation. In this algorithm, switching between the quaternion and the three-component vector is done by a couple of the mathematical transformations. Quaternion is chosen as the state variable of attitude in the kinematics equation to time update, while the mean value and covariance of the quaternion are computed by the three-component vector to avoid the normalization constraint of quaternion. The square-root forms enjoy a continuous and improved numerical stability because all the resulting covariance matrices are guaranteed to stay positively semidefinite. The entire square-root cubature attitude estimation algorithm with quaternion-vector switching for the nonlinear equality constraint of quaternion is given. The numerical simulation of simultaneous swing motions in the three directions is performed to compare with the three kinds of filters and the results indicate that the proposed filter provides lower attitude estimation errors than the other two kinds of filters and a good convergence rate. Introduction The attitude estimator is to determine the orientation of a body-fixed coordinate framework with respect to a refer- ence one. Be it ground, marine or aerial, controlling an autonomous vehicle usually needs some knowledge on its attitude angles [1,2]. The attitude estimation system is one of the satellite sub-systems which plays an important role in stabilizing the satellite. Helicopter operations on moving helidecks require monitoring the roll, pitch, inclination and heave motions in the helideck's center, and these measurements are sent to the helicopter operator prior to take-off from the shore station. Seabed mapping applications using multibeam echo sounders with wide swathe require accurate attitude measurements to ensure the minimum depth error in the outer beams. Motion damping systems or ride control systems need the foil control system receiving the roll and pitch angles and angular rate measurements. On the other side, light and moderate cost inertial measurement units are appropriate for lightweight unmanned aerial vehicles [3]. Attitude estimation is able to be solved by the deterministic approach or by the recursive algorithm which combines the dynamics and/or kinematic model with the sensor [4]. Deterministic approaches such as the three-axis attitude determination (TRIAD) algorithm, the quaternion estimator (QUEST), and the fast optimal attitude matrix (FOAM), require measurements of at least two vectors to determine the attitude matrix [5 -8]. An advantage of both QUEST and FOAM is that the attitude can be estimated by the measurement of more than two vectors, and this is accomplished by minimizing a quadratic loss function. However, all deterministic methods fail when only one measurement vector is available, for example, only magnetometer data. The recursive algorithm utilizes a dynamic and/or kinematic model and subsequently estimates the attitude using the measurement of a single reference vector. 
Sun sensors, magnetometers, star trackers, horizon scanners providing vector observation are used as attitude reference sensors, and usually consist of noisy vector measurement at a low frequency. Inertial sensors such as the gyroscope can improve the measurement which provides the angular rate relative to the inertial framework. Three-axis gyroscopes with high-precision are the most applicable to attitude determination especially in the case of underwater passive navigation and control. However, the accuracy of the sensor is limited by noise and bias error and cannot work alone during unbounded error in attitude estimation over time. A variety of representations are used to describe the attitude including direction cosine matrix (DCM), Euler angle, rotation vector and quaternion. The quaternion is widely used to represent the attitude because of its minimal nonsingular global attitude and bilinear kinematics equation. However, the quaternion has to obey a normalization constraint addressed in attitude filtering algorithms. By far, different unconstrained and constrained approaches have been used to overcome this difficulty [9]. The unconstrained approach is a natural way of maintaining the normalization constraint, which uses an unconstrained three-component vector to represent the local attitude error, and adopts the quaternion for propagation and update. The multiplicative quaternion extended Kalman filter (MQEKF) based on vector observation reconstructs state variables to reduce the system dimension and to avoid the normalized constraint of quaternion [10]. The unscented quaternion estimator (USQUE) only uses a three-component representation for the attitude errors and quaternion multiplication to perform the updates, and the singularity is never encountered in practice [11]. The local attitude error vector in the MQEKF is the attitude error angles in the body framework, whereas the generalized Rodrigues parameters (GRPs) are chosen to represent the local attitude error in the USQUE. One method of using the norm-constrained Kalman filter to seek solution of attitude quaternion is to use pseudomeasurements [12]. The algorithm is to introduce a perfect measurement consisting of the constraint equation into the estimation solution, however, a perfect measurement results in a singular estimation for processing noise-free measurements in a Kalman filter. Another approach is to project the Kalman solution into the desired and constrain subspace. A performance index can be defined to find the optimal projection for the linear state equality constraint problem [13], and projection of the Kalman solution can be done at any time, not only during the update. A twostep constrain application algorithm for handling nonlinear equality constraints about quaternion normalization is constructed in the Kalman filtering framework [14]. The first step applies the projection method to the unconstrained estimate. As a result, the probability distribution of the estimate is constrained to lie along the constraint surface. In the second step, the distribution is translated so that its mean value lies on the constraint surface. A new estimator structure is derived by minimizing a constraint cost function, where the constrained estimate is equivalent to the brute force normalization of the unconstrained estimate [15]. 
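As a hedged illustration of the simplest constrained-update idea mentioned above, namely normalizing an unconstrained quaternion estimate back onto the unit sphere, the following Python sketch projects an estimate and, to first order, its covariance. It is a generic sketch of the brute-force normalization idea, not the specific estimator of [15]; the numerical estimate and covariance are arbitrary placeholders.

import numpy as np

def normalize_quaternion_estimate(q_hat, P):
    """Project an unconstrained quaternion estimate onto the unit-norm constraint.

    The covariance is mapped with the Jacobian of q -> q/||q|| (first-order
    approximation); this is only an illustration of brute-force normalization.
    """
    q_hat = np.asarray(q_hat, float)
    n = np.linalg.norm(q_hat)
    q_c = q_hat / n
    J = (np.eye(4) - np.outer(q_c, q_c)) / n      # Jacobian of the normalization map
    P_c = J @ P @ J.T
    return q_c, P_c

q_hat = np.array([0.98, 0.05, -0.03, 0.02])        # slightly off the unit sphere
P = 1e-4 * np.eye(4)
q_c, P_c = normalize_quaternion_estimate(q_hat, P)
print(q_c, np.linalg.norm(q_c))                    # the projected estimate has unit norm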
The norm constraint of the quaternion particle filter associated with vector observation is naturally maintained, and this filter is augmented with a maximum likelihood estimator of the gyro biases, which is implemented via the use of a genetic algorithm [16]. The re-parameterization approach simply re-parameterizes the system so that the equality constraint is not required [17]. The cubature Kalman filter (CKF) is rooted in the third-degree spherical-radial cubature rule for numerically computing Gaussian-weighted integrals and the weights of the cubature point are always positive, which makes its numerical stability and accuracy better than the unscented Kalman filter (UKF) with the probability of negative weights in high dimension system. The negative weight of the sigma-points for UKF introduces great truncation error into the moment integrals and deteriorates the filtering precision. A few quaternion-based attitude estimation algorithms in the cubature Kalman filtering framework were studied. A kind of quaternion square-root CKF estimates the quaternion directly in vector space and uses the two-step projection theory to maintain the quaternion normalization constraint along the estimation process [18]. A robust strong tracking nonlinear filtering problem is the case of model uncertainties including the model mismatch, unknown disturbance and status mutation in the spacecraft attitude estimation system with the quaternion constraint. Two multiple fading factor matrices are employed to regulate the predication error covariance matrix to guarantee its symmetry, and the quaternion constraint is maintained by utilizing the gain correction method [19]. The autonomous underwater vehicle (AUV) needs to float on the sea surface to receive signals from the global positioning system (GPS) or the star sensor, and will lose its good concealment performance. The Earth's gravity field measured by the accelerometer is very sensitive to the motion of AUV. Magnetic compass is a kind of attitude sensor with long history, however, small error in calculating the tilt angle due to magnetic disturbances leads to large inaccuracy of the head angle. Measurement of the field gradient or tensor delivers more geological details than a scalar measurement of a single component or of the scalar total magnetic intensity. In addition, the measurement is relatively insensitive to orientation noise and diurnal variations. The magnetic field tensor including five independent elements reserves the advantage of removing the variation with day of the Earth's magnetic field and is not sensitive to the direction of the field [20]. The Earth's normal magnetic field is expressed by the Gauss spherical harmonic model, and generally its tensor is only several nT/km. The magnetic storm originating from a sunspot activity would be several hundreds of nT. Nevertheless, because the source of the magnetic storm is very far apart from the Earth surface, the field tensor produced by the magnetic storm is very weak in a short range compared with that of the Earth's magnetic anomaly field from underground rock, mineral, buried ferromagnetic targets or geology structure [21]. Consequently, the Earth's magnetic field tensor with a short baseline of about 1 m can be regarded as that of the Earth's magnetic anomaly field. 
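For readers unfamiliar with the cubature rule discussed earlier in this section, the short Python sketch below generates the 2m cubature points and the equal, strictly positive weights of the third-degree spherical-radial rule for a Gaussian with mean x and covariance P. This is the standard textbook construction, written here only as an illustration; the numerical mean and covariance are arbitrary.

import numpy as np

def cubature_points(x, P):
    """Third-degree spherical-radial cubature points and weights for N(x, P)."""
    x = np.asarray(x, float)
    m = x.size
    S = np.linalg.cholesky(P)                               # square-root factor of the covariance
    xi = np.sqrt(m) * np.hstack([np.eye(m), -np.eye(m)])    # the generator set [1], scaled by sqrt(m)
    pts = x[:, None] + S @ xi                               # the 2m cubature points
    weights = np.full(2 * m, 1.0 / (2 * m))                 # all weights positive and equal
    return pts, weights

x = np.zeros(3)
P = np.diag([0.1, 0.2, 0.3])
pts, w = cubature_points(x, P)
print(pts.shape, w.sum())                                   # (3, 6) and 1.0

Because every weight equals 1/(2m), the rule never produces the negative weights that can appear in high-dimensional unscented transforms, which is the numerical-stability argument made above.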
The motivation of this paper is to present a kind of underwater square-root cubature quaternion attitude estimation algorithm through switching between the quaternion and the three-component vector, where the squareroot CKF as the attitude filtering framework is employed in the observation of the geomagnetic field tensor with nonlinearity of quaternion. Switching between the quaternion and the three-component vector through a pair of transform and anti-transform is adopted to meet the quaternion constraint condition. The rest of this paper is organized as follows. The geomagnetic field tensor and its transformation relationship between the navigation coordinate system and the body coordinate system are given in Section 2. The attitude kinematics and the measurement model using five independent components of the field tensor are described in Section 3. Section 4 is devoted to the quaternion attitude estimator in the filtering framework of CKF using both quaternionvector switching and iterative weighted-mean, and gives its form of square-root. Section 5 provides the results of the numerical simulation and presents the comparisons of the two proposed filters and CKF. Section 6 summarizes the results and the conclusions. Geomagnetic field tensor B x , B y and B z represent respectively the three components of the geomagnetic field vector in the x, y and z directions, and the geomagnetic field tensor is the spatial derivatives of three orthogonal components of the geomagnetic field vector. Its expression is where T B is the matrix of the geomagnetic field tensor with nine components B ij (i, j = x, y, z). The magnetic full-tensor gradiometer can be generally achieved in two ways. One is the superconducting magnetic full-tensor gradiometer, such as the airborne magnetic tensor gradiometer in Germany [22], the superconducting quantum interference device (SQUID) magnetic tensor gradiometer-GETMAG [23] and the magnetic tensor gradiometer with a pyramidal structure developed lately in Australia [24]. The other is the fluxgate magnetic tensor gradiometer [25]. The construction of a magnetic full-tensor gradiometer is composed of ten single-axis with a planar cross structure [26]. The vehicle-carried northeast-down (NED) system is associated with the AUV and represented by o n x n y n z n . Its origin denoted by o n is located at the center of gravity of the AUV. The x-axis denoted by x n points toward the geodetic north. The y-axis denoted by y n points toward the geodetic east. The z-axis denoted by z n points downward along the ellipsoid normal. The body coordinate system is vehicle-carried and is directly defined on the body of the AUV. Its origin denoted by o b is also located at its center of gravity. The x-axis denoted by x b points forward, lying in the symmetric plane of the AUV. The y-axis denoted by y b is starboard and the z-axis denoted by z b points downward to comply with the right-hand rule. The transformation relationship of the geomagnetic field tensor between the NED frame coordinate and the body frame coordinate can be expressed [27] by where T n B and T b B are respectively the geomagnetic field tensors in NED and body frames, the symbol ' ' is the operation of matrix transpose, and C b n is the DCM from the NED frame to the body frame. The optimal quaternion can be estimated by use of Newton Down-hill to optimize the object function about quaternion according to (2) [27]. 
However, it is a deterministic method and not able to compensate gyro, and it wastes much calculation time due to the use of the optimization algorithm. The reference tensor T n B is interpolated by geomagnetic field tensor surveying or calculated from the geomagnetic anomaly field data, and it is pre-stored into the navigation computer as a reference map of the tensor. The location provided by other navigation systems is used to withdraw the local tensor from the reference map. The tensor T b B is the real-time measurement using magnetic full-tensor gradiometer strapped to the vehicle. System model of quaternion attitude estimator 3.1 Attitude kinematics with gyro model Various parameters describe the attitude of a rigid body such as the Euler angles, the quaternion parameters, the Gibbs vector and the DCM. The quaternion parameters are the most desired and widely utilized means of attitude representation due to linear propagation equations and their non-singular characteristics for any arbitrary rotation angle. Compared to the quaternion, the attitude representation of the three Euler angles leads to the appearance of singularity in the motion modelling due to the trigonometric function. In the representation of the Gibbs vector, the rotation itself is represented as a three-dimensional vector, which is parallel to the axis of rotation. The transform of its three components is covariant on change of coordinates, and however, 180 • rotations cannot be represented in the Gibbs vector. The DCM contains no singularities and is frequently used by the aeronautics community to avoid the possibility of gimbal lock. However, there are nine components in the DCM, which cannot be independent. As the smallest attitude representation with global non-singularity at the cost of normalization constraint, the 4-dimensional quaternion q with a scalar and a 3-dimensional vector part is a hypercomplex number defined as where q 13 = q 1 q 2 q 3 is the vector part, e = e 1 e 2 e 3 ∈ R 3 is an eigen-axis vector and α ∈ R 1 is the rotation angle about the Euler axis. It is well known that the quaternion parameters propagate in the time domain according tȯ where where ω BI = ω BI,x ω BI,y ω BI,z denotes the rotation angular velocity vector of the body against the inertial frame. The unknown true angular velocity vector ω BI is usually measured or estimated in the sensor frame. A common sensor measuring the angular velocity is rate-integrating gyro. Both the gyro frame and the body frame are combined each other to simplify the model of rate gyro, which is given by whereω BI (t) is the measured angular rate with respect to the inertial frame, β(t) is the gyro bias vector, η v (t) and η u (t) are independent zero-mean Gaussian white-noise processes with variances of σ 2 v and σ 2 u , respectively. System state equation For the attitude estimation problem using both quaternion representation and measurements of gyro, the state vector is defined as x = [q , β ] . From (4) and (6), the system state equation can be rewritten as Measurement equation of geomagnetic field tensor The DCM from the NED frame to the body frame with the use of quaternion is given by The divergence and rotation of the geomagnetic field are all zero, so the tensor T n B is a symmetric square matrix with zero trace. The trace of matrix is similarity-invariant, and it means that the tensor T b B is also a symmetric square matrix with zero trace according to (2). 
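To illustrate how the attitude enters the tensor measurement model, the following Python sketch builds the direction cosine matrix from a scalar-first unit quaternion (the standard form of the DCM in equation (8)) and rotates a reference tensor T_B^n into the body frame according to equation (2), then checks that symmetry and the zero trace are preserved. The numerical tensor and quaternion are arbitrary illustrative values, not survey data from this work.

import numpy as np

def dcm_from_quaternion(q):
    """Direction cosine matrix C_n^b for a unit quaternion q = [q0, q1, q2, q3]."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 + q0*q3),             2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),             q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2),             2*(q2*q3 - q0*q1),             q0**2 - q1**2 - q2**2 + q3**2],
    ])

def tensor_body_frame(T_n, q):
    """Equation (2): T_B^b = C_n^b  T_B^n  (C_n^b)'."""
    C = dcm_from_quaternion(q)
    return C @ T_n @ C.T

# arbitrary symmetric, trace-free reference tensor (nT/m), standing in for map data
T_n = np.array([[ 1.2,  0.4, -0.3],
                [ 0.4, -0.7,  0.6],
                [-0.3,  0.6, -0.5]])
q = np.array([0.9, 0.2, -0.3, 0.1])
T_b = tensor_body_frame(T_n, q)
print(np.allclose(T_b, T_b.T), np.isclose(np.trace(T_b), 0.0))   # symmetry and zero trace preserved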
There are only five independent components for the tensor T b B , which are chosen as the observation vector related to the attitude. Substituting (8) to (2), the measurement equation of the quaternion attitude estimator based on the geomagnetic field tensor is given as follows: where the observation matrix of the tensor is given by Discrete equations of filtering model The gyro bias vector is a constant, i.e, β k , during the very small sampling interval T s of gyro. It is worthy of noting the third term in the right-hand side of the first equation for (7), and the quaternion q in that term can be approximated by q k since it mainly affects the strength of the random noise [28]. The system state equation is discretized as where t k Ω (ω BI )dt, and ζ k is given by where Discrete-time gyro measurements can be generated according to the following equations [29]: where N v and N u are zero-mean Gaussian white-noise processes with covariance given by the identity matrix. From (10), the discrete-time quaternion attitude measurement model for the geomagnetic field tensor is given by where z b B,k and z n B,k are the observation-vector pair acquired at time t k in the body and NED coordinates, respectively, K m (q k ) and v k are the measurement matrix K m and noise v at time t k , respectively. The filtering algorithm The attitude kinematics equation about quaternion has a linear form and discrete analytical solutions. The CKF is suitable to estimate the attitude using quaternion representation with minimal computational effort for dimensionality due to the nonlinearity of the tensor measurement model [30]. As a four-dimensional vector to describe three dimensions, the independence among four components is denoted by the normalization constraint of the quaternion, i.e, q q = 1. In the time-update of the CKF-based quaternion attitude estimator, the unsatisfied normalization constraint in computing the weighted-mean value of quaternion produces the extra attitude error. Square-root CKF as the filtering is applied to the quaternion attitude estimator based on the geomagnetic field tensor, where switching between the quaternion and the three-component vector is employed by a pair of transforms and an iterative algorithm is used to calculate the weighted-mean value of quaternion. A pair of transforms between the quaternion and the three-component vector are defined as (20) and (21) in [31] as where q is a quaternion, and e u is a unit vector representing the direction of rotation axis. The iterative steps to calculate the weighted-mean valuē q of quaternion cubature points {q i , i = 1, 2, . . . , N, N = 2m} for m-dimensional state vector are given as follows: Step 1 j = 0, initialize the reference quaternion q r0 , the maximum iterative number j max , and a small threshold ε u . Step 3 If δu rj < ε u and j < j max , the iteration returns Step 2, otherwise goes to Step 4. Step 4 The iteration stops and outputsq =q j . 
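The quaternion-vector switching and the iterative weighted-mean of the quaternion cubature points described above can be sketched as follows. This is a generic implementation of the idea (map each point to a local rotation vector about the current reference, average, update, iterate), not the exact Equations (20) and (21) of [31]; the tolerances and the toy points are assumptions.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product, scalar-first."""
    a0, av = a[0], a[1:]
    b0, bv = b[0], b[1:]
    return np.concatenate(([a0*b0 - av @ bv],
                           a0*bv + b0*av + np.cross(av, bv)))

def quat_conj(q):
    return np.concatenate(([q[0]], -q[1:]))

def quat_to_rotvec(q):
    """Quaternion -> three-component rotation vector (angle * unit axis)."""
    q = q / np.linalg.norm(q)
    angle = 2.0 * np.arctan2(np.linalg.norm(q[1:]), q[0])
    if angle < 1e-12:
        return np.zeros(3)
    return angle * q[1:] / np.linalg.norm(q[1:])

def rotvec_to_quat(u):
    angle = np.linalg.norm(u)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = u / angle
    return np.concatenate(([np.cos(0.5*angle)], np.sin(0.5*angle) * axis))

def weighted_mean_quaternion(quats, weights, q_ref, max_iter=10, eps=1e-9):
    """Iteratively average quaternion cubature points by switching to local
    rotation vectors about the current reference quaternion."""
    for _ in range(max_iter):
        du = np.array([quat_to_rotvec(quat_mult(quat_conj(q_ref), q))
                       for q in quats])
        du_mean = weights @ du
        q_ref = quat_mult(q_ref, rotvec_to_quat(du_mean))
        if np.linalg.norm(du_mean) < eps:
            break
    return q_ref / np.linalg.norm(q_ref)

# toy usage: average a few points scattered about the identity rotation
pts = [rotvec_to_quat(np.deg2rad([1.0, 0.0, 0.0]) * s) for s in (-1.0, 0.0, 1.0, 2.0)]
w = np.full(4, 0.25)
print(weighted_mean_quaternion(pts, w, q_ref=np.array([1.0, 0.0, 0.0, 0.0])))
```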
After the iteration stops, the covariance of the quaternion prediction is calculated by The algorithm of the quaternion attitude estimator embedded with both the iterative calculation of the weightedmean value in the framework of square-root CKF and the observation of the geomagnetic field tensor is summarized as follows: (i) Initialization: where E[·] denotes expectation operator, chol(·) denotes a Cholesky decomposition of a matrix returning a lower triangular Cholesky factor, x − 0|0 and P − 0|0 are the estimated value and error covariance of the decrement state vector, respectively. (ii) Loop, k = 1, 2, . . . , ∞: i) Calculate the cubature points: where X − i,k|k (i = 1, 2, . . . , 2m−2) are the cubature points of the decrement state vector by the cubature rule, [1] i is used to denote the ith point from the set [1], which is the following set of points: where is the cubature point of the decrement state vector. ii) Time update equations: whereq k+1|k is the weighted-mean value of the quaternion part using the aforementioned iterative algorithm, Q k is the covariance of the process noise, tria(·) is a general matrix triangulation algorithm generating a lower triangular matrix, X * l,k+1|k (l = 1, 2, . . . , 2m) is the cubature points related to the predicted state variable by use of S k+1|k , [1] l also denotes the lth point from the set [1] like (32). Simulation results In this section, a simulation with one field tensor measurement and three angular rate measurements is used to determine the attitude of a rotating AUV, whose angular motion model expressed by three Euler angles is where ψ, θ and γ are head, pitch and roll angles of the AUV with the rotation order of z-x-y. The performance comparison between the two proposed filters and CKF is demonstrated through the simulation. The reference map of the geomagnetic field tensor is produced by six spheres and four rectangles. The longitude and latitude of the original point for the reference map are respectively 120 • and 28 • , and the size of the reference map is 2 × 2 with 256 × 256 grids. The simulation parameters of six spheres and four rectangles are listed in Table 1, where r is the radius of the sphere, S is the size of the rectangle, M is magnetization intensity, I and D are respectively magnetic inclination and magnetic declination, and P is the position in the reference map. The motion model of the AUV is the constant velocity of 10 m/s and 8 m/s in the two different directions with the perturbations from the acceleration variance of 0.05 m/s 2 and inverse correlation time constant of 0.04 h. Compared with a constant acceleration trajectory for nonmaneuver norm, the trajectory of the motion model is the important special case of the constant acceleration trajectory with zeros acceleration, and is more applicable to aircraft, ship and submarine targets because of more vehi-cle information [32]. The total simulation time is 180 s with the sampling frequency of 100 Hz. The uncertainty of the magnetic full-tensor gradiometer is 0.02 nT/m, and the standard deviations σ v and σ u for the gyro model are 0.06 • · s −1 and 0.008 • · s −2 , respectively. The initial attitude errors of head, pitch and roll are 2 • , 3 • and -1 • , respectively. The five components of the tensor map are shown in Fig. 1 as the reference map of the attitude estimator, where Fig. 1(a), Fig. 1(b), Fig. 1(c), Fig. 1(d) and Fig. 
1(e) are respectively the maps of B xx , B yy , B xy , B yz and B xz , and the black line is the moving trajectory of the AUV with the initial position of 300 m and 300 m in x and y directions relative to the map original point. The absolute errors δ ψ , δ θ and δ γ of the three Euler angles are respectively defined by To refrain from the normalized constraint of quaternion, the transformation from quaternion to the three-component vector using (20) and an iterative algorithm to calculate the weighted-mean of quaternion are applied in the socalled iterative switching quaternion cubature Kalman filter (ISQCKF). The square-root forms of ISQCKF have the added benefit of numerical stability and guaranteed positive semi-definiteness of the state variances, which is called by the iterative switching quaternion square-root cubature Kalman filter (ISQSRCKF). The three kinds of quaternion cubature attitude estimators using respectively CKF, ISQCKF and ISQSRCKF are simulated to compare the errors of the attitude determination. The absolute errors of the attitude using the three kinds of filters with the same initial error attitude and gyro measurements are shown in Fig. 2, where Fig. 2(a), Fig. 2(b) and Fig. 2(c) are the absolute error curves of head, pitch and roll angles, respectively. Fig. 2 Curves of attitude determination errors using respectively CKF, ISQCKF and ISQSRCKF The black, red and blue lines denote the attitude determination errors using CKF, ISQCKF and ISQSRCKF, respectively. From Fig. 2, we know that all of the three filters can estimate the three Euler angles of the attitude with a fast coverage rate and a good stability. The collapse times of estimating the three Euler angles using the three filters are shown in Table 2, and the three filters only run about 2.5 s to estimate precisely these Euler angles, which indicate clearly the fast convergence of filters. To demonstrate clearly the Euler angle error using the three filters, comparisons on the mean values and standard deviations (STD) of the absolute angle errors δ ψ , δ θ and δ γ using the three filters are shown in Table 3, where the data to calculate the mean values and STD are all from 50 s to the end. Both the mean value and STD using ISQCKF are lower than the related ones using CKF, and this is be-cause ISQCKF frees from the normalization constraint of quaternion through switching between the quaternion and the three-component vector compared to CKF. And moreover, both the mean value and STD using ISQSRCKF are also lower than the related ones using ISQCKF, and it is because ISQSRCKF enjoys a continuous and improved numerical stability compared to ISQCKF. Conclusions Few passive attitude estimator methods can be effectively applied to underwater navigation and control besides gyrobased attitude determination, however, integrating the angular rate measurements with noise and other inaccuracy issues causes a slow degradation in attitude knowledge over time. If the error is not compensated for or corrected, all attitude knowledge will eventually be lost. Building an attitude determination system that can compensate for attitude drift is a non-trivial problem. 
In order to improve the precision of the attitude estimator in underwater applications, a passive approach to AUV quaternion attitude determination based on the geomagnetic field tensor and quaternion-vector switching is studied in this paper; the switching respects the normalization constraint of the quaternion, and the geomagnetic field tensor measurement is insensitive to orientation noise and to diurnal variations of the geomagnetic field. The algorithm uses only the geomagnetic field tensor and the square-root CKF to estimate the quaternion and the gyro drift, and then calculates the three Euler angles from the estimated quaternion, where the estimated gyro drift is used to compensate the output of the angular rate measurement. The quaternion attitude determination results are demonstrated by numerical simulations that compare three filters: the standard CKF, ISQCKF and ISQSRCKF. The comparison shows that the attitude determination errors using ISQSRCKF are lower than those of the other two filters during the steady-state run of the three filters.
2020-09-03T09:12:40.220Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "7df44cb7ab10846f47659ab24f139c05eb57de4e", "oa_license": null, "oa_url": "https://ieeexplore.ieee.org/ielx7/5971804/9180128/09180147.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "3f981ee8732ea826cca98343988501461ee8a381", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Mathematics" ] }
221012195
pes2o/s2orc
v3-fos-license
Adding Concurrent Chemotherapy to Intensity-Modulated Radiotherapy Does Not Improve Treatment Outcomes for Stage II Nasopharyngeal Carcinoma: A Phase 2 Multicenter Clinical Trial Purpose: To explore the efficacy of concomitant chemotherapy in intensity-modulated radiotherapy (IMRT) to treat stage II nasopharyngeal carcinoma (NPC). Methods and Materials: In this randomized phase 2 study [registered with ClinicalTrials.gov (NCT01187238)], eligible patients with stage II (2010 UICC/AJCC) NPC were randomly assigned to either IMRT alone (RT group) or IMRT combined with concurrent cisplatin (40 mg/m2, weekly) (CCRT group). The primary endpoint was overall survival (OS). The second endpoints included local failure-free survival (LFFS), regional failure-free survival (RFFS), disease-free survival (DFS), distant metastasis-free survival (DMFS), and acute toxicities. Results: Between May 2010 to July 2012, 84 patients who met the criteria were randomized to the RT group (n = 43) or the CCRT group (n = 41). The median follow-up time was 75 months. The OS, LFFS, RFFS, DFS, and DMFS for the RT group and CCRT group were 100% vs. 94.0% (p = 0.25), 93.0% vs. 89.3% (p = 0.79), 97.7% vs. 95.1% (p = 0.54), 90.4% vs. 86.6% (p = 0.72), and 95.2% vs. 94.5% (p = 0.77), respectively. A total of 14 patients experienced disease failure, 7 patients in each group. The incidence of grade 2 to 4 leukopenia was higher in the CCRT group (p = 0.022). No significant differences in liver, renal, skin, or mucosal toxicity was observed between the two groups. Conclusion: For patients with stage II NPC, concomitant chemotherapy with IMRT did not improve survival or disease control but had a detrimental effect on bone marrow function. INTRODUCTION Nasopharyngeal carcinoma (NPC) has the highest incidence among head and neck cancers in Southeast Asia. Radiotherapy (RT) is the mainstay treatment modality for NPC. Concurrent chemoradiotherapy (CCRT), with or without adjuvant chemoradiotherapy, has been confirmed to have a significant survival benefit vs. RT alone for locally advanced NPC according to many prospective clinical trials and meta-analyses (1)(2)(3)(4)(5)(6). Based on these studies, the NCCN guidelines have recommended CCRT with/without adjuvant chemotherapy as the standard treatment modality for patients with stage II-IVb (before the AJCC 8th edition) NPC since 2010 (7). The benefit of concurrent chemoradiotherapy in locally advanced NPC is unquestionable. However, for stage II NPC, the role of concurrent chemotherapy remains unclear. The recommendation for concurrent chemoradiotherapy was based on only one phase 3 randomized trial published in 2011 by Chen et al. (8). In that trial, all patients were treated with twodimensional radiation technique (2-DRT). Intensity-modulated radiotherapy (IMRT), which is characterized by advantageous dose distribution and reduced normal tissue exposure, has become to the mainstay radiation technique for NPC since the late 1990s. In the last two decades, the local control (LC) and overall survival (OS) of NPC have reached unprecedented levels with the use of IMRT, especially for patients with stage I/II disease, leading to almost 100% 3 year LC and OS, respectively (9). Therefore, it is rational to query whether any additional benefit can be introduced by the use of concurrent chemotherapy in stage II NPC treated with IMRT. 
To answer this, we conducted a multicenter phase 2 trial to assess whether concurrent chemotherapy could be omitted for patients with stage II NPC without compromising the overall treatment outcomes, yet avoiding the acute treatment-related toxicities associated with chemotherapy (1, 10-12). Study Design This was a multicenter, randomized, phase 2 study. Eligible patients from three large cancer centers were registered and randomly assigned to receive either IMRT alone (IMRT group), or concurrent chemotherapy with IMRT (CCRT group). Patients were stratified according to the tumor (T) and node (N) classification using a central randomization method. The detailed study design is shown in a CONSORT flow diagram (Figure 1). Patient Eligibility Eligible patients were required to have newly pathologically proven stage II NPC according to the 2010 UICC/AJCC staging system (T2N0, T1N1, or T2N1), a Karnofsky performance status (KPS) > 70, age ranging from 18 to 70 years, adequate hematological function (leukocyte count > 4 × 10 9 /L and platelet count >100 × 10 9 /L), normal renal function [serum creatinine level ≤ 1.25 × the upper limit of normal (ULN)], and normal hepatic function [alanine aminotransferase (ALT), aspartate transaminase (AST), and bilirubin (BIL) ≤ 1.25 × ULN]. Exclusion criteria included previous receipt of chemotherapy or radiotherapy, any other cancer history within 5 years, and any severe comorbidities that contraindicated the treatment in the procedure. Before registration, all patients should receive the following workups: physical examination; endoscopy examination of the nasopharynx; magnetic resonance imaging (MRI) and computed tomography (CT) of nasopharynx and neck; chest CT; and abdominal and pelvic CT or ultrasound. This study was approved by the Ethics Committee of the Cancer Hospital, Chinese Academy of Medical Sciences, and was registered with ClinicalTrials.gov (NCT01187238). Written informed consent was obtained from all patients before enrollment in the study. Treatment All patients were treated using IMRT. Patients were immobilized using thermoplastic masks and simulated via a planning CT with 3 mm-thick slices. Intravenous contrast was strongly recommended. Target delineation was completed on the planning CT with the assistance of fused MRI images. The gross tumor volume of the nasopharynx (GTVnx) was defined as the nasopharyngeal primary lesion displayed on simulation CT and diagnostic MRI. Cervical nodes with a short axis larger than 1 cm, with central necrosis, or a cluster of nodes large than 8 mm at level II, were considered positive and were named as GTVnd. The high-risk region of tumor invasion or nodal metastasis was defined as clinical tumor volume 1 (CTV1), including the entire nasopharynx, retropharyngeal nodal region, skull base, clivus, pterygopalatine fossa, parapharyngeal space, sphenoid sinus, and the posterior third of the nasal cavity/maxillary sinuses. CTV1 also included the regions with a high risk of nodal involvement, such as the level II nodal region for N0 patients or the corresponding level plus the adjacent level of positive nodes for N1 patients. Other nodal regions, including the supraclavicular fossa, were defined as CTV2. GTVnx, GTVnd, CTV1, and CTV2 were uniformly expanded by a 3-mm margin to generate the planning target volumes PGTVnx, PTV1, and PTV2, respectively. Radiotherapy was delivered using simultaneous-integrated boost (SIB) IMRT, and all doses were prescribed to the PTVs. 
Generally, an RT dose of 69.96 Gy/2.12 Gy/33 fractions and 60.06 Gy/1.82 Gy/33 fractions were prescribed to the PGTVnx/GTVnd and PTV1, respectively. If there was a prophylactic neck volume (CTV2, prescribed dose was 50.96 Gy/1.82 Gy/28 fractions), the patients were treated using a two-phase plan. First, 28 fractions were delivered to all PTVs, and then the remaining five fractions were only delivered to PGTVnx/GTVnd and PTV1. If there were retropharyngeal lymph nodes with a diameter > 2 cm, the prescribe doses were 2.24-2.36 Gy/fraction for 33 fractions. The dose constraints for organs at risk were as follows: Maximum dose (Dmax) to 3 mm of the brain stem planning organ at risk volume (PRV): < 54 Gy; Dmax of 5 mm of the spinal cord PRV: < 45 Gy; Dmax of the optic nerve, chiasm, and temporal lobe: < 54 Gy; and the percentage of the volume receiving 30-35 Gy (V30-35) of the parotid gland was < 50%. The patients randomized to the CCRT group also received concurrent chemotherapy of weekly cisplatin at 40 mg/m 2 , which was started on the first day of IMRT. A maximum of seven cycles of chemotherapy could be administered during radiotherapy. Follow-Up and Outcomes All patients were followed up at 1 month after the completion of protocol treatment, every 3 months for the first 2 years and every 6 months for the 3rd to 5th years, and once a year thereafter. If there was suspicion of progression or toxicity, more frequent evaluations were allowed. Statistical Consideration The primary endpoint of this study was overall survival (OS), which was defined as the period of time from the start of treatment to death from any cause. Secondary endpoints included local failure-free survival (LFFS), regional failurefree survival (RFFS), progression-free survival (PFS), distant metastasis-free survival (DMFS), and treatment-related acute toxicities. The National Cancer Institute Common Toxicity Criteria of adverse event (version 4.0) was used to assess treatment-related acute toxicities (13). The SPSS 20.0 software (IBM Corp., Armonk, NY, USA) was used to analyze the data. The survival data were estimated using the Kaplan-Meier method, and the survival intervals of two groups were compared using the log-rank test. The chi-squared test was used to compare differences in acute toxicities and patient characteristics between two groups. Patient's Characteristics Between May 2010 and July 2012, a total of 90 patients from three large cancer centers were screened. Six patients withdrew from the study after providing signed informed consent. Finally, 84 patients entered this study and completed the required treatment as per protocol, with 43 in the IMRT group and 41 in the CCRT group. The patients' general characteristics are listed in Table 1. The baseline characteristics were well-balanced between the groups. The median age was 48 and 46 years old for the CCRT and IMRT groups, respectively. There was no difference in terms of T and N stage between the two groups. All patients received the RT dose as per protocol, with a median of 70 Gy for both groups. With regard to the CCRT group, a median of 6 cycles of concurrent chemotherapy were completed, including 13 patients (31.7%) receiving 7 cycles, 20 patients (48.8%) receiving 6 cycles, 5 patients (12.2%) receiving 5 cycles, and the other 3 patients receiving ≤ 3 cycles of chemotherapy. Treatment Results No patients were lost to follow-up. With a median follow-up time of 75 months, four patients died, including one from the IMRT group and three from the CCRT group. 
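The survival comparison described under Statistical Consideration (Kaplan-Meier estimation and the log-rank test between the two arms) can be reproduced with standard open-source tooling. The sketch below uses the lifelines package; the follow-up times and event indicators are placeholders for the per-patient data, not values from the trial.

```python
# Minimal sketch of a two-arm survival comparison (placeholder data only).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# follow-up time in months and event indicator (1 = death) for each arm
t_rt,   e_rt   = rng.uniform(40, 80, 43), rng.binomial(1, 0.05, 43)
t_ccrt, e_ccrt = rng.uniform(40, 80, 41), rng.binomial(1, 0.07, 41)

km = KaplanMeierFitter()
km.fit(t_rt, event_observed=e_rt, label="IMRT alone")
print(km.survival_function_.tail())          # Kaplan-Meier estimate for one arm

result = logrank_test(t_rt, t_ccrt,
                      event_observed_A=e_rt, event_observed_B=e_ccrt)
print(result.p_value)                        # log-rank comparison of the two arms
```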
The 5 year OS, DFS, LFFS, RFFS, and DMFS for the whole cohort were 97.5, 88.7, 93.9, 96.4, and 94.9%, respectively. As shown in Figure 2, A total of 14 patients, 7 from each group, experienced treatment failure. There was no difference concerning the failure pattern between the two groups ( Table 2). Five patients suffered distant metastasis with or without local-regional failure, including 4 patients with T2N1 and the other 1 with T2N0 diseases. With regard to treatment-related adverse events, more grade 2-4 acute hematological (p = 0.02) and gastrointestinal (p = 0.02) toxicities were observed in the CCRT group than in the IMRT group. For hematological toxicity, a total of 5 patients presented ≥ G3 events in the entire study cohort. In the CCRT group, 3 grade 3 and 1 grade 4 events were observed whereas only 1 patient with grade 3 toxicity was reported in the IMRT alone group. A total of 5 patients experienced GI toxicities in the CCRT group, including 4 patients with grade 2 and 1 patient with grade 3 events. No ≥ G2 GI toxicity was observed in the IMRT group. There was no significant difference in terms of liver, renal, skin, and oral mucosa toxicities between the IMRT and CCRT groups ( Table 3). No grade 3 xerostomia was observed in either group. DISCUSSION Our study demonstrated that adding concurrent cisplatin to IMRT did not improve treatment outcomes in patients with stage II NPC but increased treatment-related acute hematological and gastrointestinal toxicities. There are limited data assessing the role of concurrent chemoradiotherapy for stage II NPC. The only published phase 3 trial (8) that compared CCRT with RT alone found that the use of concurrent chemotherapy significantly improved the 5 year OS (94.5% vs. 85.8%; p = 0.007), PFS (87.9% vs. 77.8%; p = 0.017), and DMFS (94.8% vs. 83.9%; p = 0.007). However, it should be noted that all patients in that study underwent twodimensional conventional radiotherapy, which has been proven to be inferior to IMRT. In addition, in Chen's study (8), 31 out of 236 patients (13.1%) had N2 disease (stage III) according to the 7th AJCC staging system. Therefore, the advantage of CCRT for this subgroup might have confounded the overall evaluation, leading to an overestimation of the role of concurrent chemotherapy for patients with pure stage II disease. It should also be noted that in that study, CCRT did not improve local regional control, with 5 year loco-regional relapse-free survival rates of 93.0% vs. 91.1% (p = 0.29), but did improve the distant metastasis-free survival, with the rates of 94.8% vs. 83.9%; p = 0.007), indicating that the decreased distant metastasis-free survival contributed to the improved OS. In the present study, all patients were staged according to the 7th AJCC staging system with the assistance of MRI and CT imaging; therefore, the patients' tumor burden was more homogeneous compared with that of the abovementioned study. Additionally, the patients were pre-stratified by N status, leading to a minimized influence of N stage on the treatment results. Correspondingly, the 5 year DMFS of the CCRT and RT alone groups were 95.2% vs. 94.5% (p = 0.77), which were numerically higher than those reported in the previous study. Hence, the need for CCRT to decrease distant metastasis in our study was relatively unnecessary. 
Although it has been widely confirmed by many randomized studies and meta-analyses that concurrent chemoradiotherapy could offer better treatment results than radiotherapy alone in locally advanced NPC, all of these studies showed that concurrent chemotherapy increased treatment-related toxicities, especially hematological, gastrointestinal, oral mucosal, and skin toxicities (1,4,10,11). Our study also verified that CCRT increased treatment-related toxicities, even in patients treated using IMRT. In the last two decades, IMRT has been widely used because of its advantage of dose distribution (14)(15)(16). Studies have confirmed that the advantage of dose distribution could translate into clinical benefit, in terms of either OS or treatment-related toxicities, especially for patients whose tumor was located in the center of the skull base and is surrounded by many critical organs (17). For patients with T1/T2 or stage I/II disease, Kwong et al. (9) reported the survival results of 33 patients with T1, N0-N1, and M0 NPC treated by IMRT and revealed that the 3 year LC, DMFS, and OS were all 100%. Several large sample studies from NPC epidemic regions reported 5 year OS rates of 80-85% in the IMRT era (18)(19)(20)(21)(22)(23). Stage II NPC has a relatively low tumor burden and a low risk of distant metastasis and indeed excellent LC, OS, and DMFS could be achieved using IMRT; therefore, doubts were expressed as to whether concurrent chemoradiotherapy is really needed in the era of IMRT. Fangzheng et al. (24) analyzed 242 patients with stage II disease treated by IMRT retrospectively and observed no significant differences between patients who received IMRT alone (n = 37), induction chemotherapy plus IMRT (n = 48), induction chemotherapy plus CCRT (n = 132), and CCRT (n = 25), with 5 year OS rates of 94.7, 98.7, 92.9, and 93.4%, respectively. There have been few randomized studies focusing on the role of CCRT for stage II NPC treated by IMRT. Chen et al. (25) reported a randomized study with the same design as the present study and obtained similar findings. In Chen's study, 168 patients were recruited, of whom 160 were eligible for intent-to-treat analysis, with 81 in the CCRT group and 79 in the IMRT alone group. With a median follow-up of 61.5 months, the 5 year OS rates for the CCRT and IMRT alone groups were 91.4 and 88.6%, (p = 0.562). The 5 year DMFS rates were 93.82% in the CCRT arm and 93.67% in the IMRT alone arm (p = 0.967). There were significantly higher acute systemic side effects in the CCRT arm, especially the incidence of grade 3-4 hematological and gastrointestinal events (p = 0.000). Most of the locoregional recurrence (6/8, 75.0%) and distant metastases (6/7, 85.7%) occurred in the T2N1 group. Xu et al. (26) performed a systemic review and meta-analysis focusing on the value of chemoradiotherapy (CRT) in stage II NPC compared with that of RT alone. By including both 2D-RT and IMRT techniques, patients receiving CRT or RT alone achieved an equivalent OS, LRRFS, and DMFS (p = 0.14). Considering that stage II consists of three subgroups (T1N1, T2N0, and T2N1), the prognosis and failure patterns might differ among these subgroups. In our study, a total of 5 patients experienced distant metastasis, including 4 harboring T2N1 tumor and the other 1 with T2N0 disease. Because of the relative small sample size and few events of distant metastasis in our study, it was not statistically meaningful for us to analyze the prognostic difference among the three subgroups. 
However, the T2N1 subgroup indeed accounted for the highest proportion of overall patients with distant failure. A series of publications provided retrospective evidence for this hypothesis. Leung et al. (27) (30) found that the accumulated distant metastasisfree survival rate was 81.2% for the T2N1 group, while the rates in the T1N1 and T2N0 groups were 95.6% and 97.5%, respectively, with corresponding 5 year OS rates of 73.1%, 95.6%, and 97.5%, respectively (p = 0.000). Even in Chen's (25) randomized study in which the design was similar to that of the present study, the T2N1 group demonstrated relatively worse outcomes compared with those of the other stage II subgroups, mainly because of increased failure in distant sites. The results of Chen's phase 3 study (8) confirmed that CCRT can decrease the distant metastasis rate for stage II NPC. Therefore, it would be important to distinguish patients with a higher risk of distant metastasis from general stage II patients to provide them with a more tailored treatment strategy. In recent decades, plasma-based Epstein-Barr virus DNA (EBV-DNA) evaluation has become an attractive prognostic biomarker. Leung et al. (27) observed that the probability of distant failure was significantly higher in patients with higher pretreatment plasma EBV-DNA levels (>4,000 copies/mL, p = 0.0001). Likewise, Du et al. (31) also verified that plasma EBV-DNA ≥ 4,000 copies/mL was independently associated with worse distant metastasis-free survival (DMFS) in 296 patients with stage II (AJCC 7th) NPC treated using IMRT. In conclusion, this randomized phase 2 study demonstrated that adding concurrent chemotherapy to IMRT might not be necessary for stage II NPC. Considering the relatively small sample size and the implicit heterogeneity among patients with stage II disease, a further phase 3 study is warranted to confirm this finding in selected patients with stage II NPC with a lower risk of distant metastasis. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. ETHICS STATEMENT This study was approved by the Ethics Committee of the cancer hospital, Chinese Academy of Medical Sciences, and was registered with ClinicalTrials.gov (NCT01187238). Written informed consent was obtained for all patients before enrollment to the study.
2020-08-07T13:05:53.073Z
2020-08-07T00:00:00.000
{ "year": 2020, "sha1": "55540518969d34b0d004b5f2ef8aabbec3da8419", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2020.01314/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "55540518969d34b0d004b5f2ef8aabbec3da8419", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125718377
pes2o/s2orc
v3-fos-license
Langmuir Solitons in Solar Type III Radio Bursts: STEREO Observations The source regions of solar type III radio bursts are regions of very intense Langmuir wave packets excited by the bump-on-tail distributions of energetic electrons accelerated during solar flares. We report the high time resolution observations of some of these wave packets, which provide unambiguous evidence for Langmuir solitons formed as a result of oscillating two-stream instability (OTSI), since (1) they occur as intense localized one-dimensional magnetic field aligned wave packets, (2) their measured half-widths and peak amplitudes are inversely correlated with each other, so that the narrower the wave packet is, the greater its amplitude; this inverse correlation is the characteristic feature of Langmuir solitons formed as a result of balance between the nonlinearity related self-compression and dispersion related broadening of the wave packets, (3) their FFT spectra contain peaks corresponding to sidebands and low frequency enhancements in addition to pump Langmuir waves, whose frequencies and wave numbers satisfy the resonance conditions of the four-wave interaction known as the OTSI, and (4) they are accompanied by their ponderomotive force induced density cavities. The implication of these observations for theories of solar radio bursts is discussed. Introduction The purpose of this paper is to report the detection of Langmuir solitons by the time domain sampler (TDS) of the STEREO WAVES experiment (Bougeret et al. 2008) in the source regions of solar type III radio bursts. Langmuir waves excited by the bump-on-tail distributions of electron beams accelerated during solar flares are known to be responsible for these bursts (Ginzburg & Zheleznyakov 1958;Gurnett & Anderson 1977;Lin et al. 1981Lin et al. , 1986. When the intensities of these one-dimensional Langmuir wave packets exceed certain thresholds, theory predicts that they form quasi-stationary selfcompressed intense localized wave packets called Langmuir solitons as a result of balance between the nonlinear effect of self-compression and the oppositely acting dispersion induced broadening (Vedenov & Rudakov 1964;Rudakov 1972;Zakharov 1972;Rudakov & Tsytovich 1978). Langmuir solitons are argued to play a very important role in the disruption of the resonance between the Langmuir waves and the electron beam, so that the bump-on-tail distribution of energetic electrons can survive over large distances against the quasi-linear relaxation (Papadopoulos et al. 1974;Galeev et al. 1977;Gorev et al. 1977;Goldman 1983). Because of their large peak intensities, Langmuir solitons are also predicted to be very efficient emitters of electromagnetic waves at the fundamental as well as at higher harmonics of the electron plasma frequency, f pe (Galeev & Krasnoselskikh 1976;Galeev & Krasnoselskikh 1978;Papadopoulos & Freund 1978;Goldman et al. 1980). From early on, the in situ wave measurements have shown that Langmuir waves occur as bursty field structures in space environments (Gurnett & Anderson 1977;Kellogg et al. 1992). A comprehensive review of Langmuir wave observations across the heliosphere can be found in Briand (2015). The high time resolution in situ wave observations obtained by the ULYSSES and GALILEO spacecraft in the source regions of solar type III radio bursts have provided evidence for several nonlinear processes (Gurnett et al. 1993;Thejappa et al. 1993Thejappa et al. , 1995Thejappa et al. , 1999Thejappa et al. 
, 2003Hospodarsky & Gurnett 1995;Thejappa & MacDowall 1998;Nulsen et al. 2007). The much superior high time resolution in situ wave data from the TDS of the STEREO WAVES experiment (Bougeret et al. 2008) have given us a unique opportunity to identify the signatures of several interesting weak as well as strong turbulence processes in type III bursts Kellogg et al. 2009;Malaspina & Ergun 2008;Henri et al. 2009;Malaspina et al. 2010Malaspina et al. , 2011Graham et al. 2012aGraham et al. , 2012bGraham et al. , 2014aGraham et al. , 2014bThejappa et al. 2012aThejappa et al. , 2012bThejappa et al. , 2012cThejappa et al. , 2013aThejappa et al. , 2013b. It was reported that one of these type III burst associated Langmuir wave packets provides evidence for the four-wave interaction called the oscillating two-stream instability (OTSI; Thejappa et al. 2012c). The trispectral analysis techniques enabled Thejappa et al. (2012a) to show that the spectral components of this wave packet are coupled to each other with a high degree of phase coherency as expected of OTSI. Graham et al. (2012a) argued that the OTSI may not be a viable process for this event because of its three-dimensional nature. However, Thejappa et al. (2013a) have confirmed the findings of Thejappa et al. (2012c) by showing that the parallel as well as perpendicular components of this wave packet contain the spectral signatures of OTSI. Thejappa et al. (2012b) have reported the evidence for OTSI in the high time resolution observations of Langmuir wave packets associated with three different type III events, and have argued that the OTSI probably is a commonly occurring phenomenon in type III bursts. The searches for Langmuir solitons have also been conducted in the in situ high time resolution wave data obtained in solar type III bursts (Kellogg et al. 1992;Thejappa et al. 1999) as well as in Earth's foreshock regions (Kellogg et al. 1999). In one of these studies, Thejappa et al. (1999) reported the evidence for Langmuir envelope solitons in the data obtained by the Ulysses URAP Fast Envelope Sampler (Stone et al. 1992) in the source regions of type III radio bursts. However, in that study, the widths of the wave packets were estimated by assuming that the solar wind velocity and magnetic field vectors were parallel to each other, since the data were not available. In this paper, we report the results of our recent search for Langmuir solitons in the high time resolution in situ wave data obtained by the TDS of the STEREO WAVES experiment in the source regions of 10 solar type III radio bursts. This has resulted in the identification of several localized magnetic field aligned one-dimensional strongly turbulent wave packets, which are unique in the sense that (1) their measured halfwidths agree very well with the expected half-widths of Langmuir solitons, (2) they are accompanied by the density cavities created probably by their ponderomotive forces, and (3) their spectra contain signatures of the four-wave interaction called the OTSI. We present the observations of these wave packets and argue that most probably they correspond to Langmuir solitons formed as a result of OTSI. In Section 2, we briefly describe the form of Langmuir solitons, and in Sections 3 and 4, we present the observations, and the discussion and conclusions, respectively. 
Forms of Langmuir Solitons

In the subsonic limit, the electric field E(Z, t) and the corresponding density perturbation δn_e/n_e of the one-dimensional Langmuir soliton can be described by Equations (1) and (2) (Rudakov & Tsytovich 1978; Nezlin 1993), where Z is the longitudinal coordinate, t is the time, and k_0 and u, respectively, are the wave number and the velocity of the soliton. The frequency and wave number of the oscillations of the soliton are defined by Equations (3) and (4) (Nezlin 1993). Here, $W_L/(n_e T_e) = \epsilon_0 E_t^2/(4 n_e T_e)$ is the normalized peak energy density (E_t is the peak amplitude of the wave packet, ε_0 is the dielectric constant, and n_e and T_e are the electron density and temperature, respectively), λ_De is the Debye length, k_L is the wave number of the Langmuir wave, and $\omega_{pe} = 2\pi f_{pe}$ is the electron plasma frequency. The velocity of the soliton u is usually assumed to be zero (Kellogg et al. 1999). The half-width L_E is related to the peak amplitude E_t of the Langmuir soliton through Equation (5). This relationship is obtained from Equation (1) by writing the ratio of the half-power amplitude to the peak amplitude of the soliton as in Equation (6). This implies that any Langmuir wave packet of peak amplitude E_t can be identified as a Langmuir soliton if its measured half-width L_1/2 is comparable to the expected half-width L_E of the Langmuir soliton of peak intensity E_t as given in Equation (5).

Observations

The observations consist mainly of the high time resolution waveforms of Langmuir waves captured by the TDS of the STEREO WAVES experiment (Bougeret et al. 2008) in the source regions of type III solar radio bursts. The high time resolution voltage differences V_X, V_Y, and V_Z returned by the TDS are usually converted into the wave electric field components E_X, E_Y, and E_Z in the spacecraft frame using the transformation matrix given by Bale et al. (2008). In the following, we describe the high time resolution observations of Langmuir waves associated with one of the local type III bursts. In Figure 1, we show the dynamic spectrum of a typical solar type III radio burst and its associated in situ wave activity, obtained by the high- and low-frequency receivers of the STEREO A WAVES experiment. Here, the fast drifting emission band from ≈16 MHz to ≈26 kHz corresponds to the type III radio burst, and the nondrifting bursty emissions in the 19-22 kHz range correspond to the Langmuir waves. The TDS has resolved these Langmuir wave bursts into 43 intense waveforms. Each of these waveforms contains 16,384 samples, acquired at a rate of 250,000 samples per second (a time step of 4 μs for a total duration of 65 ms). After examining all these wave packets, we have identified one of them as a probable Langmuir soliton, since (1) it is a localized, one-dimensional, magnetic field aligned wave packet with a single peak, (2) its peak intensity E_t satisfies the threshold conditions for OTSI and related strong turbulence processes, (3) its spectrum contains the signatures of OTSI, and (4) it is accompanied by a ponderomotive force induced density cavity. In the following, we describe the characteristics of this wave packet and show that its measured half-width L_1/2 is approximately equal to the expected half-width L_E of a Langmuir soliton of peak amplitude E_t.

Physical Characteristics

In Figure 2, we present the waveforms of the E_X, E_Y, and E_Z components of this unique TDS event. The peak amplitudes (in spacecraft coordinates) of these components are 27.8 mV m−1, 40.4 mV m−1, and 7.5 mV m−1, respectively.
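The quantities entering the soliton description above, the normalized peak energy density W_L/(n_e T_e) and the Debye length λ_De, follow directly from the measured peak amplitude and the local plasma frequency. The sketch below shows the arithmetic in SI units; the helper names are hypothetical, and the illustrative numbers correspond to the event discussed in the following subsections.

```python
import numpy as np

# physical constants (SI)
EPS0 = 8.854e-12        # F/m
KB   = 1.381e-23        # J/K
ME   = 9.109e-31        # kg
QE   = 1.602e-19        # C

def plasma_parameters(f_pe_hz, Te_K):
    """Electron density and Debye length implied by a Langmuir peak at f_pe."""
    w_pe = 2.0 * np.pi * f_pe_hz
    n_e = EPS0 * ME * w_pe**2 / QE**2            # from w_pe^2 = n_e e^2/(eps0 m_e)
    lambda_De = np.sqrt(EPS0 * KB * Te_K / (n_e * QE**2))
    return n_e, lambda_De

def normalized_energy_density(E_t, n_e, Te_K):
    """W_L/(n_e T_e) = eps0 E_t^2 / (4 n_e k_B T_e), i.e. the time-averaged
    field energy density normalized to the thermal energy density."""
    return EPS0 * E_t**2 / (4.0 * n_e * KB * Te_K)

n_e, lDe = plasma_parameters(20e3, 1e5)          # f_pe ~ 20 kHz, T_e ~ 1e5 K
W = normalized_energy_density(49e-3, n_e, 1e5)   # E_t ~ 49 mV/m
print(n_e, lDe, W)                               # ~5e6 m^-3, ~9.8 m, ~7.7e-4
```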
These E_X, E_Y, and E_Z components are subsequently transformed from the spacecraft frame into the more useful magnetic field (B) aligned coordinate system, whose X-, Y-, and Z-axes are assumed to be aligned along b, b × v, and (b × v) × b, respectively. The unit vectors of the solar wind velocity v and the magnetic field b are provided by the STEREO PLASTIC (Galvin et al. 2008) and the STEREO IMPACT magnetic field (Acuna et al. 2008) experiments, respectively. In this study, we use b = (−0.60747, −0.79512, 0.005553) and v = (0.9917, −0.12865, 0.0060756) as given in aten.igpp.ucla.edu/forms/stereo/. In Figure 3, we present the field components of this wave packet in the B-aligned coordinate system, where the top panel shows the parallel component E_P and the middle and bottom panels show the perpendicular components E_⊥1 and E_⊥2, respectively. The peak amplitudes of these field components, E_P ≈ 48.6 mV m−1, E_⊥1 ≈ 6.7 mV m−1, and E_⊥2 ≈ 8.9 mV m−1, show that the inequalities E_P ≫ E_⊥1 and E_P ≫ E_⊥2 are easily satisfied for this wave packet. This suggests that this wave packet is mostly a one-dimensional magnetic field aligned wave packet.

Here, we note that Langmuir wave packets captured by the TDS in the source regions of type III bursts are mostly one-dimensional in nature, with E_P ≫ E_⊥1 and E_P ≫ E_⊥2. This is consistent with the spectrum of waves excited by a one-dimensional electron beam propagating along the open solar wind magnetic field lines. As discussed by Smith et al. (1979), the growth rate of the beam-plasma instability γ_b can be written as in Equation (7), where n_b is the beam density, v_b is the beam speed, Δv_b is the velocity spread in the beam, and Ψ is the angle between the wave vector k_L and the magnetic field B. From this expression, it is clear that γ_b is maximum when Ψ ∼ 0 and rapidly decreases to zero for Ψ ∼ π/2, i.e., field aligned waves grow much faster than off-angle waves. Smith et al. (1979) have shown that the beam-plasma instability is confined to a narrow cone with an opening angle of less than 7° about the direction of the magnetic field, i.e., one can consider the spectrum to be one-dimensional. Here, we note that although the wave packet presented in Figures 2 and 3 is strictly not a one-dimensional wave packet, given the constraints of single-spacecraft observations we assume that the inequalities E_P ≫ E_⊥1 and E_P ≫ E_⊥2 best reflect the one-dimensional nature of the wave packet. However, there exist some exceptions, where the wave packets do occur as two- and three-dimensional field structures. For example, the wave packet presented in Thejappa et al. (2012c) is a three-dimensional wave packet with E_P ∼ E_⊥1 and E_P ∼ E_⊥2.

Figure 1. Dynamic spectrum of a local type III radio burst (fast drifting emission from ≈16 MHz down to ≈26 kHz) and associated Langmuir waves (nondrifting emissions in the frequency interval 19-22 kHz).

As seen from Figure 4, the time profile of the total electric field intensity E² clearly shows that this is an intense localized wave packet with peak amplitude E_t ≈ 49 mV m−1. This yields the normalized peak energy density W_L/(n_e T_e) ≈ 7.7 × 10−4. Here, for the electron temperature (T_e) we have assigned a typical value of 10^5 K, since measurements of T_e are not available, and by assuming that the intense peak (L) in the spectrum of the parallel component E_P corresponds to Langmuir waves excited at the local electron plasma frequency, f_pe ≈ 20 kHz (Figure 5(a)), we obtain the Debye length λ_De ≈ 9.8 m.
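The rotation into the magnetic-field-aligned system described above amounts to projecting each field sample onto an orthonormal triad built from the quoted unit vectors b and v. The sketch below is a minimal illustration; the exact axis ordering and sign conventions are assumptions, and applying it to the quoted peak values is for illustration only (the peaks of the three components do not occur at the same sample).

```python
import numpy as np

b = np.array([-0.60747, -0.79512, 0.005553])   # unit magnetic field vector
v = np.array([0.9917, -0.12865, 0.0060756])    # unit solar wind velocity vector

def field_aligned_triad(b, v):
    """Orthonormal triad {e_par, e_perp1, e_perp2} with e_par along b and
    e_perp1 along b x v; e_perp2 completes the set."""
    e_par = b / np.linalg.norm(b)
    e_p1 = np.cross(e_par, v)
    e_p1 /= np.linalg.norm(e_p1)
    e_p2 = np.cross(e_par, e_p1)               # perpendicular to both
    return e_par, e_p1, e_p2

def to_field_aligned(E_xyz, b, v):
    """Rotate a spacecraft-frame sample (or an (N, 3) array of samples) into
    the field-aligned system, returning (E_par, E_perp1, E_perp2)."""
    e_par, e_p1, e_p2 = field_aligned_triad(b, v)
    R = np.vstack([e_par, e_p1, e_p2])
    return np.atleast_2d(E_xyz) @ R.T

E_sc = np.array([27.8, 40.4, 7.5])             # mV/m, illustrative only
print(to_field_aligned(E_sc, b, v))
```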
The modified dispersion relation of an intense Langmuir wave can be written as $\omega_L^2 = \omega_{pe}^2\,\bigl(1 + 3 k_L^2 \lambda_{De}^2 - W_L/(n_e T_e)\bigr)$ (Equation (8)), where k_L is the wave number of the Langmuir waves. From Equation (8), the threshold condition for OTSI can be written as $W_L/(n_e T_e) > 3 k_L^2 \lambda_{De}^2$ (Equation (9); Zakharov 1972). If the Langmuir waves correspond to beam-excited Langmuir waves, we can estimate k_L by using the speed of the electron beam derived from the frequency drift df/dt of the type III event. If we assume that the electron density of the solar wind n_e (m−3) is given by the Radio Astronomy Explorer (RAE) density model (Fainberg & Stone 1971), $n_e(r) = n_0\,r^{-a}$ (Equation (10)), where n_0 = 5.52 × 10^13, a = 2.63, and r is the solar altitude (in units of the solar radius), and that the type III burst is excited at the second harmonic of the electron plasma frequency f_pe by an electron beam traveling along the spiral magnetic field lines with a velocity β (in units of the velocity of light c), we can express the frequency drift of the type III burst in terms of the velocity of the corresponding electron beam as in Equation (11) (Papagiannis 1970), where c is the velocity of light, φ is the angle between the exciter direction and the Sun-spacecraft line, and f is the midpoint of the frequency interval df. Using the frequency drift of the type III burst presented in Figure 1, we have estimated the beam speed v_b as ∼0.37c, where it is assumed that the path length traveled by the electron beam is increased by a factor of α = 1.7 (Lin et al. 1973; Alvarez et al. 1975; Fokker 1984).

Figure 2. Waveforms of the E_X, E_Y, and E_Z electric field components of one of the unique Langmuir wave packets associated with the type III event of Figure 1, in the spacecraft frame of reference.

Here, we note that if the Langmuir waves correspond to a condensate formed as a result of induced scattering or electrostatic decay (ESD) of beam-excited Langmuir waves into daughter Langmuir and ion sound waves, the threshold for OTSI and soliton formation can be written as in Equation (12) (Zakharov 1972), where m_e and m_i are the electron and ion masses, respectively. This condition is also easily satisfied for the current event, since W_L/(n_e T_e) ≈ 7.7 × 10−4.

Spectral Characteristics

To examine the spectral characteristics of this wave packet, we have computed the FFT spectrum of its parallel component E_P. The logarithmic spectrum in a narrow frequency interval of 19-21 kHz, presented in Figure 5(a), clearly shows an intense peak (L) corresponding to the beam-excited Langmuir waves at f_pe ≈ 20 kHz, and two sidebands, corresponding to the spectral peaks (D) and (U) at ≈19.7 kHz and ≈20.3 kHz, respectively. The linear spectrum from 0 to 0.8 kHz presented in Figure 5(b) shows the low frequency wave activity below ≈500 Hz with a peak around 60 Hz, probably corresponding to ion sound waves. The spectral peaks corresponding to sidebands with an accompanying low frequency enhancement are the expected spectral signatures of OTSI, which arises as a result of the coupling of two Langmuir waves with frequencies and wave numbers (f_L, k_L) with the up- and down-shifted sidebands with (f_U, k_U) and (f_D, k_D) via a purely growing ion sound mode with (f_S, k_S). We can identify the observed spectral peaks with the modes involved in OTSI if they satisfy the frequency matching condition $2 f_L = f_D + f_U$ (Equation (13)), the wave number matching condition $k_{U,D} = k_L \pm k_S$ (Equation (14)), and the phase coherence condition (Equation (15)), where the subscripts L, D, and U correspond to the beam-excited Langmuir wave and the down- and up-shifted sidebands, respectively.
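The threshold check described above reduces to comparing the normalized energy density with the dispersive term once k_L is fixed by the beam speed. The sketch below assumes the threshold in the form quoted above (factor 3, damping neglected) and beam-resonant waves with k_L = ω_pe/v_b; the function name and numbers are illustrative.

```python
import numpy as np

C_LIGHT = 2.998e8   # m/s

def otsi_threshold_check(W_norm, f_pe_hz, v_beam, lambda_De):
    """Compare W_L/(n_e T_e) with 3*(k_L*lambda_De)^2, taking
    k_L = omega_pe/v_beam for beam-resonant Langmuir waves.  Damping
    corrections are neglected in this sketch."""
    k_L = 2.0 * np.pi * f_pe_hz / v_beam
    threshold = 3.0 * (k_L * lambda_De)**2
    return k_L, threshold, W_norm > threshold

k_L, thr, ok = otsi_threshold_check(W_norm=7.7e-4, f_pe_hz=20e3,
                                    v_beam=0.37 * C_LIGHT, lambda_De=9.8)
print(k_L, thr, ok)    # ~1.1e-3 rad/m, ~3.7e-4, True
```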
The frequency matching condition $2 f_L = f_D + f_U$ is easily satisfied, since the frequency shifts Δf of the down- and up-shifted sidebands are symmetric with respect to the Langmuir wave pump, being ≈300 Hz and ≈300 Hz, respectively. Here, the frequency differences $\Delta f = f_{U,D} - f_L \approx \pm 300$ Hz correspond to the frequency f_S of the low frequency mode. This implies that the matching condition $k_{U,D} = k_L \pm k_S$ is also satisfied. Here, we note that in one of our earlier studies (Thejappa et al. 2012b), using trispectral analysis techniques, we have shown that the phase coherence condition (Equation (15)) is also well satisfied for this event. Thus, the frequency, wave number, and phase coherence matching conditions of OTSI are easily satisfied for this event, which implies that the spectral peaks seen in the FFT spectrum of the wave packet (Figure 5(a)) probably correspond to the beam-excited Langmuir wave and the daughter products of OTSI.

As far as the linear spectrum presented in Figure 5(b) is concerned, it shows four peaks at ∼60 Hz, ∼137 Hz, ∼200 Hz, and ∼300 Hz with powers of ∼1.5 × 10−2 (mV m−1)2 Hz−1, ∼3.5 × 10−3 (mV m−1)2 Hz−1, ∼1.0 × 10−3 (mV m−1)2 Hz−1, and ∼1.8 × 10−3 (mV m−1)2 Hz−1, respectively. These values are much higher than the respective base level powers, which are ∼1.7 × 10−3 (mV m−1)2 Hz−1 at 60 Hz and ∼7.5 × 10−4 (mV m−1)2 Hz−1 in the frequency range from 100 to 300 Hz. This suggests that the spectral enhancements seen below 500 Hz in Figure 5(b) are real. Although the waves corresponding to the spectral peak at 300 Hz satisfy the resonance conditions of OTSI, given the observational constraints in the estimation of the frequencies we cannot rule out the involvement in OTSI of the ion sound waves corresponding to the other spectral peaks, especially the intense peak at ∼60 Hz. Here, we note that the spectral enhancements corresponding to the sidebands in Figure 5(a) are not as sharp as those presented in Thejappa et al. (2012c). This may be due to different conditions in the source regions of these type III bursts.

Figure 5. (a) The narrow spectrum of the parallel component E_P of the wave packet around f ≈ f_pe ≈ 20 kHz, where L probably corresponds to the beam-excited Langmuir wave at ≈20.0 kHz, and D and U correspond to the down-shifted sideband at ≈19.7 kHz and the up-shifted sideband at ≈20.3 kHz, respectively. (b) The low frequency spectrum below 800 Hz, where the enhancement below 500 Hz with a peak at ∼60 Hz probably corresponds to ion sound waves.

However, the spectra of some of the wave packets identified as probable Langmuir solitons in the present study exhibit relatively sharper spectral peaks. In Figure 6, we present one such spectrum. This spectrum is that of the Langmuir wave packet associated with the local type III event of 2010 September 12. In Figure 6(a), we present the logarithmic spectrum in a narrow frequency interval of 23-27 kHz. This narrow spectrum clearly shows an intense peak (L) corresponding to the beam-excited Langmuir waves at f_pe ≈ 25.2 kHz, and two spectral peaks (D) and (U) at ≈24.9 and ≈25.4 kHz, corresponding to the sidebands, respectively. In Figure 6(b), we present the linear spectrum of this event from 0 to 0.5 kHz. This linear spectrum clearly shows an enhancement in low frequency wave activity below ≈500 Hz. Notable in this linear spectrum are the peak around 200 Hz and a general enhancement below ∼140 Hz with peaks at ∼100 Hz and ∼40 Hz. We note that these low frequency spectral peaks are real, since the observed powers are much higher than the base level power. For this event too, the wave number matching condition $k_{U,D} = k_L \pm k_S$ is also satisfied.
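The frequency-matching test described above is a one-line check once the peak frequencies have been read from the FFT spectra. The sketch below simply reports the residual of the matching condition for the values quoted for the two events; the tolerance to apply is set by the spectral resolution and the precision to which the peaks are quoted.

```python
def otsi_frequency_mismatch(f_L, f_D, f_U):
    """Residual of the OTSI matching condition 2*f_L = f_D + f_U (Hz)."""
    return 2.0 * f_L - (f_D + f_U)

print(otsi_frequency_mismatch(20.0e3, 19.7e3, 20.3e3))   # 0 Hz (Figure 5 event)
print(otsi_frequency_mismatch(25.2e3, 24.9e3, 25.4e3))   # ~100 Hz, within the ~0.1 kHz read-off precision
```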
Here, we have used the beam speed v_b ∼ 2.2c obtained from the estimated frequency drift of the type III burst with the help of Equation (11). Here we note that, in addition to the frequency and wave number matching conditions (Equations (13)-(15)), the amplitudes of the sidebands D and U and of the pump Langmuir wave L should probably also satisfy certain conditions in the case of OTSI. These conditions may be obtained by solving the appropriate wave kinetic equations, which is beyond the scope of the present study. Here we also note that (1) the amplitude of the anti-Stokes mode (U) in Figure 5(a) is lower by 2 orders of magnitude in comparison with that of the Stokes mode (D), which in turn is less than the amplitude of the Langmuir wave pump by 2 orders of magnitude, and (2) the D and U peaks in Figure 6(a) are much weaker than the central peak L.

Figure 6. (a) The narrow spectrum of the parallel component E_P of the wave packet around f ≈ f_pe ≈ 25.2 kHz, where L, D, and U correspond to the beam-excited Langmuir wave at ≈25.2 kHz, the down-shifted sideband at ≈24.9 kHz, and the up-shifted sideband at ≈25.4 kHz, respectively. (b) The low frequency spectrum below 500 Hz, where the enhancement with peaks at ∼200 Hz, ∼100 Hz, and ∼40 Hz probably corresponds to ion sound waves.

Ponderomotive Force Induced Density Cavities

In the following, we examine whether the observed wave packet is associated with any ponderomotive force induced density cavity, as expected of a Langmuir soliton (Equation (2)). In the present case, since the inequality $W_L/(n_e T_e) > 3 k_L^2 \lambda_{De}^2$ is easily satisfied, one can see from Equation (8) that ω_L < ω_pe, and, therefore, the Langmuir wave packet is trapped inside the self-generated density cavity. This is probably the density cavity created by the ponderomotive force of the wave packet. As seen from Figure 7, the observed e−1-power width of this density cavity, ∼64λ_De, is less than that of the wave packet, ∼142λ_De. Here, we have used Equation (17) to convert the e−1-power temporal widths of the wave packet and of the corresponding density cavity into the respective spatial scales. Given the experimental constraints involved in the measurements of density fluctuations, this is a reasonably good agreement. Here, we note that even in laboratory experiments (Antipov et al. 1978) the solitons are usually broader than the corresponding density cavities, and each of these solitons is associated with more than one density cavity. Thus the spectral evidence for OTSI, the inequality ω_L < ω_pe, and the accompanying density cavity are all consistent with a Langmuir soliton trapped inside its self-generated density depression. In Figure 8(b) we show the spectrum of the density fluctuation δn_e/n_e only from 100 to 500 Hz. As seen in Figure 8, the spectral peaks of the E_P of the wave packet in this frequency range agree very well with those of δn_e/n_e. This suggests that the density fluctuations corresponding to the spectral peaks in Figure 8(b) probably correspond to OTSI excited ion sound waves. As far as the spectral peak at ∼60 Hz in Figure 5(b) is concerned, it does not correspond to any density fluctuations presented in Figure 7. This is because Equation (16) limits the estimation of δn_e/n_e to the 100-2000 Hz frequency range. However, this does not rule out the link between the waves corresponding to the spectral peak at ∼60 Hz (Figures 5(b) and 8(a)) and density fluctuations.

Measured and Predicted Half-widths

As seen from Figure 4, the half-power duration τ_0.5 of the wave packet is ≈8.46 ms.
If we assume that the observed wave packet is stationary in the solar wind, this half-power duration τ_0.5 measured in the spacecraft frame can be converted into the half-width L_1/2 of the wave packet in the solar wind frame using the relation $L_{1/2} = v_{sw}\,\tau_{0.5}\,\cos\theta$ (Equation (17)), where v_sw is the solar wind speed and θ is the angle between the solar wind velocity and the electric field vector. In the present case, the observed values obtained by the STEREO PLASTIC (Galvin et al. 2008) and the STEREO IMPACT magnetic field (Acuna et al. 2008) experiments are v_sw ≈ 525.7 km s−1 and θ = 120°, where θ is the angle between the solar wind velocity and the magnetic field vectors. Since the observed wave packet is a one-dimensional magnetic field aligned wave packet, this angle can be taken as the angle between the solar wind velocity and the electric field. Thus, we obtain $L_{1/2} = v_{sw}\,\tau_{0.5}\,\cos\theta \approx 227\,\lambda_{De}$ for τ_0.5 ≈ 8.46 ms, v_sw ≈ 526 km s−1, θ ≈ 120°, and λ_De ≈ 9.8 m. As far as the expected half-width is concerned, Equation (5) with the measured peak amplitude E_t yields L_E ≈ 227λ_De. Thus the measured half-width L_1/2 ≈ 227λ_De is equal to the expected half-width L_E ≈ 227λ_De of the Langmuir soliton (Equation (5)). This suggests that the observed wave packet probably is a Langmuir soliton formed as a result of the balance between the compression due to nonlinearity and the broadening due to dispersion. The accompanying density cavity with $\delta n_e/n_e \approx W_L/(n_e T_e)$ further confirms that this wave packet probably is a Langmuir soliton trapped inside the self-generated density depression. Moreover, the signatures of OTSI in the FFT spectrum of this wave packet strongly suggest that this soliton is probably formed as a result of OTSI.

Larger Data Set

The TDS has captured 412 Langmuir wave packets during 10 local type III bursts. In Figure 9, we present the histogram of the peak intensities of these wave packets. This histogram shows a skewed distribution. A distribution is usually called skewed right if, as in this histogram, the right tail (larger values) is much longer than the left tail (smaller values). After a detailed examination of all these wave packets, we have selected 17 of them as probable candidates for Langmuir solitons. For a wave packet to qualify as a possible Langmuir soliton, it should satisfy the following conditions: (1) it should be a localized wave packet with a single prominent peak, (2) it should be a one-dimensional magnetic field aligned wave packet with E_P ≫ E_⊥1 and E_P ≫ E_⊥2, (3) it should satisfy the threshold condition for OTSI and the associated strong turbulence processes, (4) its spectrum should contain the signatures of OTSI, namely, the spectral peaks corresponding to the sidebands and the low frequency ion sound waves in addition to the pump Langmuir wave, which satisfy the relevant resonance conditions, and (5) it should be accompanied by a density cavity with $\delta n_e/n_e \approx W_L/(n_e T_e)$. In Figure 9, we have overlaid the distribution of the identified Langmuir solitons on the histogram of the wave packets observed during the 10 type III events. The blue shows the distribution of these solitons. This comparative histogram clearly shows the subpopulation of solitons in the data. This subpopulation is in the tail of the histogram, i.e., the solitons correspond mostly to the very intense wave packets of the distribution. Thus only a small fraction of the observed wave packets can be identified as Langmuir solitons. The majority of TDS events (red in the histogram) are relatively weaker wave packets with a variety of shapes and structures.
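The conversion of the measured half-power duration into a spatial half-width, Equation (17) above, is a single multiplication once the solar wind speed and the flow-field angle are known. The sketch below reproduces the arithmetic for the quoted values; the helper name is hypothetical, and the absolute value of the cosine is taken because the quoted angle exceeds 90 degrees.

```python
import numpy as np

def half_width_debye(tau_half_s, v_sw_ms, theta_deg, lambda_De_m):
    """Half-width of a wave packet convected past the spacecraft,
    L_1/2 = v_sw * tau_0.5 * |cos(theta)|, expressed in Debye lengths."""
    L = v_sw_ms * tau_half_s * abs(np.cos(np.deg2rad(theta_deg)))
    return L / lambda_De_m

print(half_width_debye(8.46e-3, 525.7e3, 120.0, 9.8))   # ~227 Debye lengths
```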
As seen from this histogram, there are several events with peak intensities equal to or greater than those of the identified solitons. However, these events are not selected because of their complex shapes, structures, and spectra. In Table 1, we present the observed characteristics of the identified Langmuir solitons. As seen from this table, only the event observed on 2013:06:21 at 03:54:35.8411 does not show spectral signatures of OTSI, although it satisfies all other conditions. This may be because the intensities of the sidebands and ion sound waves are below the background levels. For these wave packets, we have estimated the peak normalized energy densities W_L/(n_e T_e), the expected half-widths L_E, and the half-power widths L_1/2 using their measured peak intensities E_t, electron densities n_e derived using the fact that the most intense peaks in their FFT spectra correspond to Langmuir waves, half-power durations τ_0.5, and the respective solar wind velocities v_sw and angles θ between the solar wind and the magnetic field vectors. For the electron temperature T_e, we have assigned a typical value of ∼10^5 K for all the events. In Figure 10, we plot the expected widths L_E estimated by assuming that the observed wave packets are one-dimensional Langmuir solitons (where the peak normalized energy densities W_L/(n_e T_e) of the wave packets are used) versus the half-widths L_1/2 estimated directly from the corresponding measured half-power durations τ_0.5, solar wind speeds v_sw, and the angles θ between the solar wind velocity and magnetic field vectors. As seen from this plot, the agreement between L_E and L_1/2 is excellent, with a correlation coefficient of 0.98. This indicates that L_1/2 ≈ L_E, which is equivalent to the inverse correlation between W_L/(n_e T_e) and L_1/2. This suggests that the larger the amplitude is, the narrower the wave packet, as expected of Langmuir solitons. Here we note that in Figure 10 we have used only 17 events because only these events enabled us to unambiguously estimate L_1/2 from the measured τ_0.5. Because of the complicated shapes, we could not estimate τ_0.5 for the rest of the events. Therefore, we could not include them in this plot. As far as the origin of these wave packets is concerned, solitary as well as nonsolitary wave packets probably correspond to beam-excited waves. For a variety of reasons, such as the beam velocity and density, and other conditions in the ambient plasma, these wave packets probably show different characteristics. However, based on these observations, we cannot exactly pinpoint the origin of these different kinds of wave packets. Since we have used a value T_e = 10^5 K for all 17 events due to the lack of electron temperature measurements, it would have introduced some error in Figure 10. Therefore, in Figure 11, we plot E_t^-1 versus L_1/2, where L_1/2 and E_t are in units of m and V m^-1, respectively. Figure 11 also shows that E_t and L_1/2 are inversely correlated, as expected of Langmuir solitons, with a correlation coefficient of ≈0.91. This confirms the inverse correlation seen in Figure 10. Thus, the characteristics of the observed wave packets agree very well with the expected characteristics of Langmuir solitons. Discussion and Conclusions We have presented the results of our recent search for Langmuir solitons in the high time resolution in situ wave data obtained by the TDS of the STEREO WAVES experiment in the source regions of 10 local solar type III radio bursts.
In these data, we identified 17 unique intense localized one-dimensional magnetic field aligned Langmuir wave packets as probable candidates for the Langmuir solitons. The theories of solar type III bursts predict that the bump-on-tail distributions of electron beams propagating along the coronal and interplanetary magnetic fields excite one-dimensional magnetic field aligned Langmuir wave packets. The waveforms identified in this study represent such wave packets. Our analysis has revealed that the peak intensities of these wave packets easily satisfy the threshold condition for excitation of OTSI and related strong turbulence processes. For verification of the threshold condition, we have used the wave numbers k_L derived from the beam speeds obtained from the negative frequency drifts of the type III bursts. As expected from such strongly turbulent wave packets, with the help of spectral analysis, we have found that 16 out of 17 wave packets exhibit the characteristic signatures of OTSI, namely, a resonant peak at f_pe, Stokes and anti-Stokes peaks at f_pe - f_S and f_pe + f_S, and a low-frequency enhancement below f_S, where f_pe and f_S are the electron plasma and ion sound frequencies, respectively. We have shown that these spectral components easily satisfy the frequency and wave number resonance conditions of the OTSI type of four-wave interaction. It is interesting to note that none of the wave packets contain spectral signatures of the ESD, i.e., the decay of the beam-excited Langmuir wave into a daughter Langmuir wave and an ion sound wave. This suggests that the Langmuir wave packets presented in this study probably correspond to the waves excited directly by the electron beam, i.e., they may not correspond to the condensate. Finally, we have shown that these localized one-dimensional wave packets are unique in the sense that they show high degrees of agreement between their measured half-widths (L_1/2), obtained from the observed temporal widths (τ_0.5), the solar wind velocity v_sw, and the angle θ between the solar wind and magnetic field vectors, and the expected widths (L_E) of Langmuir solitons of peak amplitudes equal to those of the wave packets. This agreement is a clear indication that the observed wave packets correspond to Langmuir solitons, in which the spreading of the wave packets due to dispersion is balanced by the self-focusing due to nonlinearity. This also suggests that the more localized the wave packet, or the larger the spread in wave vector space, the greater the nonlinearity must be and hence the peak intensity. There are two kinds of solitons: (1) envelope solitons with L_1/2 > λ and W_L/(n_e T_e) < (k_L λ_De)^2, and (2) Langmuir solitons with L_1/2 < λ and W_L/(n_e T_e) > (k_L λ_De)^2. In the case of an envelope soliton, since ω_L > ω_pe, it is not trapped inside the density cavity. On the other hand, the Langmuir soliton is trapped inside the self-generated density cavity, since ω_L < ω_pe. Therefore, the observed characteristics L_1/2 < λ, W_L/(n_e T_e) > (k_L λ_De)^2, and L_1/2 ≈ L_E indicate that the observed wave packets correspond to Langmuir solitons formed as a result of OTSI. The width of a stable Langmuir soliton should be less than the wavelength of the Langmuir wave (λ). Here, we should mention that the formation of solitons is the initial state of strong Langmuir turbulence. Although in the one-dimensional approximation these solitons are expected to be stable, they are known to be unstable against transverse perturbations, especially in the case of weak magnetic fields with f_ce ≪ f_pe (Rypdal & Juul Rasmussen 1989; Hadzievski et al.
1990; Newman et al. 1994; f_ce is the electron cyclotron frequency). The observed inequalities E_P ≫ E_⊥1 and E_P ≫ E_⊥2 probably indicate that these wave packets are in the initial state of this soliton instability, i.e., these field structures, whose half-widths range from ∼93λ_De to ∼465λ_De, are still very close to the stable state of one-dimensional solitons. As a result of the soliton instability, these wave packets will eventually undergo rapid spatial collapse to very localized intense three-dimensional wave packets. The three-dimensional wave packet presented in our previous study (Thejappa et al. 2012c) is probably one such wave packet. It has been shown theoretically (Robinson 1991; Melatos & Robinson 1993) that once these wave packets collapse to spatial scales of the order of ∼20λ_De, transit time damping suddenly sets in and leads to the complete absorption of the collapsing fields, i.e., burnt-out empty density cavities will be left behind. We have not found any such three-dimensional wave packets in the present data set. In conclusion, we have clearly demonstrated that the Langmuir wave packets presented in this study provide what is believed to be the first observational evidence for the quasi-stable one-dimensional magnetic field aligned Langmuir solitons formed as a result of OTSI in the source regions of solar type III radio bursts. We have also demonstrated that these solitons are most probably trapped inside the self-generated density cavities. The implication of these findings is that the strong turbulence processes, such as the OTSI and Langmuir solitons, play important roles in the stabilization of the electron beams as well as in the emission of the fundamental and higher harmonic electromagnetic waves, which are the long-standing unresolved issues in solar radio astronomy. The SWAVES instruments include contributions from the Observatoire de Paris, University of Minnesota, University of California, Berkeley, and NASA/GSFC. G.T. acknowledges the support from the NASA STEREO and WIND projects. We thank the anonymous referee for valuable comments, which clarified ideas and presentation.
2019-04-22T13:12:28.790Z
2018-09-05T00:00:00.000
{ "year": 2018, "sha1": "25854ecfa73a4ad5171e6c2f0750357f50ede9eb", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3847/1538-4357/aad5e4", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "e380ecca6214a11bb9b5b09b6c663a18e3f838f6", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
10961246
pes2o/s2orc
v3-fos-license
SOX9 Duplication Linked to Intersex in Deer A complex network of genes determines sex in mammals. Here, we studied a European roe deer with an intersex phenotype that was consistent with a XY genotype with incomplete male-determination. Whole genome sequencing and quantitative real-time PCR analyses revealed a triple dose of the SOX9 gene, allowing insights into a new genetic defect in a wild animal. Introduction Sexual development in mammals depends on a complex network of regulatory genes controlled by the presence or absence of the testis-determining gene SRY (sex-determining region on the Y chromosome) [1]. The protein encoded by SRY up-regulates the expression of transcription factor SOX9. The activation and maintenance of SOX9 expression triggers other genes (e.g. SOX8) required for testis and male differentiation, whereas SOX9 itself actively suppresses the expression of genes involved in ovarian development. In the absence of SRY, the alternative, female pathway is activated, resulting in ovary formation [2]. Derangements in these processes cause malformations of internal and external genitalia, varying from sexual ambiguity to complete sex reversal. The sex reversal syndrome in humans and other mammals is a congenital abnormality, characterized by inconsistency between chromosomal and gonadal sex [3]. In humans, sex reversal is a rare disorder of sterile XX males or XY females, with an incidence of one in 20,000-25,000 males [4]. XX sex reversals occur in different mammals, including European roe deer (Capreolus capreolus) [5]. Phenotypes can range from females carrying antlers to true hermaphrodites, although pronounced sexual dimorphism is usually evident [5]. C. capreolus has a high population density in Europe [6] and is the commonest cloven-hoofed game in Germany [7]; nonetheless, encountering sexual defects in natural populations appears to be extremely rare. Here, we studied a roe deer specimen with an intersex phenotype, resulting from incomplete maledetermination. In order to explore the genetic basis of this defect, we sequenced the genome of roe deer. Results Fortuitously, we had access to a bagged, one year-old C. capreolus with a distinct intersex phenotype ( Figure 1). It had cranial outgrowths typical of a buck and a rear phenotype characteristic of a doe. Close inspection revealed no vaginal opening, abdominal testes and a retro-posed pizzle. In contrast to a healthy male C. capreolus, there was no evidence of the single copy amelogenin (AMEL) gene [8] in the Y chromosomespecific region, indicating a female sex chromosome status, in spite of male external genitalia. We then proceeded to explore the genetic basis of this defect by investigating ten genes [sex-determining region Y gene (SRY), SRY-related HMG-box gene 3 (SOX3), SRYrelated HMG box gene 9 (SOX9), SRY-related HMG box gene 10 (SOX10), R-spondin 1 (RSPO1), forkhead transcription factor FOXL2 (FOXL2), double-sex-and MAB3-related transcription factor 1 (DMRT1), fibroblast growth factor 9 (FGF9), Wilms tumor gene 1 (WT1), androgen receptor (AR)] which are all known, in humans and mice, to be involved in sex determination or SRY-negative XX sex reversal, if mutated [3]. Since no sequence data are presently available for roe deer, we sequenced genomic DNA from a healthy male European roe deer using Illumina technology to produce 920 million paired-end reads (21-fold coverage). 
We assembled contigs from these reads, identified the aforementioned ten full-length genes of roe deer by comparison with homologous genes in the bovine genome (90-95% similarity), and independently sequenced each of these genes in the intersex, male and female control deer to verify the sequences in assembled contigs. We identified 42 sequence variations in SOX3, SOX9, SOX10, RSPO1, FOXL2, DMRT1, FGF9, WT1 and AR (Table 1) but none in SRY. Most of these alterations (n=25) are located intronically or in untranslated regions, and do not involve splice sites or regulatory regions. All variations in exonic regions are silent, except for two, which were also present in healthy roe deer controls, thus excluding them as a causal link to intersexuality in the present roe deer case. Two sequence variations identified in the X-chromosomally-located genes AR and SOX3 were detected in a heterozygous state in the intersex deer. Together with the lack of AMEL and SRY gene sequences, these data indicate the sole presence of two X chromosomes, representing the female sex chromosome status. Since in humans, female-to-male sex reversal can be due to a translocation of Y chromosome sequences to an X chromosome encompassing SRY [9], we analyzed this gene. Yet, no SRY-specific fragment was detectable by PCR in the intersex deer, in contrast to XY male controls. Since the copy number of some genes have been shown to interfere with normal sexual differentiation [10], we investigated the dosages for SOX3, SOX9, SOX10, RSPO1, DMRT1, WT1, RSPO1, FOXL2 and AR using selected PCR amplicons for initial duplication deletion screening via quantitative real-time PCR analysis. While the copy numbers were identical in the intersex as compared with male and female control deer, the SOX9 gene showed a triple dosage compared with the controls. In order to exclude the possible involvement of a pseudogene, indepth analyses of all three individual exons, both introns and the 5'-and 3'-untranslated regions were performed for the intersex as well as for one female and one male control deer. The analysis revealed the presence of three copies of the SOX9 gene in the intersex roe deer, in contrast to controls. Also the dosage of the 5'-and 3'-untranslated regions of SOX9 was increased by 3-fold, but further 5'-upstream and 3'downstream regions showed normal, double dosage in the intersex individual ( Figure 2). Duplication breakpoints can be suspected > 1.5 kb downstream of SOX9 and in a 24 bp region 890 bp upstream of the gene, including the entire coding region of SOX9 and the promoter (compared with the highly homologous sequence of SOX9 promoters in humans and mouse [11]). This information suggests that the extra copy of the SOX9 gene is functionally active, although long-range PCR analyses did not reveal the exact location or orientation of the extra copy of SOX9. Discussion Here, we report on a specimen of C. capreolus with a distinct intersex phenotype. It had cranial outgrowths typical of a buck, and a rear phenotype characteristic of a doe. Close inspection revealed no vaginal opening, abdominal testes and a retroposed pizzle. Our results of molecular genetic testing demonstrate a case of SRY-negative XX sex reversal in the investigated intersex roe deer. Similar cases of SRY-negative XX sex reversal have been reported previously for dogs, goats, horses and pigs [12][13][14][15] but rarely in wild mammals, such as roe deer [5]. A number of genes other than SRY (e.g. 
WT1) have been reported as underlying causes of mammalian sex reversal in humans and mouse models [10], but the genetic causes have usually remained elusive. Several genes known to be involved in sex determination or SRY-negative XX sex reversal (SOX3, SOX10, RSPO1, DMRT1, WT1, RSPO1 and FOXL2) were inferred not to be linked to the phenotype in the present intersex roe deer by sequencing analyses and CNV analyses except of SOX9. For this gene, we identified a 3-fold increased gene dosage in this deer. SOX9 is a direct downstream target of SRY and plays a critical role in male sexual differentiation [16]. SOX9 mutations can lead to haplo-insufficiency, being responsible for campomelic dysplasia, a skeletal malformation syndrome often associated with XY sex reversal in humans [17]. In contrast, duplication of SOX9 has been implicated in XX sex reversal [18][19][20]; its ectopic expression in mice induces testis formation in XX gonads [21]. Together with this information, our findings indicate that the extra copy of SOX9 is sufficient to initiate testis differentiation in the absence of SRY, inferring the intersex phenotype in the present roe deer. In this case, the link between phenotype and genotype serves as an excellent example of how a deep sequencing-based approach can gain insights into naturally occurring genetic defects in wild animals, which also has broad implications for understanding complex sex disorders. Roe deer A one-year-old European roe deer was bagged in Schmallenberg (Germany; 51'14'' N, 8'22'' O; 700 m altitude). This deer has an intersex appearance, including short unbranched antlers and a displaced penis brush appearing from the distance as a Schürze (characteristic tuft of hair in females located just above the vaginal entry). Upon closer inspection the deer presented abnormal male external genitalia, namely small inguinal testicles which had not descended into the scrotum. DNA was isolated from muscle and kidney of the intersex deer. Two healthy European roe deer specimens with normal sexual appearance (bagged in other regions of Germany) were used as male and female controls, respectively. For whole-genome sequencing, EDTAblood of a healthy male European roe deer from Hohenstein-Born (Germany; 50'09'' N, 8'05'' O; 400 m altitude) was used. Genomic DNA was extracted from different tissues (blood cells, muscle and kidney) using a standard protocol [22]. All investigated European roe deer specimens were killed by gunshots from a distance. JTE is a licensed hunter in Germany (Jahres Jagdschein Nr. 86/2002, Bundesrepublik Deutschland). Based on Rehwild-Abschusspläne (shooting schedules for roe deer), he has to bag a certain number of roe deer specimen in a given area (Revier). Such specimens were used for this investigation without having been killed specifically for this study. Whole genome sequencing High molecular weight genomic DNA was isolated from roe deer (C. capreolus), and DNA integrity was confirmed. A shortinsert (300 bp) genomic DNA library was prepared and pairedend sequenced using the Illumina sequencing platform (Illumina HiSeq) according to manufacturer's instructions. The sequence data generated were verified, and low quality sequences removed. The size of the genome and the level of heterozygosity within the deer sample used for sequencing were estimated by establishing the frequency of occurrence of each 17 bp k-mer within the genomic sequence data set. 
Genome size was estimated using a modification of the Lander Waterman algorithm, where the haploid genome length in bp is: G = (N×(L-K+1)-B)/D, where N is the read length sequenced in bp, L is the mean length of sequence reads, K is k-mer length (defined here as 17 bp) and B is the number of k-mers occurring less than four times. Heterozygosity was evaluated throughout the genome assembly by assessing the distribution of the k-mer frequency for the sequence data set (see Figure S1 in File S1). Paired-end sequence data was assembled using SOAPdenovo [23] employing a k-mer value of 43 or 63 bases (see Table S1 in File S1). Assembly quality and completeness of each assembly was assessed based on the minimum length of sequence contigs and scaffolds of > 100 bp which contained 50% and 90% of the sequence data (N50 and N90, respectively). Assembled genome scaffolds have been deposited in public genome resource databases (http:// bioinfosecond.vet.unimelb.edu.au/GasserData/Deer/deer.html; EMBL study accession PRJEB4372). Genomic DNA libraries (300 bp insert size) were sequenced on the Illumina platform generating a total of 918,961,602 paired-end sequences (average read length of 96 bases). The genome size was estimated to be ~3 .54 Gb, suggesting that sequence data permitted an average of 21-fold coverage of the genome. Genomes were assembled using k-mers of 43 or 63 and scaffolds were used to generate a sequence homology BLAST database (http://gasser-research.vet.unimelb.edu.au/ Deer/wblast6.html). PCR amplification, fragment analysis and Sanger DNA sequencing For sexing of the investigated European roe deer with intersexual appearance, the amelogenin (AMEL) gene was amplified based on the genomic sequence of Bos taurus (assembly Baylor Btau_4.6.1/bosTau7, The University of California Santa Cruz (UCSC) Genome browser database) under standard conditions. Fragment analyses were performed as described previously [24] on a Beckman-Coulter CEQ 8000 capillary sequencer following standard protocols and using the included software. The described primer sequences from Pajares et al. (2007) were optimized under our own conditions. To investigate potential translocation of SRY sequences from Y to X chromosome, this gene was amplified by PCR using primers from Takahashi et al. (1998). As internal quality control, a bovine autosomal microsatellite locus BOVIRBP (bovine interphotoreceptor retinoid-binding protein) [25] was amplified by PCR with published primers [26]. All male control samples showed the respective amplifiable sequences. To investigate the localization and organization of the extra copy of the SOX9 gene, the respective regions were analyzed with several primer combinations by long-range PCR analyses using expand high fidelity PCR system (Roche). All primer sequences used in this study are listed in Table S2 in File S1. Quantitative real-time PCR The quantitative real-time PCR was performed to assess the copy number of the genes SOX3, SOX9, SOX10, RSPO1, FOXL2, DMRT1, FGF9, WT1, AR and SRY in the genomic DNA of the intersex roe deer as well as the male and female controls. Primers specific to these genes were designed using Primer Express software v2.0 (Applied Biosystems). All realtime PCR assays were carried out using the KAPA TM SYBR Fast qPCR Master Mix (peqlab) as described by the manufacturer on a StepOnePlus TM Real-time PCR detection system (Applied Biosystems). 
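The genome-size estimate described above, G = (N × (L − K + 1) − B)/D, can be sketched as follows. The read count, read length, and k-mer size are the ones reported in the text; the values of B (k-mers occurring fewer than four times) and D (the k-mer coverage depth) are not reported, so the placeholders below are illustrative only.

```python
# Sketch of the modified Lander-Waterman genome-size estimate described above
N = 918_961_602   # number of paired-end sequence reads (from the text)
L = 96            # mean read length in bp (from the text)
K = 17            # k-mer length (from the text)

# Placeholder values -- B and D are not reported in the text
B = 1_500_000_000 # k-mers occurring fewer than four times (assumed)
D = 20            # k-mer coverage depth at the distribution peak (assumed)

total_kmers = N * (L - K + 1)
genome_size_bp = (total_kmers - B) / D
print(f"Estimated haploid genome size: {genome_size_bp / 1e9:.2f} Gb")
# With B and D close to their true values this lands near the ~3.54 Gb quoted in the text.
```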
Baseline and threshold values were set automatically and threshold cycle (Ct) values were determined using OneStepPlus software (Applied Biosystems). Copy numbers were calculated using the ΔΔ-CT method [27] with normalization to the house-keeping gene encoding albumin. A healthy female roe deer with normal sexual appearance was used as standard for the PCR assay. All measurements were run in triplicate in at least two independent analyses. A statistical analysis was performed for quantitative real-time PCR results of the entire SOX9 gene and regions 5' and 3' of this gene. The mean gene dosage ratio for the region spanning 0.9kb 5' to 1.5kb 3' from SOX9 gene from the intersex roe deer relative to the female control roe deer was 1.6513±0.1321 (p<0.0001; unpaired t-test). Supporting Information File S1. Supporting information. Figure S1, Distribution of the k-mer frequency for the roe deer genomic DNA sequence data set. Table S1, Detailed information of the de novo deer assembly. Table S2, Primer sequences used in this study.
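A minimal sketch of the ΔΔ-Ct copy-number calculation described above (normalization to the albumin reference gene, with the healthy female roe deer as calibrator) is given below; the Ct values are invented purely for illustration and are not taken from the study.

```python
# Hedged sketch of the delta-delta-Ct relative copy-number estimate described above.
def relative_copy_number(ct_target, ct_albumin, ct_target_cal, ct_albumin_cal):
    """Relative quantity via 2^(-ddCt): target gene normalized to albumin,
    calibrated against the female control sample."""
    d_ct_sample = ct_target - ct_albumin              # delta Ct in the test animal
    d_ct_calibrator = ct_target_cal - ct_albumin_cal  # delta Ct in the calibrator
    return 2 ** (-(d_ct_sample - d_ct_calibrator))

# Hypothetical Ct values for a SOX9 amplicon (intersex deer vs. female control)
ratio = relative_copy_number(ct_target=24.1, ct_albumin=25.0,
                             ct_target_cal=24.7, ct_albumin_cal=25.0)
print(f"SOX9 dosage relative to the female control: {ratio:.2f}")
# A ratio of about 1.5 corresponds to three gene copies instead of the normal two.
```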
2016-05-12T22:15:10.714Z
2013-09-06T00:00:00.000
{ "year": 2013, "sha1": "1d87ec77f3b3cf591f7cdd3d853bb61db569e52b", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0073734&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1d87ec77f3b3cf591f7cdd3d853bb61db569e52b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
118270936
pes2o/s2orc
v3-fos-license
Wave packet dynamics in 2DEG with spin orbit coupling: splitting and zitterbewegung We study the effect of splitting and zitterbewegung of 1D and 2D electron wave packets in a semiconductor quantum well under the influence of the Rashba spin orbit coupling. Results of our investigations show that the spin orbit interaction induces dramatic qualitative changes in the evolution of a spin polarized wave packet. The initial wave packet splits into two parts with different spin polarization propagating with unequal group velocities. This splitting appears due to the presence of two branches of the electron spectrum corresponding to the stationary states with different chirality. It is also demonstrated that in the presence of an external magnetic field B perpendicular to the electron gas plane the wave packet splits into two parts which rotate with different cyclotron frequencies. It is shown that after some periods the electron density distributes around the cyclotron orbit and the motion acquires an irregular character. Our calculations were made for both cases of weak and strong spin orbit coupling. I. INTRODUCTION Producing and detecting spin polarized currents in semiconductor nonmagnetic devices is the ultimate goal of spintronics. The intrinsic spin orbit interaction 1 existing in low dimensional systems, which couples the electron momentum to its spin, is one of the most promising tools for realizing spin polarized transport. For this reason, during the last years a substantial amount of work has been devoted to studying the effects of the spin orbit interaction on the transport properties of nanostructures (for a review, see, e.g., 2,3,4 ). The electron wave packet dynamics, including the problem of zitterbewegung, in a semiconductor quantum well under the influence of the Rashba and Dresselhaus spin orbit coupling was first considered by Schliemann, Loss and Westervelt 5 . In this work the oscillatory motion of the electron wave packets reminiscent of the zitterbewegung of relativistic electrons was studied for free electron motion, i.e., in the absence of electric or magnetic fields. The authors of 5 predicted the resonance amplification of zitterbewegung oscillations for the electron moving in a quantum wire with a parabolic confinement potential and proposed to observe this fundamental phenomenon experimentally using high resolution scanning probe microscopy imaging techniques. The zitterbewegung of the heavy and light holes in 3D semiconductors was investigated in 6 . In that paper the semiclassical motion of holes in the presence of a constant electric field was studied by numerical solution of the Heisenberg equations for the momentum and spin operators in the Luttinger model of the spectrum. It was shown that the hole semiclassical trajectories contain rapid small amplitude oscillations reminiscent of the zitterbewegung of relativistic electrons. It should be noted, however, that the spatial structure of the wave packet and the change of its shape due to the splitting effect were not considered in 5,6 . At the same time the splitting of spin polarized electron beams in systems with spin orbit coupling was investigated in a series of works. In particular, the authors of papers 7,8 proposed to use the lateral interface between two regions in a gated two-dimensional heterostructure with different strengths of spin orbit coupling to polarize the electrons.
They have shown theoretically that in this structure a beam with a nonzero angle of incidence splits into some spin polarization components propagating at different angles. The similar effect of electron spinpolarized reflection in heterostructures and spatial separation of the electron beams after reflection has been observed experimentally in 9 . The transverse electron focusing in systems with spin orbit coupling at the presence of perpendicular magnetic field was theoretically analyzed in 10 where it was shown that in the weak magnetic field regime and for a given energy, the two branches of states have different cyclotron radii. The effect of spatial separation of the electron trajectories of different spin states in a perpendicular magnetic field has been experimentally observed in 11 . In this work we study the striking dynamics of the electron wave packets in a narrow A 3 B 5 quantum well at the presence of the spin orbit k-linear Rashba coupling, which arise due to structural inversion ("up-down") asymmetry. The splitting of the wave packets in two parts appear due to the presence of the electron states with "plus" and "minus" chirality, which propagate with different group velocity. These two parts of the split packet can be characterized by different spin density. It is found that electron trajectories contain small amplitude damped oscillation. We show that the packet splitting leads to the damping of zitterbewegung. The splitting and zitterbewegung of wave packet is naturally accompanied by its broadening due to effect of dispersion. We investigate also the atypical cyclotron dynamics of the wave packet in a perpendicular magnetic field. It was shown that due to the spin orbit coupling the packet with spin parallel to the magnetic field splits into two parts which rotate with different cyclotron frequencies. We determine the moments when two parts of the packet are located at opposite points of the cyclotron orbit and after that they return many times back to their initial state. With the time due to the incommensurability of the cyclotron frequencies and the ordinary packet broadening the electron density distributes randomly around the cyclotron orbit. All our calculations were made for the material parameters of the real semiconductor structures with a relatively strong and weak spin orbit and Zeeman interaction. The paper is organized as follows. In Sec. II we introduce the Green functions for two dimensional electrons in the presence of Rashba spin orbit interaction and analyze the evolution of 1D wave packet. The analytical and numerical results illustrate the effects of packet splitting and zitterbewegung. In section III we describe in details the time development of the 2D wave packets. Finally, in Sec. IV we discus the manifestation of the spin orbit interaction in the evolution of coherent wave packet in a magnetic field perpendicular to electron gas plane. The splitting of the initial coherent packet and distribution electron probability via cyclotron orbit is considered. Section V concludes with a discussion of the results. The Appendix provides the mathematical details necessary to obtain Eqs. (36a) and (36b). II. THE DYNAMIC OF THE ONE-DIMENSIONAL WAVE PACKETS In this section we consider the specific character of the wave packet dynamics in the systems with Rashba spin orbit coupling 1 . 
The Hamiltonian of the system under consideration is given by Eq. (1), where p = −iħ∇ is the momentum operator, m is the electron effective mass, α is the Rashba coupling constant, and the components of the vector σ denote the spin Pauli matrices. The eigenfunctions for the in-plane motion, identified by the quantum numbers p(p_x, p_y), are given by Eq. (2). Here ϕ is the angle between the electron momentum p and the x axis, so e^{iϕ} = (p_x + ip_y)/p, and s = ±1 denotes the branch index. The energy spectrum of the Hamiltonian (1) corresponding to the two branches has the form of Eq. (3), where p = √(p_x² + p_y²). Using the definition v̂ = dr/dt = (i/ħ)[H, r], one can obtain from Eq. (1) the velocity operator components v̂_x and v̂_y. To analyze the time evolution of the electron initial states we use the Green's function of the nonstationary equation, which is the non-diagonal 2 × 2 matrix (5). Here i, k = 1, 2 are the matrix indices, and the matrix elements can be written as integrals (Eq. (6)). In the present section we examine in detail the dynamics of the quasi-1D wave packet in a 2D system with spin orbit coupling. This problem allows an analytical solution. Let the wave function at the initial time t = 0 be a plane wave with wave number p_0x modulated by a Gaussian profile and spin polarized along the z direction (Eq. (7)), where the coefficient C is equal to (1/(dL_y√π))^{1/2} and L_y is the size of the system in the y direction. The variance of the position operator ⟨(Δx)²⟩ in this case is equal to d²/2, and the variance ⟨(Δy)²⟩ exceeds this value. The variance of the momentum operator p_x is ⟨(Δp_x)²⟩ = ħ²/2d², and the average p̂_x is equal to p_0x. One may consider the initial wave function as the limiting case of a 2D packet with the width along the y direction much greater than along x, i.e., L_y ≫ d. The electron wave function at any arbitrary moment of time can be found with the help of the Green's function (Eq. (8)), where the matrix elements G_11 and G_21 of the matrix (5) are determined by Eqs. (2), (3) and (6). By using the formula e^{iq cos ψ} = J_0(q) + 2 Σ_{n=1}^{∞} J_{2n}(q) cos(2nψ) + 2i Σ_{n=1}^{∞} J_{2n−1}(q) sin((2n − 1)ψ) (11) and by integrating over the angle variable in Eqs. (9), (10), we finally obtain Eqs. (12) and (13), where J_0 and J_1 are Bessel functions. Substituting Eqs. (12), (13) and (7) into Eq. (8) and integrating over x′ and y′, we find the analytical expressions for the spinor components ψ_1,2(x, t). It should be noted that the two electron bands with chirality "plus" and "minus" give different contributions to the electron wave functions. The calculation of the expressions for Ψ_1,2 leads to the electron probability densities |Ψ_1|² and |Ψ_2|² at any arbitrary moment of time, Eqs. (14a) and (14b), where γ = ħ/d²m is the inverse broadening time and p_0x = ħk_0. As follows from Eqs. (14a), (14b), the shape of the function ρ(x, t) essentially depends on the parameter η = m²α²d²/ħ². When the momentum variance is much larger than (mα)² and the inequality η ≪ 1 takes place, the evolution looks the same as in the absence of the Rashba term. Otherwise, when η ≫ 1, the initial wave packet splits into two parts which propagate with different group velocities, so the distance between these two parts increases linearly in time. These two parts correspond to the first and second terms in the square brackets in Eq. (14a) and Eq. (14b). The third terms in Eqs. (14a), (14b) describe the oscillations of the components of the electron density |Ψ_1|² and |Ψ_2|² in the region of overlap of the two split parts of the packet. It is clear that these oscillations originate from the interference between the states of the different spectrum branches.
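To make the splitting criterion discussed above concrete, the sketch below evaluates the parameter η = m²α²d²/ħ² for a few illustrative packet widths, using the GaAs-like effective mass and Rashba velocity quoted later in the text; the separation rate 2α assumes the two chirality branches ε_±(p) = p²/2m ± αp, which is consistent with the group velocities described in the text but is stated here as an assumption.

```python
import math

hbar = 1.0546e-34              # J*s
m = 0.067 * 9.109e-31          # GaAs-like effective mass quoted later in the text [kg]
alpha = 3.6e5 * 1e-2           # Rashba velocity, 3.6e5 cm/s -> m/s (from the text)

for d_nm in (100, 500, 2000):              # illustrative initial packet widths
    d = d_nm * 1e-9
    eta = (m * alpha * d / hbar) ** 2      # splitting parameter eta = m^2 alpha^2 d^2 / hbar^2
    regime = "splits into two parts" if eta > 1 else "evolves as without the Rashba term"
    print(f"d = {d_nm:4d} nm   eta = {eta:6.2f}   -> packet {regime}")

# Assuming eps_(+/-)(p) = p^2/2m +/- alpha*p, the two branches differ in group
# velocity by 2*alpha, so the split parts separate linearly in time:
t = 10e-12                                  # 10 ps, illustrative
print(f"separation after 10 ps ~ {2 * alpha * t * 1e9:.0f} nm")
```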
When the two parts of the packet move away from each other, the amplitude of the oscillations decreases. The period of these oscillations along the x direction depends on the initial width of the packet d and equals Δx = πd²(1 + γ²t²)/(αγt²). So, if the inequality γt ≪ 1 takes place, the period of oscillation decreases with time and equals Δx = πmd⁴/(αħt²), and when γt ≫ 1 the oscillation period does not depend on time, Δx = πħ/(mα). Here one can clearly see that the initial Gaussian wave packet, Eq. (7), splits up at t > 0 into two parts propagating along the x direction. The width of each part of the packet increases in time as in the case of a free particle. To analyze the spin dynamics one can consider the time evolution of the spin density. Using Eqs. (14a) and (14b) we immediately find the expression for the spin density s_z = (ħ/2)(|Ψ_1(r, t)|² − |Ψ_2(r, t)|²), which demonstrates oscillatory behavior as a function of x (see Fig. 1(b)). The period of the oscillations here is the same as for the functions |Ψ_1,2(x, t)|. For the spin density s_y(x, t) the following result can be obtained. According to this expression, both parts of the initial wave packet moving along the x direction with different velocities are characterized by opposite spin orientations (at the same time the average spin component S_y = ∫ s_y(x, t)dr is equal to zero). Note that the components of the wave function depend only on the coordinate x, which leads to p̄_y = ⟨p_y⟩ = 0; however, the velocity v̄_y(t) ≠ 0. Indeed, using the definition of the velocity operator, one obtains the corresponding expressions for v̄_y(t). As follows from these equations, the average velocity v̄_y performs oscillations in the transverse direction (zitterbewegung or jittering) with the frequency 2k_0α, and the damping time is determined by the parameter d/α. III. EVOLUTION OF TWO-DIMENSIONAL PACKETS IN THE PRESENCE OF SPIN ORBIT COUPLING We now consider the evolution of a two-dimensional wave packet in the presence of spin orbit coupling. Let us consider the following form of the Gaussian packet at the initial moment t = 0 (Eq. (18)), where p_0x = ħk_0 is the average momentum and C = 1/(√π d). Then, using the Green's function method, we arrive after some algebra at the equations for the components of the spinor in momentum space. After that, Ψ_1,2(r, t) can be obtained directly by a 2D Fourier transform of C_1,2(p, t), where J_1 and I_0 are the Bessel and the modified Bessel functions of the first and the zeroth order and ϕ is the azimuthal angle in the xy plane. These expressions become simpler if the average momentum of the wave packet is equal to zero, i.e., p_0x = 0. In this case, as for the 1D packet, the shape of the full electron density ρ(x, t) = |Ψ_1|² + |Ψ_2|² at t > 0 depends on the parameter η = m²α²d²/ħ². In Fig. 2 we show the electron density ρ(x, t) for the case p_0x = 0 at the time t = 5 (in units of d/α) and η = 2.7. As one can see, the spin orbit coupling qualitatively changes the character of the wave packet evolution, so that in the course of time the initial Gaussian packet turns into two axially symmetric parts. As follows from our analytical and numerical calculations, the outer part propagates with a group velocity which is greater than α and the inner part moves with a group velocity lower than α. If η ≪ 1, i.e., the packet is narrow enough, its evolution reduces to the standard broadening of the Gaussian packet of a free particle. Fig. 3(a) shows the packet evolution for the case p_0x = ħk_0 ≠ 0.
It is clear that in this case the cylindrical symmetry is absent, and the two maxima of the electron density spread along the x direction with unequal velocities. Each of these two parts is spin polarized. Fig. 3(b) illustrates the distribution of the spin polarization s_y(x, y, t) for the initial state polarized along the z axis, Eq. (18). It is a smooth function which has different signs in the regions of the two maxima of the electron density. When p̄_x ≠ 0, the motion of the wave packet center along x is accompanied by the oscillation of the packet center in the perpendicular direction, or zitterbewegung. Below we consider the effect of damping of the zitterbewegung oscillations for a 2D packet, which was not predicted in 7 . Using Eq. (22a) and Eq. (22b) we calculate the average value of the operator ŷ = iħ ∂/∂p_y and obtain for t > 0 the result ȳ(t). In the case when the wave packet is wide enough and the inequality a = dk_0 ≫ 1 takes place, one can obtain a simple asymptotic formula for ȳ(t). To show this we represent Eq. (22) as a sum of two terms, introducing a dimensionless integration variable proportional to pd. To evaluate Z we replace the modified Bessel function I_1(2dk_0u) by its asymptotic formula I_1(x) ≈ e^x/√(2πx), which is valid for the case k_0d ≫ 1. After that, the integral with respect to u can be evaluated using the stationary phase method, which leads to a simple result. Substituting this expression into Eq. (23), we finally obtain Eq. (24). The last result demonstrates clearly that ȳ(t) experiences damped oscillations with the frequency 2αk_0, decaying over the time d/α. In real 2D structures the frequency of the zitterbewegung is of the order of 10^11−10^12 s^−1 for k_0 ≈ 10^5−10^6 cm^−1. The amplitude of the zitterbewegung is proportional to the electron wavelength along x. In Fig. 4 we plot the function ȳ(t) determined by Eq. (22), which demonstrates, in accordance with Eq. (24), the effect of zitterbewegung damping. When t ≫ d/α the oscillations stop and the center of the wave packet is shifted in the direction perpendicular to the group velocity by the value 1/(2k_0). The last result coincides with 7 . Since the packet moves with a constant velocity, the time oscillations of ȳ(t) can easily be converted to the oscillations of the wave packet center in real (x, y) space. IV. CYCLOTRON DYNAMICS OF 2D WAVE PACKET IN A PERPENDICULAR MAGNETIC FIELD In this section we examine the cyclotron dynamics of an electron wave packet rotating in a magnetic field B = (0, 0, B) which is perpendicular to the plane of the 2D electron gas. In this case the one-electron Hamiltonian including the Rashba term is given by Eq. (25). Here e is the electron charge, m is the effective mass, p_x,y are the momentum operator components, α is the parameter of the Rashba coupling, g is the Zeeman factor, and μ_B is the Bohr magneton. Below we use the Landau gauge for the vector potential, A = (−By, 0, 0). Then the eigenvalues and the eigenfunctions of the Hamiltonian (25), indicated by the quantum numbers n, k_x, s = ±1 and corresponding to two branches of levels, can be evaluated analytically (see, e.g., 13 ). The eigenvalues are given by Eq. (26), where E_0^+ = ħω_c/2 − gμ_B B is the zero Landau level, n = 1, 2, 3, . . ., ω_c = eB/mc is the cyclotron frequency, and ℓ_B = √(ħ/mω_c) is the magnetic length. The eigenspinors, Eq. (27), are expressed through the coefficients D_n and the linear oscillator wave functions, with y_c = ℓ_B²k_x being the center of the oscillator. It should be noted that for a sufficiently weak magnetic field the dependence of the energy E_n^− on the quantum number n (n ≫ 1) resembles the behavior of the function ε_−(p), Eq. (3).
Namely, for small n the values of the energy E_n^+ are negative, decreasing with n, as for the hole states. Using Eqs. (26) and (27) we can obtain the components of the matrix Green's function, which permits us to find the time evolution of the initial state. The usual definition of the time evolution via the Green's function is employed, with the time-dependent coefficients f_n(t) and g_n(t) given by Eqs. (29a) and (29b). Let the initial state coincide with the wave function of the coherent state in a magnetic field, Eq. (31). Such a choice of the wave function Ψ(r, 0) is motivated by the following: as is well known, in the absence of spin orbit coupling the dynamics of coherent states in a magnetic field looks like the dynamics of a classical particle. To analyze the time evolution in our case one needs to calculate the wave function at t > 0. Straightforward algebra using Eqs. (29a), (29b), and (31) leads to the final expressions (32a) and (32b), where ϕ(x, y, u) = iux/ℓ_B − (p_0xℓ_B/ħ − u)²/2 − u²/4 − (y − uℓ_B)²/2ℓ_B². The electron density obtained by numerical evaluation of the integrals in Eqs. (32a) and (32b) is represented in Fig. 5 for relatively weak spin orbit coupling and a strong magnetic field. The calculations were made for the material parameters of a two-dimensional GaAs heterostructure: m = 0.067m_0, α = 3.6·10^5 cm·s^−1, g = −0.44, B = 1 T, and k_0x = p_0x/ħ = 1.5·10^6 cm^−1. It is not difficult to verify that the series in Eqs. (32a) and (32b) converge very rapidly as n increases. So for our parameters it suffices to take n_max = 25 to calculate the components ψ_1(r, t) and ψ_2(r, t). At t > 0 the initial wave packet (Fig. 5(a)) splits into two parts (Fig. 5(b)) which "rotate" with different incommensurable cyclotron frequencies. In accordance with Eq. (26), these frequencies can be determined by the expressions (33). The effective n in these expressions is connected with the cyclotron radius via the relation R_c(t) = p_0x/(mω_c) = √(2n) ℓ_B. In the case of a weak spin orbit coupling or a strong magnetic field, one can obtain from (33) the approximate expression for the difference between the cyclotron frequencies. Fig. 5(b) demonstrates, for the case ς ≪ 1, the distribution of the electron density at the moment when the two parts are located at opposite points of the cyclotron orbit. The corresponding time can be determined from the relation (ω_c^+ − ω_c^−)t_0 = π, and hence for the GaAs structure, where the frequency difference is governed by the scale 2α²m/ħ, we obtain t_0 ≈ 45·(2π/ω_c). After several cyclotron periods the two split packets merge again, which is demonstrated in Fig. 5(c). With time, due to the effect of the broadening, the electron probability distributes randomly around the cyclotron orbit, as shown in Fig. 5(d). In the opposite case of relatively strong spin orbit coupling or a weak magnetic field, when the inequality ς = 2nα²ħ²/(ℓ_B²E_0²) ≫ 1 holds true, the difference between the two cyclotron frequencies follows from Eq. (33) in a different approximate form. For the InGaAs structure with the parameters m = 0.05m_0, α = 3.6·10^6 cm·s^−1, g = −10, B = 1 T, and k_0x = p_0x/ħ = 1.5·10^6 cm^−1, we have ς = 8 and the divergence time t_0 ≈ 2.3·(2π/ω_c). One can analyze the effects of the periodic splitting and reshaping of the wave packet in a magnetic field, as well as the process of the distribution around the cyclotron orbit, by considering the time dependence of the cyclotron radius determined as R(t) = √({x̄(t)}² + {ȳ(t)}²). To do this we represent the average values of the coordinates x_1 = x and x_2 = y as expectation values over the wave packet, where ψ_1 and ψ_2 are determined by Eqs. (32a) and (32b).
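Before turning to the explicit expressions for x̄(t) and ȳ(t), a rough numerical cross-check of the cyclotron quantities used above may be useful. The sketch below evaluates ω_c, the cyclotron period T_c, the magnetic length ℓ_B, the cyclotron radius R_c = p_0x/(mω_c), and the effective Landau index from R_c = √(2n)ℓ_B for the GaAs parameter set quoted in the text (SI units are used, so ω_c = eB/m).

```python
import math

hbar, e = 1.0546e-34, 1.602e-19
m = 0.067 * 9.109e-31          # GaAs effective mass (from the text)
B = 1.0                        # magnetic field [T] (from the text)
k0 = 1.5e6 * 1e2               # k_0x = 1.5e6 cm^-1 -> m^-1 (from the text)

omega_c = e * B / m                      # cyclotron frequency (eB/mc in SI form)
T_c = 2 * math.pi / omega_c              # cyclotron period
l_B = math.sqrt(hbar / (m * omega_c))    # magnetic length
R_c = hbar * k0 / (m * omega_c)          # cyclotron radius R_c = p_0x / (m omega_c)
n_eff = 0.5 * (R_c / l_B) ** 2           # effective Landau index from R_c = sqrt(2n) l_B

print(f"omega_c = {omega_c:.2e} rad/s, T_c = {T_c * 1e12:.2f} ps")
print(f"l_B = {l_B * 1e9:.1f} nm, R_c = {R_c * 1e9:.1f} nm, effective n ~ {n_eff:.1f}")
# An effective index of order 10 is consistent with n_max = 25 sufficing in Eqs. (32a), (32b).
```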
The lengthy calculations (see the Appendix) eventually yield the explicit expressions for x̄(t) and ȳ(t), Eqs. (36a) and (36b). As one can see, the dependence of x̄(t) and ȳ(t) on time is determined both by the factors cos ω_c t and sin ω_c t and by functions which describe the additional time dependence due to the spin precession. Note that the frequencies δ_k entering these functions are incommensurable. As a check on this formalism, it is not difficult to show that in the absence of the Rashba coupling (α = 0), Eqs. (36a) and (36b) reduce to expressions that correspond to the classical motion of a charged particle in the magnetic field with a constant radius. The time dependence of the cyclotron radius R(t) in the system with Rashba coupling is presented in Fig. 6. It is clear that the oscillations of R(t) are connected with the effects of the periodic splitting and reshaping of the wave packets. The radius has minimal values at the moments when the two parts of the packet are located at opposite points of the cyclotron orbit. This situation is shown in Fig. 5(b); the first minimum is labeled by the letter b in Fig. 6. One can see that the time of the first minimum approximately coincides with our estimation made above: t_0 ≈ 45T_c. The radius is maximal at the moments of the packet reshaping, as shown in Fig. 5(c) (two of these points are labeled by the letters a and c). Due to the effects of the incommensurability of the cyclotron frequencies and the packet broadening, the amplitude of the oscillations decreases with time. After the electron density becomes distributed around the cyclotron orbit, the amplitude of the oscillations ceases and the electron density distribution acquires an irregular character (Fig. 5(d)). We also evaluate the distribution of the electron density for the structure with relatively strong spin orbit coupling. For such systems, instead of the repeated process of the splitting and restoring of the wave packet discussed above, the transition to the irregular distribution along the cyclotron orbit can occur within a time of the order of one cyclotron period. This conclusion is confirmed by the simple estimation made for the InGaAs/GaAs structure discussed above. V. CONCLUSIONS We have analyzed the evolution of 1D and 2D wave packets in a 2D electron gas with linear Rashba spin orbit coupling. We showed that the electron packet dynamics differs drastically from the usual quantum dynamics of electrons with a parabolic energy spectrum. Depending on the initial spin polarization, the packet splits into two parts which propagate with different velocities and have different spin orientations. While the two parts of the wave packet overlap, the packet center performs oscillations in much the same way as for a relativistic particle. The direction of these oscillations is perpendicular to the packet group velocity. When the distance between the split parts exceeds the initial width of the packet, these oscillations stop. In 2D semiconductor structures placed in a perpendicular magnetic field, the spin orbit coupling changes the cyclotron dynamics of charged particles. As in the absence of a magnetic field, the initial packet splits into two parts, which rotate in the perpendicular magnetic field with different incommensurable cyclotron frequencies. As a result, after some cyclotron periods these parts join again. The corresponding time t_0 essentially depends upon the ratio of the spin orbit coupling energy to the distance between the Landau levels (Eq. (26)), characterized by the parameter ς = 2nα²ħ²/(ℓ_B²E_0²). Thus, for the systems with weak and relatively strong spin orbit coupling, e.g.
GaAs and InGaAs heterostructures, the time t_0 equals 45T_c and 2.3T_c, respectively. The splitting and zitterbewegung of the wave packets in nanostructures with spin-orbit coupling can be observed experimentally in low dimensional structures. In particular, these effects should determine the electron dynamics and the high-frequency characteristics of the field effect transistor proposed by Datta and Das 13 , and of other spintronic devices. Simple estimations show that during the time of the wave packet propagation through the ballistic transistor channel, where the distance between the emitter and collector is of the order of 1 μm, the distance between the two split parts of the wave packet becomes comparable with its initial size. In this situation the high-frequency characteristics of the field effect transistor should be substantially affected by the spin-orbit coupling. Moreover, the atypical semiclassical dynamics of a spin-orbit system placed in a magnetic field will influence the shape of the cyclotron resonance line in 2D systems with spin orbit coupling. An important feature of these experiments is that the electron transport is in the ballistic regime, and thus the momentum relaxation time τ should be much greater than the typical splitting time.
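A back-of-the-envelope version of the transistor-channel estimate above can be written as follows; the Rashba velocity and channel length are the values given in the text, whereas the Fermi velocity is an assumed typical 2DEG value, and the separation rate 2α between the two chirality branches is the same assumption as before.

```python
channel_length = 1e-6      # emitter-collector distance ~1 micron (from the text)
alpha = 3.6e6 * 1e-2       # Rashba velocity for the InGaAs structure, cm/s -> m/s (from the text)
v_fermi = 6e5              # assumed typical 2DEG Fermi velocity (~6e7 cm/s), not from the text

transit_time = channel_length / v_fermi   # ballistic transit time through the channel
separation = 2 * alpha * transit_time     # split parts separate at the assumed rate 2*alpha

print(f"transit time ~ {transit_time * 1e12:.1f} ps, packet splitting ~ {separation * 1e9:.0f} nm")
# ~100 nm, i.e. comparable with a typical initial packet size, as stated above.
```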
2008-06-07T09:16:37.000Z
2008-05-29T00:00:00.000
{ "year": 2008, "sha1": "3f36c2dd899bb58002813fca87b7bfbc149b6235", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0805.4489", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3f36c2dd899bb58002813fca87b7bfbc149b6235", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
253395770
pes2o/s2orc
v3-fos-license
Older Adults and Social Isolation and Loneliness During the COVID-19 Pandemic: An Integrated Review of Patterns, Effects, and Interventions Abstract A scoping review was conducted to identify patterns, effects, and interventions to address social isolation and loneliness among community-dwelling older adult populations during the COVID-19 pandemic. We also integrated (1) data from the Canadian Longitudinal Study on Aging (CLSA) and (2) a scan of Canadian grey literature on pandemic interventions. CLSA data showed estimated relative increases in loneliness ranging between 33 and 67 per cent depending on age/gender group. International studies also reported increases in levels of loneliness, as well as strong associations between loneliness and depression during the pandemic. Literature has primarily emphasized the use of technology-based interventions to reduce social isolation and loneliness. Application of socio-ecological and resilience frameworks suggests that researchers should focus on exploring the wider array of potential pandemic age-friendly interventions (e.g., outdoor activities, intergenerational programs, and other outreach approaches) and strength-based approaches (e.g., building community and system-level capacity) that may be useful for reducing social isolation and loneliness. Introduction The new Coronavirus (COVID-19) is a highly contagious disease that was discovered near the end of 2019. The rapid and ubiquitous spread of COVID-19 has produced a global pandemic that has created new challenges for public health, continuing and long-term care (LTC) systems, community support organizations, businesses and the economy, families, and individuals. COVID-19 has been conceptualized as a "gero-pandemic", defined as a disease that has spread globally with heightened significance and deleterious consequences for older populations (Wister & Speechley, 2020). This has raised the profile of aging-related pandemic challenges facing societies and older individuals. As of the end of January 2022, COVID-19 cases surpassed 2,900,000 in Canada and 364,000,000 worldwide. The number of deaths has passed 33,000 in Canada and has exceeded 5,600,000 globally (Government of Canada, 2022). Approximately 15% of positive cases are among persons 60 years of age and older, and over 90% of deaths are among this age group (Government of Canada, 2022). Furthermore, residents of long-term care facilities account for approximately 3 per cent of COVID-19 cases and 43 per cent of deaths (Canadian Institute for Health Information, 2021). Yet, those living in the community also face risk of infection, and the effects of the pandemic response (Cohen & Tavares, 2020). Although age has become a major focal point in the pandemic (Morrow-Howell, Galucia, & Swinford, 2020;Shahid et al., 2020), there are also other aspects that result in increased risk of and vulnerability to pandemic social isolation and loneliness among older persons. These include mental health or lowered psychological well-being (Alonzi, La Torre, & Silverstein, 2020;Barber & Kim, 2021); physical health conditions (e.g., cardiovascular disease, cancer, diabetes, obesity, and chronic obstructive pulmonary disease) (Mauvais-Jarvis, 2020; Mitra et al., 2020); and multimorbidity (Wister, 2021a, b;Wong et al., 2020). 
Also at increased risk are individuals who are marginalized as a result of poverty; sexual orientation; race, ethnicity, or culture; immigration status; rural/remote environment; or other vulnerabilities (Alonzi et al., 2020;Wister, 2021a, b). To reduce the spread of COVID-19, governments have implemented public health measures such as physical/social distancing recommendations, closure of non-essential businesses and public spaces, implementation of lockdowns and stay at home orders, mask mandates, travel restrictions, and restrictions on visitors to LTC facilities. Although these measures have resulted in some successes in reducing transmission of COVID-19, concerns have been raised about the potential negative impacts of the prolonged periods of physical/social distancing and reduced social interactions, with specific attention paid to impacts on older adults (e.g., Morrow-Howell et al., 2020;Smith, Steinman, & Casey, 2020). In pandemic research social distancing has been associated with social isolation in community-dwelling older adult populations (Adepoju et al., 2021). Smith et al. (2020) use the term "COVID-19 Social Connectivity Paradox" to refer to the paradox that meaningful interactions and social connections are important for the health of older adults, yet pandemic restrictions require older adults to avoid friends, family, and sources of social support. Indeed, it is well established that social isolation is a common public health concern among community-dwelling older adults, which has been exacerbated by the pandemic, in particular physical/social distancing measures (Shahid et al., 2020). Pre-pandemic literature has demonstrated that social isolation and loneliness among older adults increases morbidity and mortality; reduces the ability to engage in healthy behaviours; increases anxiety, depression, and stress; decreases health-related quality of life, psychological well-being, and happiness; and results in lower access to health care services and lower health care utilization (Burholt et al., 2020;Courtin & Knapp, 2017;Fakoya, McCorry, & Donnelly, 2020;Golden et al., 2009;Kirkland et al., 2015;Leigh-Hunt et al., 2017;National Seniors Council, 2014a, b;Newall, McArthur, & Menec, 2015;Wister, Cosco, Mitchell, Menec, & Fyffe, 2019;Wister, Menec, & Mugford, 2018). Therefore, this article addresses current knowledge about the effects of social isolation and loneliness on community-dwelling older adults during the COVID-19 pandemic through the combination of a scoping review, grey literature scan, and new data, given that the research in this area is in its formative stages. We utilize a spectrum of international academic literature, given the dearth of Canadian literature. To add a Canadian perspective to the article, we also include analysis of data on the prevalence of loneliness among older Canadians from the Canadian Longitudinal Study on Aging (CLSA). Also, because of the fast-paced and dynamic nature of the pandemic, we incorporate a scan of Canadian grey literature on pandemic interventions. Data are presented on the extent of the problem, risk and protective factors (that either increase or decrease the likelihood of social isolation and/or loneliness), effects on older adults, and strategies and interventions that have been recommended or implemented to reduce social isolation and loneliness among older adults during the pandemic. 
Defining Social Isolation and Loneliness Among Older Adults Social isolation is commonly defined as "a lack in quantity and quality of social contacts" and "involves few social contacts and few social roles, as well as the absence of mutually rewarding relationships" (Keefe, Andrew, Fancey, & Hall, 2006, p.1). A concept closely related to social isolation is loneliness "defined as a distressing feeling that accompanies the perception that one's social needs are not being met by the quantity or especially the quality of one's social relationships" (Hawkley & Cacioppo, 2010, p.1). The key distinction between social isolation and loneliness is that social isolation refers to the objective level of social connections, whereas loneliness reflects the perception of being disconnected from others (Courtin & Knapp, 2017). An example of a common instrument to measure loneliness is the UCLA-3 Loneliness Scale (and its longer version), consisting of three subjective questions that participants score on a three-point scale (e.g., "How often do you feel that you lack companionship?") (Hughes, Waite, Hawkley, & Cacioppo, 2004). Social isolation tends to be measured with a broader set of scales. For example, the Abbreviated Lubben Social Network Scale consists of six questions about contact with friends and family (e.g., "How many relatives do you see or hear from at least once a month?") (Lubben et al., 2006). Although loneliness and social isolation often overlap, there can be unique associations; for instance, some older adults may not feel lonely even if they have low levels of social connectedness, and some people who have many social contacts feel lonely. Conceptual Framework Two complementary models are useful in framing social isolation and loneliness among older adults during the pandemic. Socioecological (or socio-environmental) (SE) theory posits that individuals, social systems, and the environment are interrelated and interdependent (Bronfenbrenner,1994;Stokols, 1992;2017). Thus, the SE framework differentiates and connects each of the nested ecological domains (e.g., individual, interpersonal, organizational, neighbourhood, municipal, health regional, provincial, country, and global levels) to understand aging experiences. The SE framework has been applied as a useful conceptual framework in the areas of housing (Lawton, 1980), homelessness (Canham, O'Dea, & Wister, 2019), green spaces and walkability (Chaudhury et al., 2011), healthy public policy (Wister & Speechley, 2015), and recently, COVID-19 (Andrew et al., 2020). For example, in a study of frailty and LTC, Andrew et al. (2020) identify COVID-19 risk, vulnerabilities, and responses at the individual level (e.g., preexisting conditions); family level (e.g., policies limiting physical contact with relatives in LTC); community level (e.g., making public transportation systems less risky to use); and policy level (e.g., increased funding for pandemic response). Figure 1 provides an illustration of how the SE framework can be applied to the issue of social isolation and loneliness during the pandemic. A second framework that directly builds on and extends the SE framework, adding a new dimension to understanding the pandemic, is a complex systems resilience framework that stems from disaster response research (Klasa, Galaitsi, Trump, & Linkov, 2021;Klasa, Galaitsi, Wister, & Linkov, 2021). 
This framework attempts to: (1) link and quantify the different individual and environmental-level spheres of influence observed within the existing SE framework, and (2) apply a resilience lens whereby focus is placed on how and why individuals and systems respond to adversity (see Klasa, Galaitsi, Trump, & Linkov, 2021 and Klasa, Galaitsi, Wister, & Linkov, 2021 for a full description of this model). According to the National Research Council (2012), the ability of a system to adapt to, plan with regard to, recover from, and absorb adversity represents four key resilience processes. These have been applied to structural systems, individuals, and families. This framework has also been applied to COVID-19 (Klasa, Galaitsi, Trump & Linkov, 2021;Linkov, Keenan, & Trump, 2021). First, planning and preparing for adverse or stress-inducing events, such as the COVID-19 pandemic, requires targeted reductions in risk and vulnerabilities. Pandemic planning necessitates an understanding of the characteristics of the COVID-19 pandemic, such as population susceptibility, severity, and behavioural response. Second, mitigation of outcomes associated with the adversity (e.g., social isolation and loneliness due to COVID-19 physical/social distancing policies) is necessary to produce positive resilience or coping responses. The ability of an individual or system to overcome pandemic adversity is a primary component of resilience. Third, recovery relies on various forms of strength-based resilience embedded in the individual, family, community, and structural system levels. Recovery is essential for counteracting the weakening of any system so that it can respond to future adversity such as a COVID-19 variant infection wave or a different pandemic on the horizon. Finally, a complex systems resilience framework focuses attention on fostering the strengths of individuals, families, and communities; for example, some Indigenous reserve communities have utilized strong community connections and leadership to support pandemic mitigation strategies even though health care resources tend to be weaker (National Collaborating Centre for Methods and Tools & National Collaborating Centre for Indigenous Health, 2020). The complex systems resilience framework helps to identify the weaknesses and strengths in nested ecological systems that influence risk and response to social isolation among older adults during the pandemic (Klasa, Galaitsi, Wister & Linkov, 2021;Pearman, Hughes, Smith, & Neupert, 2021;Wister, Klasa & Linkov, 2022). CLSA Data The CLSA is a national, longitudinal study that aims to follow 50,000 Canadians 45 years of age and up for 20 years (see Raina et al., 2009 for more details). In order to estimate the increase in loneliness during the pandemic, we use unique data drawn from three separate waves of the CLSA. CLSA Baseline data (collected 2011-2015; n = 51,338); Follow-up One data (collected 2015-2018; n = 44,817); and data from the CLSA COVID-19 Study (collected April to December, 2020; n = 28,559) are employed. Loneliness is measured in all surveys using an identical measure derived from the single loneliness item from the Center for Epidemiologic Studies (CES)-D depression scale. Participants who reported being lonely some of the time, occasionally, or all of the time were deemed to be lonely (compared with those who reported being lonely rarely/none of the time).
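As a concrete illustration of this dichotomization, the short Julia sketch below collapses responses to a single CES-D-style loneliness item into the binary lonely/not lonely indicator described above; the response labels and coding are illustrative assumptions rather than the CLSA's actual variable coding.

# Illustrative sketch of the dichotomization rule; the response labels are
# assumptions for illustration, not the CLSA codebook.
const LONELY_RESPONSES = Set(["some of the time", "occasionally", "all of the time"])
is_lonely(response) = lowercase(response) in LONELY_RESPONSES
responses = ["rarely or none of the time", "occasionally", "all of the time"]
println(count(is_lonely, responses) / length(responses))   # proportion classified as lonely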
At the time of writing this article, only the COVID-19 Baseline survey data were available, and only in secondary descriptive form on the CLSA Web site (www.clsa-elcv.ca/) (i.e., not available for analysis and/or linkage to the other CLSA surveys). Therefore, the surveys are employed as individual samples showing age-sex patterns in loneliness for two pre-pandemic periods and one early pandemic survey, albeit with different sample sizes as a result of attrition and non-response. Scoping Review Methods The literature review followed the five steps outlined by Arksey and O'Malley (2005) for scoping reviews: (1) identify the research question, (2) identify relevant studies, (3) select relevant studies, (4) chart the data, and (5) collate, summarize, and report the results. The research questions guiding the review were: 1. How has the pandemic impacted patterns of social isolation and loneliness among older adult populations internationally? 2. What effects have experiences of social isolation and loneliness had on older adults during the pandemic? 3. What strategies and interventions have been recommended to reduce the social isolation and loneliness of older adults during the pandemic? 4. What interventions have been implemented to reduce the social isolation and loneliness of older adults during the pandemic? An initial search of English-language literature from academic journals was conducted the week of January 11, 2021, using the search engine Ebscohost. Ebscohost can search multiple databases simultaneously (e.g., AgeLine, Cumulative Index to Nursing and Allied Health Literature [CINAHL], MEDLINE ® , Social Sciences, PsycInfo, Academic Search Premier). Because of the fast-paced and changing nature of the pandemic, two supplementary searches were conducted the weeks of February 15 and March 8. The keywords used in the searches were social isolation OR loneliness, AND older adults (or synonyms) AND COVID-19 (or synonyms). Articles were included in the review if they focused on one of the following subjects: (1) patterns (e.g., prevalence, changes, associated factors, effects) of social isolation or loneliness among older adult populations during the COVID-19 pandemic; or (2) strategies or interventions (recommended or implemented) to reduce social isolation or loneliness among older adult populations during the COVID-19 pandemic. For category (2), non-empirical academic literature (e.g., commentaries, descriptions of programs) were considered for inclusion given the paucity of empirical literature. Articles were excluded if they met any of the following exclusion criteria: (1) not written in English, (2) did not focus on a community-dwelling older adult population (articles that included multiple age groups were retained if there was deemed to be sufficient analysis/findings focusing on older age groups), (3) not related to social isolation and/or loneliness, or (4) did not focus on the COVID-19 pandemic context. As shown in Figure 2, a total of 67 articles were included in this review based on the inclusion and exclusion criteria. Given the expected lag between implementation of interventions during the pandemic and publication of journal articles, a supplementary scan of Canadian grey literature (e.g., newspapers, organizational Web sites, and reports) was undertaken between February and March of 2021 to identify common types of interventions being implemented in Canada during the pandemic. 
Most interventions were identified via news stories through a Google search of the news, using the keywords social isolation OR loneliness AND older adults OR seniors AND Canada. Web sites of organizations serving older adults that were known to the authors were also searched for information on programs (e.g., Healthy Aging CORE BC, A & O). Results Patterns of Loneliness in the CLSA during the Pandemic Table 1 presents descriptive data on percentages of CLSA participants feeling lonely. Using pre-pandemic cross-sectional data at CLSA Baseline (2011-2015), among older women 65-74 years of age we observed that 25 per cent reported feeling lonely, whereas among women 75-84 years of age, a total of 31 per cent reported feeling lonely. For men, the rates were lower for these same age groups, where 18 and 19 per cent reported being lonely, respectively. The rates for the CLSA Follow-up One (2015-2018) were almost identical to Baseline levels, with only men 75-84 years of age showing a slight rise from 19 to 23 per cent being lonely. Turning to the CLSA COVID-19 Study (2020), the absolute percentages increased significantly. Among women 65-74 years of age, 41 per cent reported feeling lonely, with 42 per cent among those 75-84 years of age reporting feeling lonely. For men, the percentages also increased, but were lower than for women. Among men 65-74 and 75-84 years of age, 26 per cent felt lonely at least some of the time. The relative rate increase in loneliness (percentage change between CLSA Baseline and COVID survey time periods, divided by CLSA Baseline percentage) was striking. There was a 67 per cent increase in loneliness for women 65-74 years of age, and a 37 per cent increase for those 75-84 years of age. Smaller increases were observed for men, for whom there was a 45 per cent relative rise for those 65-74 years of age and a 33 per cent relative increase for the oldest group (see Table 1). Patterns of Social Isolation and Loneliness Internationally during the Pandemic The complex systems resilience framework outlines the need to understand the characteristics of a problem, factors that contribute to vulnerabilities and resilience, and the effects of the problem. In the international literature, a total of 38 articles were identified that reported on patterns of social isolation and loneliness during the pandemic (prevalence and changes, factors, effects). Most articles reported on multiple types of patterns: 22 reported on prevalence or changes in patterns of social isolation and/or loneliness during the pandemic, 24 on associated factors, and 14 on the effects of social isolation or loneliness on older adults. A wide variety of instruments were used in the studies to measure social isolation and loneliness, and some of the measurements utilized were unvalidated or differed from typical measurements for the concept (e.g., reporting on social isolation based on a subjective perception). We have reported these findings based on the conceptualization used by the authors. Most studies were cross-sectional (n = 25) or longitudinal (n = 10). There were two hybrid studies that used both cross-sectional and longitudinal data. Only one relevant qualitative study was identified. Prevalence and Changes in Levels of Social Isolation and Loneliness Twenty-two studies reported on patterns of social isolation and/or loneliness among older adult populations during the pandemic.
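As a check on the arithmetic, the short Julia sketch below recomputes the relative increases from the rounded Table 1 percentages quoted above; because the published figures are presumably based on unrounded values, the recomputed numbers only approximate the reported 33-67 per cent range.

# Relative increase in loneliness: (peri-pandemic % - baseline %) / baseline %.
# Inputs are the rounded percentages quoted in the text, so the results are approximate.
relative_increase(baseline, covid) = 100 * (covid - baseline) / baseline
groups = [("Women 65-74", 25, 41), ("Women 75-84", 31, 42), ("Men 65-74", 18, 26), ("Men 75-84", 19, 26)]
for (label, baseline, covid) in groups
    println(label, ": ", round(relative_increase(baseline, covid); digits = 1), " per cent")
end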
The majority originated in the United States (n = 8) or European countries (n = 10) (see Table 2 for an overview of the studies and their findings). Most of the longitudinal studies comparing levels of loneliness pre- and peri-pandemic reported that levels of loneliness increased during the pandemic. Studies from the United States (Krendl & Perry, 2021), Switzerland (Macdonald & Hülür, 2021), the Netherlands (Van Tilburg, Steinmetz, Stolte, van der Roest, & de Vries, 2020), and Hong Kong (Wong et al., 2020) reported greater levels of loneliness peri-pandemic than pre-pandemic. In Chile, no statistically significant change in mean loneliness scores was observed, although there was a small increase in the proportion of older adults classified as lonely by the authors based on a dichotomous classification of loneliness scores. A Swedish study found no differences in loneliness between COVID-19 and pre-pandemic data collection points (Kivi, Hansson, & Bjälkebring, 2021). Additionally, an Austrian study compared data from two different cross-sectional samples and reported an increase in loneliness scores during the pandemic (Heidinger & Richter, 2020). Five longitudinal studies reported data on loneliness collected at multiple points during the pandemic. Kotwal et al. (2021) collected data in the United States over the period from April to June 2020, and levels of severe loneliness ranged from 23 to 36 per cent. Levels of loneliness were highest during the first period of data collection (4-6 weeks after shelter in place orders began). Although levels of loneliness tended to level off over the course of the pandemic, a subgroup of respondents reported persistent or higher rates of loneliness. Luchetti et al. (2020) collected data in the United States between February and April 2020 and similarly found an increase in loneliness that plateaued over time. Another American study by Choi et al. (2021) measured loneliness between April and June 2020, reporting stable levels of loneliness. Stolz et al. (2021) observed a small decline in loneliness levels (from 33 to 27 per cent) between the lockdown phase in Austria (March to April 2020) and re-opening (May to June 2020), although levels of loneliness remained above regular levels. In a study from Spain by Losada-Baltar, Martínez-Huertas, et al. (2021), there was an increase in loneliness scores over time (March to May 2020) for both older adults and younger age groups. The leveling or declining of loneliness during the pandemic in most jurisdictions may be indicative of processes of resilience at the individual, social network, and/or community system levels, because older adults appeared to either find ways to cope with restrictions, or increased supports were provided to them through their social and community networks. Loosening of pandemic restrictions in some jurisdictions may also have stabilized or reduced levels of loneliness. A smaller number of studies reported on levels of social isolation during the pandemic. In the longitudinal study by Kotwal et al. (2021) levels of social isolation ranged from 33 to 49 per cent during the pandemic and were highest during the first periods of data collection. Herrera et al.
(2021) in Chile found that social isolation scores had decreased during the pandemic compared with prepandemic. In a cross-sectional survey, Gaeta and Brydges (2020) reported that 56 per cent of older adults felt isolated; however, this report was based on the subjective perceptions of social isolation among older adults rather than an objective measure, such as levels of contact or network size. In an Australian study, Strutt et al. (2021) reported that 20 per cent of older adults had a limited social network during the pandemic. Additionally, other studies reported that between 43 and 69 per cent of older adults were engaging in self-isolation or strict social distancing (Brown et al., 2021;Emerson, 2020;Kobayashi et al., 2021;Lehtisalo et al., 2021;Röhr et al., 2020). Factors Associated with Social Isolation and Loneliness during the Pandemic Twenty-four studies reported on factors associated with loneliness or social isolation during the pandemic (almost all focused on associations with loneliness, and only a few examined social isolation) (see Table 2). Furthermore, most of the factors identified were risk factors ("risk" means an increase in the likelihood of experiencing loneliness and social isolation based on a particular factor), rather than protective factors. Our findings are organized in the following sections based on the domains of SE model; studies primarily focused on the individual level, with some attention also paid to interpersonal and policy level factors. At the individual level, key factors that were reported in articles included age, gender (binary), subjective perceptions of aging, and health conditions. In the studies reviewed, associations with age and gender were equivocal. Gender (being female) was associated with higher rates of loneliness in some studies (Choi et al., 2021;Röhr et al., 2020;Whatley, Siegel, Schwartz, Silaj, & Castel, 2020;Wong et al., 2020), but not in others (e.g., Cihan & Gökgöz Durmaz, 2021;Parlapani et al., 2020;Polenick et al., 2021). When comparing segments of the older adult population, some studies found that more advanced age was associated with greater loneliness (Cihan & Gökgöz Durmaz, 2021;Lehtisalo et al., 2021), whereas other studies reported the opposite pattern (Choi et al., 2021;Kobayashi et al., 2021). Other studies reported higher levels of social isolation , but not loneliness among older adults compared to younger age groups (Losada-Baltar, , Luchetti et al., 2020Minahan, Falzarano, Yazdani, & Siedlecki, 2021). In a comparison of older and younger age groups Luchetti et al. (2020) found that only older adults experienced a statistically significant increase in levels of loneliness during the pandemic. Older subjective age (Schorr et al. 2021b;Shrira et al., 2020) and negative perceptions about aging (Losada-Baltar, Whatley et al., 2020) were identified as risk factors associated with loneliness. Additionally, multiple chronic conditions and poor health were risk factors for loneliness (Whatley et al., 2020;Wong et al., 2020). Protective factors identified at the individual level included: being a person of color (Choi et al., 2021;Polenick et al., 2021) and individual resilience, based on perceived ability to cope with stress (Röhr et al., 2020). 
Living alone was consistently identified as an interpersonal level risk factor associated with higher rates of loneliness during the pandemic (Choi et al., 2021;Cihan & Gökgöz Durmaz, 2021;Emerson, 2020;Fingerman et al., 2021;Lehtisalo et al., 2021;Parlapani et al., 2020;Stolz et al., 2021;Strutt et al., 2021;Wong et al., 2020). However, a study by Heidinger and Richter (2020) reported a heightening in reported loneliness during the pandemic among older individuals living with others but not in those living alone. Living alone also was associated with not having in-person contact with others during the pandemic . Identified interpersonal protective factors included: social support , living with others Van Tilburg et al., 2020), stronger social networks (Strutt et al., 2021), more social interactions (Macdonald & Hülür, 2021), and satisfaction with communication (Macdonald & Hülür, 2021). At the policy level, several studies investigated the relationship between pandemic restrictions and loneliness. Choi et al. (2021) found the following aspects of social distancing are associated with loneliness: cancelling/postponing social activities and avoiding close contact with people in the household. In a study by Stolz et al. (2021) older adults were asked whether they had been negatively affected by seven types of pandemic restrictions; reporting more negative exposures to pandemic restrictions was associated with higher levels of loneliness. Self-isolating and reduced social interactions were also associated with loneliness (Röhr et al., 2020;Strutt et al., 2021;van Tilburg et al., 2020), but few studies focused exclusively on social isolation. The risk factors associated with loneliness during the pandemic underscore the relevance of the SE domains of influence (particularly at the individual and interpersonal levels). The combination of risk (e.g., living alone) and protective factors (e.g., social support, satisfaction with communication) indicate that interpersonal factors, in particular, likely play a role in fostering resilience to loneliness. The research also suggests that targeted interventions are needed to enhance the resilience and adaptability of older adult populations who are vulnerable to loneliness (e.g., older adults living alone, older adults experiencing multi-morbidity). Effects of Social Isolation and Loneliness on Older Adults during the Pandemic Fourteen studies reported on potential negative effects of social isolation and loneliness on older adults during the pandemic (see Table 2). It is important to acknowledge that most of the studies included in this section are cross-sectional (n = 11); therefore, the directionality of associations cannot be conclusively determined. On the other hand, studies focusing on the effects of social isolation on depression and anxiety during the pandemic have been equivocal. Several studies showed no effects of social isolation on these psycho-social outcomes Kotwal et al., 2021;Minahan et al., 2021). In contrast, a Bangladeshi study identified social isolation as being positively associated with depression (Mistry et al., 2021). In addition, in an analysis of data from 62 countries, Kim and Jung (2021) found that social distancing was associated with COVID-19 related psychological distress. Furthermore, a qualitative study conducted with older adults with cognitive impairments who lived alone found that participants were experiencing significant psychological distress as a result of pandemic isolation (Portacolone et al., 2021). 
Whitehead and Torossian (2021) asked older adults to identify sources of stress in their lives during the pandemic, and loneliness/ isolation was the third most frequently identified stressor. Comparison of the responses of different demographic groups revealed that older women, low-income older adults, and single/widowed older adults ranked loneliness/isolation as their number one stressor. Strategies to Reduce Social Isolation and Loneliness during the Pandemic The complex systems resilience framework identifies the need to respond to pandemic adversity and foster strengths to build resilience. Twenty articles were identified discussing strategies for reducing social isolation and/or loneliness during the pandemic among older adult populations. (See the next section for examples of interventions). Most articles focused on strategies at the individual and interpersonal levels, rather than opportunities for intervening at higher system levels (e.g., changes to policy, communitylevel approaches). Only two articles specifically focused on strategies to address social isolation (Dassieu & Sourial, 2021;Sixsmith, 2020) and three specifically focused on strategies to address on loneliness (Burke, 2020;Conroy, Krishnan, Mittelstaedt, & Patel, 2020;Dahlberg, 2021), while the remaining articles discussed strategies to address social isolation, loneliness, and related concepts in conjunction. Because of the significant overlap in the literature, most of the strategies will be described as potentially having applicability for addressing both social isolation and loneliness, although distinctions are made as appropriate. Technology was frequently discussed in the literature as an essential component for innovative approaches to address social isolation and loneliness and was the core focus of half of the articles reviewed (n = 10). In the literature, technology-focused strategies for addressing social isolation and loneliness were described in multiple domains of the SE model. At the individual level, technology was positioned as a means to build the resilience of older adults by increasing their access to social support, information, and resources during the pandemic. Education and training for older adults on the use of digital technology was emphasized as necessary for technology-based interventions to be successful at reducing social isolation and loneliness (Conroy et al., 2020;Daly et al., 2021;Day, Gould, & Hazelby, 2020;Seifert, Cotten, & Xie, 2021). Peer or intergenerational training programs were suggested as ways to teach older adults how to use digital technologies (Daly et al., 2021;Xie et al., 2020). At the interpersonal level, Smith et al. (2020) contend that older adults need to use technology to engage in "distanced connectivity" during the pandemic. A narrative review by Gorenko, Moran, Flynn, Dobson, and Konnert (2021) identified the following types of remotely delivered interventions as being efficacious and feasible for use during the pandemic: telephone befriending (i.e., older adults are matched with volunteers for regular phone calls), the Senior Centre Without Walls (SCWW) model (i.e., social and educational programs provided virtually or by telephone), and programs that provide training to older adults on how to use the Internet and social media. 
It has also been recommended that digital technology be used to facilitate social visits, group activities, and health promotion interventions during the pandemic (Conroy et al., 2020;Daly et al., 2021;Sepúlveda-Loyola et al., 2020;Smith et al., 2020;Xie et al., 2020). Hajek and König (2020) reviewed the small number of studies evaluating the effectiveness of social media use for reducing social isolation or loneliness among older adults; two studies observed no impacts, while one found lower social isolation scores. Authors also discussed the "digital divide", a meso level challenge that connects the individual and organizational/policy domains of the SE model. The term "digital divide" is used to highlight the challenge that not all older adults have access to or use the Internet and digital technologies, and certain groups are in danger of being further excluded from society because of the increasing reliance on digital technologies during the pandemic (e.g., low-income older adults, people living in rural areas, the oldest age groups, older adults with functional impairments or multi-morbidity) (Conroy et al., 2020;Seifert et al., 2021;Sixsmith, 2020;Smith et al., 2020). Recommended steps to address the challenges posed by the digital divide included: (1) offering some low-tech interventions such as telephone and mail interventions (Conroy et al., 2020;Seifert et al., 2021;Smith et al., 2020), (2) ensuring access to low-cost, high-speed Internet for all older adults (Seifert et al., 2021), and (3) developing technologies that are accessible to older adults with low technology literacy levels and functional impairments (Seifert et al., 2021). In addition to technology-based strategies, a range of additional approaches were discussed in the literature to address social isolation and loneliness. Recommended strategies to build upon the strengths and resilience of individuals included: encouraging participation in more outdoor activities (Dahlberg, 2021;Day et al., 2020;Hwang, Rabheru, Peisah, Reichman, & Ikeda, 2020), culturally and religiously grounded interventions (Giwa, Mullings, & Karki, 2020), and creative arts (Day et al., 2020). Psychological interventions (e.g., one-on-one interventions, cognitive behavioural therapy, meditation, life review therapy) were also recommended specifically to address loneliness (Conroy et al, 2020;Gorenko et al., 2021;Van Orden et al., 2020). In the literature, interpersonal level strategies for addressing social isolation and loneliness focused on providing opportunities for human and non-human companionship. Burke (2020) observed that the pandemic has reduced opportunities for natural intergenerational interactions. Programs that foster intergenerational 208 Laura Kadowaki and Andrew Wister connections were identified as important components of COVID-19 responses (Burke, 2020;Day et al., 2020;Xie et al., 2020). Sepúlveda-Loyola et al. (2020) have also emphasized the importance of staying connected with family. Social robots and pets have been proposed as forms of non-human companionship to reduce both social isolation and loneliness among older adult populations (though some might question the impact they would have on social isolation and whether one can have social interactions and relationships with a nonhuman companion). 
Although currently robots are not advanced enough to act as close friends, they can be used for entertainment and companionship purposes which may be beneficial given the lack of options for in-person contact during the pandemic (Henkel, Čaić, Blaurock, & Okan, 2020;Jecker, 2020). Pets were also highlighted as an important form of non-human companionship by Rauktis and Hoy-Gerlach (2020), who suggest that steps may need to be taken to support older pet owners during the pandemic (e.g., assistance with shopping for pet food). Media reports show that pet adoption increased significantly during the pandemic (e.g., Cotnam, 2020), which may be a result of Canadians seeking to address social needs through non-human companionship. At the organizational system level, it has been observed that given the unique circumstances of the pandemic traditional approaches for prevention and identifying people who are socially isolated or lonely may need to be altered (Dassieu & Sourial, 2021;Smith et al., 2020). For health care organizations, health care professionals were identified as having a role to play in addressing social isolation and loneliness through home visits for assessment and prevention initiatives (Day et al., 2020), the development of social connection plans with older adults (Van Orden et al., 2020), and online group health interventions (Day et al., 2020). Providing volunteering opportunities to older adults may also reduce social isolation and loneliness, while also magnifying the capacity of organizations (Wu, 2020;Xie et al., 2020). Indeed, volunteering became a fulcrum for the successful implementation of many programs that needed to pivot during the pandemic. Interventions Implemented to Reduce Social Isolation and Loneliness during the Pandemic A small number of articles (n = 9) described the development and/or evaluation of interventions that had been implemented during the pandemic to reduce social isolation and loneliness among older adults. Interventions were deemed to be targeting social isolation and loneliness based on consideration of their target populations, descriptions of the programs and program theories, and pre-existing knowledge of interventions to reduce social isolation and loneliness. Six of the studies described interventions implemented in the United States, two described interventions implemented in Israel, and one described interventions implemented in in Canada (discussed subsequently in this article). Whereas the articles on Zoom-based group interventions linked the interventions primarily to the aim of reducing loneliness, the articles on befriending programs and telephone outreach suggest applicability for reducing social isolation and/or loneliness. The interventions primarily targeted individual and interpersonal domains of the SE model. Four of the articles described befriending programs that matched volunteers (most often university students) with isolated/lonely older adults for regular telephone or virtual calls (Dikaios et al., 2020; Joosten-Hagye, Katz, Sivers-Teixeira, & Yonshiro-Cho, 2020; Lewis & Strano-Paul, 2021; Office, Rodenstein, Merchant, Pendergrast, & Lindquist, 2020). The articles suggested that the connections formed through befriending programs can reduce social isolation and/or loneliness or mitigate their negative effects. However, none of the studies directly measured or planned to measure whether the interventions affected levels of social isolation or loneliness. 
A pre-post test evaluation (Joosten-Hagye et al., 2020) and anecdotal evidence from the volunteers (Office et al., 2020;Lewis & Strano-Paul, 2021) suggest that the calls resulted in positive connections between the volunteers and older adults. Two articles described telephone outreach programs in which staff or volunteers conduct check-ins with isolated older adults. One article was a process evaluation of a virtual training program for older adult volunteers in a telephone outreach program (Lee, Fields, Cassidy, & Feinhals, 2021), while the other article reported on a telephone outreach program established by the Health Black Elders Centre for their members (Rorai & Perry, 2020). Anecdotally it was reported that older adults viewed the calls positively and some were able to be referred/connected to needed services. Three articles described Zoom-based group interventions that aimed to serve as a substitute for in-person group activities and provide opportunities for social interaction. Shapira et al. (2021) conducted a pilot randomized controlled trial of a seven-session cognitive behavioural therapy Zoom group (n = 64 intervention group, n = 18 comparison group). The intervention group had a statistically significant decrease in loneliness scores postintervention compared with the comparison group, although further studies are needed. Cohen-Mansfield, Muff, Meschiany, and Lev-Ari (2021) surveyed participants and non-participants in Zoom activities offered by a health rehabilitation company. They found that only 16 per cent of participants specifically participated to relieve loneliness; however, 42 per cent thought that social contact should be a core component of activities. Physical activity, relief from boredom, and loneliness and social interaction needs were the central factors motivating participation in the programs. An article by Zubatsky (2021) describes three group health promotion programs that have proven effective in in-person settings and were transitioned to virtual delivery during the pandemic: A Matter of Balance, Cognitive Stimulation Therapy, and Circle of Friends. The article does not include any evaluation of the virtual versions of these programs, but presumably programs that have proven effective in person could be effective in virtual settings provided they were appropriately retrofitted. Further research to confirm the effectiveness of these programs in virtual settings would be beneficial. Although only one Canadian academic article was found, a supplementary scan of the Canadian grey literature identified seven common types of programs that were being implemented/ utilized to reduce social isolation and loneliness among the older adult population during the pandemic: befriending programs, telephone help and information lines, telephone outreach, practical assistance, Senior Centre Without Walls, remote health promotion and wellness programs, and technology access and training programs. Table 3 describes the type of program, anecdotal evidence on benefits or demand for the program during the pandemic, and a selection of Canadian program examples. Many programs identified were being delivered by non-profit and voluntary organizations (e.g., see Hannah, 2020;Campbell, 2020;A & O, n.d.), but programs were also delivered by other groups such as health care organizations (e.g., Sault Area Hospital, n.d.) and student groups (e.g., Parsons, 2020). Volunteers usually played a role in befriending, practical assistance, telephone outreach, and telephone line programs. 
The review of the grey literature also reveals the strong reliance on both low-tech (e.g., telephone) and high-tech (e.g., Zoom) interventions. Anecdotally, evidence suggests high demand for most of these programs during the pandemic as well as perceptions by service providers or older adults of positive impacts (see Table 3), although further research is required to determine whether they are reducing social isolation and loneliness. Table 3. Canadian examples of programs to reduce social isolation and loneliness among older adults (excerpt; columns: description of program type, examples of anecdotal evidence, Canadian examples). Befriending programs: volunteers are matched with isolated older adults and engage in regular in-person or remote (virtual or over the telephone) visits; anecdotally, it has been reported that these programs have formed positive connections between older adults and volunteers, and high demand has led to the expansion or extension of befriending programs in many communities during the pandemic (Campbell, 2020; Lyall, 2021; Parsons, 2020); examples include the Student-Senior Isolation Prevention Partnership (Parsons, 2020), Art of Conversation (Malbeuf, 2021), Keep in Touch (Campbell, 2020), and Community Connects (Lyall, 2021). Telephone help and information lines: older adults can call the line and receive emotional support and have a friendly chat with the operator, and can also be provided with referrals or information on services, including those that can alleviate social isolation or loneliness; anecdotal evidence suggests rising call volumes to these lines during the pandemic (Ireland, 2020; Szperling, 2020; United Way Centraide Canada, 2020); examples include A Friendly Voice (Szperling, 2020) and the Toronto Seniors Helpline (Ireland, 2020). Practical assistance: there has been a high demand for practical assistance services during the pandemic (Hannah, 2020; Zillich, 2020). Remote health promotion and wellness programs: health promotion and wellness programs such as caregiver support groups, physical activity programs, and adult day programs have transitioned to remote delivery during the pandemic; facilitating connections between participants is usually an objective of the programs, and many of these new remote health promotion programs are based on pre-pandemic in-person/remote interventions that had evidence suggesting their effectiveness at reducing social isolation or loneliness. As will be described in the Discussion, pre-pandemic literature provides evidence of the effectiveness of some of these programs at reducing loneliness or social isolation. Discussion Data from the CLSA have revealed significant increases in levels of loneliness among Canadian older adults during the pandemic compared with pre-pandemic times. These findings are in line with the findings from international longitudinal studies (e.g., Krendl & Perry, 2021;Macdonald & Hülür, 2021;Van Tilburg et al., 2020;Wong et al., 2020), although the levels of increase in loneliness vary considerably. Two notable outliers were Chile (mixed results) and Sweden (no significant differences in levels of loneliness). The findings from Sweden are likely a result of the unique "herd immunity" approach Sweden has adopted during the pandemic, and the lack of measures in place limiting socialization and gatherings (Claeson & Hanson, 2021). The reasons for the mixed results in Chile are less clear given that COVID-19 restrictions have been in place; the authors suggest that high rates of smartphone use and close contact with family members may have influenced these results.
Longitudinal studies conducted peri-pandemic generally suggested that loneliness levels have been relatively stable during the pandemic, often with a spike during the initial implementation of lock-down/stay-at-home orders, followed by a levelling off. This is possibly indicative of processes of resilience among older adults, in which past experiences of adversity, greater access to support systems, and/or changing perceptions of social connections enhanced their ability to cope with pandemic mitigation policies and other pandemic-related constraints. For example, Igarashi et al. (2021) found that 93 per cent of their sample of older adults described experiencing vulnerabilities directly linked to the pandemic; yet, approximately two thirds identified positive responses to these adversities. Moreover, despite reporting pandemic-related isolation and challenges maintaining interpersonal relationships, older adults described the deepening of pre-existing relationships and increased appreciation for these relationships. Living alone emerged as the most consistent risk factor associated with loneliness in the pandemic literature. This is unsurprising given that COVID-19 restrictions in many jurisdictions prohibit/ limit social interactions with people from outside of the household. A review of pre-pandemic literature also found living alone was associated with loneliness (Cohen-Mansfield et al., 2016). Given the paucity of research in the pandemic literature that examined other vulnerabilities underlying social isolation and loneliness, we can only speculate that there are likely a myriad additional risk factors at the micro, meso, and macro levels of influence based on well-established pre-pandemic research in this field. Strong associations emerged between loneliness and depression in the pandemic literature. On the other hand, most studies did not find a relationship between social isolation and depression. Prepandemic literature has frequently reported associations between loneliness and depression and between social isolation and depression (Donovan & Blazer, 2020). However, results for social isolation have varied depending on the measurements used. Schwarzbach Luppa, Forstmeier, König, and Riedel-Heller (2014) found the strongest evidence of an association for the following types of measures of social isolation: social support, quality of relations, and presence of confidantes. A limitation of the current research on patterns of social isolation and loneliness is that most studies identified only reported on social isolation and loneliness patterns during the early stages of the pandemic (i.e., March to June 2020), when COVID-19 was newly emerging and lockdown measures were particularly stringent. Additionally, studies tended to focus on patterns of loneliness and only a small number reported on patterns of social isolation. In addition, different measures of social isolation and loneliness were utilized ranging from validated scales to single-item proxies or questions on pandemic experiences. Differences in populations and pandemic restrictions and progression in jurisdictions make comparisons tenuous. Further research is required to understand how levels of social isolation and loneliness have been impacted over the long term and whether higher levels of loneliness will persist as the pandemic continues to change over time. The literature has strongly emphasized "pandemic age-friendly" approaches to reducing social isolation and loneliness that rely on digital technology. 
Although digital technology is an essential component of the lives of most Canadians, pre-pandemic data suggest that one third of older Canadians do not use the Internet (Davidson & Schimmele, 2019). Rates of Internet use are particularly low for the 80 and up age group, with only two in five using the Internet (Davidson & Schimmele, 2019). To overcome this digital divide, efforts are needed to ensure that all communities have broad-band WiFi, and that all Canadians have access to low-cost high-speed Internet in their homes, as well as digital technology training and education if needed. An additional caveat that has been provided in the literature about digital technologies and other technological forms of companionship (e.g., robots) is that they can not fully replace the need for in-person contact (Dahlberg, 2021;Henkel et al., 2020;Jecker, 2020;Sixsmith, 2020). Dahlberg (2021) also observes that in the literature there has been less focus on nontechnological options such as outdoor activities and promoting neighbourliness and community. To date, few of the interventions implemented/utilized to reduce social isolation and loneliness among older adult populations during the pandemic have been the subject of formal evaluations. Pre-pandemic literature provides evidence supporting the effectiveness of some of the identified interventions. The most robust body of evidence has been connected to digital technology interventions. Reviews of the impacts of digital technology use on older adults suggest it has positive impacts on aspects of social isolation (e.g., increasing contact with family, intergenerational relationships). However, evidence specifically on the effects of digital technology interventions on loneliness has been equivocal (Chen & Schulz, 2016;Damant, Knapp, Freddolino, & Lombard, 2017;Ibarra, Baez, Cernuzzi, & Casati, 2020). Furthermore, there is a paucity of literature evaluating other types of interventions. Evaluations of virtual programs for older adults conducted prior to the pandemic suggest that they can reduce social isolation (Botner, 2018;Gorenko et al., 2021). Some pre-pandemic evidence also exists on the effectiveness of Senior Centre Without Walls programs , telephone helplines (Preston & Moore, 2019), and practical assistance programs (i.e., meal and grocery delivery) (Thomas, Akobundu, & Dosa, 2016;Wright, Vance, Sudduth, & Epps, 2015) at reducing social isolation and/or loneliness among older adult populations. One can speculate that programs with evidence of efficacy and effectiveness pre-pandemic would also be supported during the pandemic, if they have been retrofitted to address the inherent context and constraints of the pandemic environment. Our examination of the patterns, risks, and responses to social isolation and loneliness among older individuals reveals complex systems of vulnerability and resilience occurring within the spheres of influence identified in the SE model (Bronfenbrenner, 1994;Stokols, 1992;2017). The SE model affords investigation into the larger policy framework that can both create social isolation and loneliness resulting from pandemic mitigation, while also offering insight into opportunities for intervention at different levels of the SE framework. It also points to the need to consider meso-level inequalities such as the digital divide, and the microlevel adjustments that individuals make in response to the pandemic. 
Although the literature explored micro-level (e.g., need for digital technology training), meso-level (e.g., digital divide), and macro-level (e.g., policy on high-speed Internet access) considerations for digital technology interventions, other potential interventions have not been afforded the same multi-level analysis. Future research should engage in multi-level analysis of strategies to reduce social isolation and loneliness among older adult populations, as well as identifying potential disparities in access to interventions (e.g., digital technology) for marginalized groups. Furthermore, the use of a resilience conceptualization elucidates how some individuals and groups adapt and respond positively to the adversities of the pandemic better than others (Klasa, Galaitsi, Trump, & Linkov, 2021;Klasa, Galaitsi, Wister, & Linkov, 2021;Wister & Speechley, 2020). For example, initial research suggests that older persons who exhibited proactive coping during the early waves of the pandemic were able to reduce the level of pandemic stress and improve psychological well-being (Pearman et al., 2021;Whitehead, 2021). Although this work is in its infancy, a strengthbased approach suggests that by identifying positive adaptations and responses to pandemic adversities, older individuals can leverage pre-existing strengths and innovative interventions can be developed that reinforce and enhance resilience. In this review, the findings on protective factors for social isolation and loneliness were limited, with the exception of several interpersonal protective elements (e.g., satisfaction with communication, social support). Further research would benefit from a greater focus on the strengths and resilience of older adults in the face of adverse circumstances (Wister et al., 2022). Several limitations of this review should be noted. First, because of the recency of the pandemic and the lag between interventions being implemented and publication of evaluation results, this review may not capture the full picture of interventions and strategies being used to reduce social isolation and loneliness among older adult populations during the pandemic. The grey literature scan that was conducted attempted to address this deficit. Second, almost all of the academic literature identified were from countries other than Canada. As was illustrated with the example of Sweden, pandemic restrictions and mitigation strategies vary by jurisdiction. As a result, caution should be used when generalizing COVID-19 research from other jurisdictions to Canada. Third, the study designs, measures of social isolation and loneliness used, and populations or sub-populations under study varied and may explain some of the inconsistencies in findings. Conclusion Analysis of data from the CLSA, the largest representative longitudinal study of aging in Canada, has revealed striking increases in levels of loneliness among older Canadians. Review of international literature suggests that many other jurisdictions are experiencing significant increases in loneliness among older adult populations during the COVID-19 pandemic as well. To date, literature has primarily discussed and emphasized the use of technology-based interventions to reduce social isolation and loneliness. However, as has been noted in the literature, a "digital divide" exists and not all older adults have access to digital technology or use the Internet. 
Low-tech solutions, including using telephones and volunteers to meet basic needs during lock-down phases of the pandemic, also show promise. Researchers should focus on exploring the wider array of pandemic age-friendly interventions (e.g., outdoor activities, intergenerational programs, and other outreach approaches) that may be useful for reducing social isolation and loneliness among older adult populations. Furthermore, this review has exposed the lack of evaluation of interventions to reduce social isolation and loneliness even pre-pandemic, and therefore, a greater focus on evaluating such interventions is needed moving forward. Advancement of knowledge of the risk, response, and resilience embedded in the current pandemic will help us to understand the larger processes underlying these issues and to prepare for future forms of adversity facing societies.
2022-11-09T06:16:57.274Z
2022-11-08T00:00:00.000
{ "year": 2022, "sha1": "76f5370c2905928068dbdacc195d141b6adf2d00", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/4E40FDB341D4B90844927D658D5F7E4C/S0714980822000459a.pdf/div-class-title-older-adults-and-social-isolation-and-loneliness-during-the-covid-19-pandemic-an-integrated-review-of-patterns-effects-and-interventions-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "7233cad1921b874e64601d4b6365b7bd9152d155", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
234763237
pes2o/s2orc
v3-fos-license
Yet another eigenvalue algorithm for solving polynomial systems In latest years, several advancements have been made in symbolic-numerical eigenvalue techniques for solving polynomial systems. In this article, we add to this list. We design an algorithm which solves systems with isolated solutions reliably and efficiently. In overdetermined cases, it reduces the task to an eigenvalue problem in a simpler and considerably faster way than in previous methods, and it can outperform the homotopy continuation approach. We provide many examples and an implementation in the proof-of-concept Julia package EigenvalueSolver.jl. Introduction Polynomial systems arise in many areas of applied science [42,20]. This paper is concerned with solving such systems of equations using numerical computations, that is, using finite precision, floating point arithmetic. Two important classes of numerical algorithms are algebraic algorithms [30,36] and homotopy continuation methods [16,42]. See [20,Ch. 2] for an overview. In this work, we focus on algorithms of the former type. Algebraic algorithms are also called eigenvalue algorithms. They consist of two steps. Step (A) uses linear algebra operations to reduce the problem to an eigenvalue problem or univariate polynomial root finding problem. Step (B) is to solve the eigenvalue or univariate root finding problem using numerical tools. Classical examples include Gröbner basis and resultant algorithms, see [23,Ch. 2] or [6]. These use symbolic manipulations for step (A), pushing the numerical linear algebra back to the eigenvalue computation in step (B). The reason for this is that, when performed in finite precision arithmetic, these approaches are numerically unstable for step (A), see for instance [35]. Border basis methods have been developed to remedy this unstable behaviour [39,43] and variants based on nullspace computations were introduced in [25]. Methods for performing step (B) are based on linear algebra [19] or, recently, on multilinear algebra [49]. Two special types of structured matrices play a central role in algebraic algorithms: Macaulay (or Sylvester) matrices and multiplication matrices. Macaulay matrices have a sparse, quasi-Toeplitz structure. They contain the coefficients of the equations and are manipulated in step (A). The result of these manipulations is a set of multiplication matrices. These are structured in the sense that they commute. Multiplication matrices represent multiplication operators in the coordinate ring of the solution set [23,Ch. 5] and their eigenstructure reveals the coordinates of the solutions [22,Ch. 2]. As the Macaulay matrices are typically much larger than multiplication matrices, step (A) determines the running time of the algorithm. This motivates the efforts in active research, including the present paper, to design algorithms which use smaller Macaulay matrices. In practice, to construct multiplication matrices from Macaulay matrices, we need to choose a basis for the aforementioned coordinate ring. The numerical stability of step (A) strongly depends on this choice [47]. Gröbner and border basis methods use bases corresponding to special sets of monomials. For instance, they require these monomials to come from a monomial ordering [23,Ch. 2,§2] or to be 'connected-to-1' [39]. Recent developments showed that numerical linear algebra heuristics can be applied to choose bases that improve the accuracy substantially [47]. 
This has lead to the development of truncated normal forms [46], which use much more general bases of monomials coming from QR factorizations with optimal column pivoting, or non-monomial bases coming from singular value decompositions or Chebyshev representations [40]. Other than making a good choice of basis, in order to stabilize algebraic algorithms it is necessary to take solutions at infinity into account. Loosely speaking, a polynomial system has solutions at infinity if the slightest random perturbation of the nonzero coefficients introduces new solutions with large coordinates. This is best understood in the language of toric geometry [24]. Situations in which there are finitely many solutions at infinity (see Assumption 1) can be handled by introducing an extra randomization in the algorithm, which was first used in [44,11]. Where classically the multiplication matrices represent 'multiplication with a polynomial g', the multiplication matrices in these papers represent 'multiplication with a rational function g/ f 0 ', where f 0 is a random polynomial that does not vanish at any of the solutions to the system. For details and a geometric interpretation, we refer to [44,11]. We will use a similar approach in this paper. In terms of our results, choosing the denominator f 0 randomly is essential in cases where the conditions in Lemma 2.1 are not satisfied for f 0 = 1, while they are for a generic f 0 (Example 3). We summarize the contributions of the present paper. First, we adapt the eigenvalue theorem [22,Ch. 2,Thm. 4.5] to reduce the problem of solving polynomial systems to the computation of eigenvalues. Our new version allows to compute solutions from matrices that need not represent classical multiplication operators; see Example 5. We propose an easy-to-state and easy-to-verify criterion for Macaulay matrices to be 'large enough' for constructing such matrices (Lemma 2.1). Moreover, we identify a broad class of overdetermined polynomial systems, namely semi-regular unmixed systems, for which these Macaulay matrices are much smaller than those in classical algorithms, e.g. [27,38]. We distil these new insights, together with the recent advances in numerical eigenvalue algorithms explained above, into an algorithm (Algorithm 2). We introduce the notion of admissible tuples (Definition 2.1), which parametrize Macaulay matrices satisfying our criterion from Lemma 2.1 and show how to construct such tuples for structured systems of equations. Additionally, we adapt [40,Sec. 4] to obtain an algorithm for computing smaller admissible tuples for overdetermined, unmixed systems (Algorithm 3). We provide a Julia implementation of our algorithms, available online at https://github.com/simontelen/JuliaEigenvalueSolver. Our experiments show the efficiency and accuracy of this package. They contain a comparison with the state-of-the-art Julia package HomotopyContinuation.jl [14]. We show that our eigenvalue methods are competitive, and in strongly overdetermined cases, they are considerably faster. To make the paper accessible to a wide audience, we state most of our results and proofs using only terminology from linear algebra. For results that require more background in algebraic (and in particular toric) geometry, we sketch proofs and provide full references. The paper is organized as follows. In Section 2, we introduce our adapted eigenvalue theorem, admissible tuples and our algorithm. 
In Section 3, we present constructions for admissible tuples for different families of polynomial systems. Finally, in Section 4, we demonstrate the effectiveness of our algorithms through extensive numerical experimentation. Our computations are done using the Julia package EigenvalueSolver.jl. The algorithm In this section, we present a symbolic-numerical algorithm to solve polynomial systems (Algorithm 2). We show that the solutions of the system can be obtained from the eigenvalues of certain matrices M g defined in (2.2). For some choice of input for Algorithm 2, these matrices represent multiplication operators, see Remark 2.3. In this case, the results of this section are well-known, e.g. [37]. However, in general, our matrices M g may not have this interpretation. This is illustrated in Example 5. The upshot in these cases is that they can be computed more efficiently. Consider the polynomial ring R := C[x 1 , . . ., x n ] and a tuple of s polynomials F := ( f 1 , . . ., f s ) ∈ R s , with s ≥ n. Our aim in this section is to present an algorithm for solving the system of equations F (x) = 0, where we use the short notation x for (x 1 , . . . , We say that α is the exponent of the monomial x α . In what follows, we write each polynomial f i as where c i,α ∈ C are the coefficients of f i and finitely many of them are nonzero. We define the support A i of f i as the set of exponents α ∈ N n corresponding to non-zero coefficients c i,α ∈ C, Given two subsets E 1 , E 2 ⊂ N n , we denote by E 1 + E 2 the Minkowski sum of E 1 , E 2 , that is, For a finite set of exponents E ⊂ N n , we write R E for the subvector space of R spanned by the monomials with exponent in E. That is, Observe that, given g 1 ∈ R E 1 and g 2 ∈ R E 2 , we have that Consider a tuple of s finite sets of exponents E := (E 1 , . . . , E s ), where E i ⊂ N n , and another finite set of exponents D ⊂ N n such that for every i ∈ {1, . . ., s}, D contains the exponents in A i +E i . An essential ingredient for our eigenvalue algorithm is the Sylvester map This is a linear map between finite dimensional vector spaces, so we can represent it by a matrix Matrices obtained by using the standard monomial bases for the vector spaces R E i and R D in this representation are often called Macaulay matrices. We index the rows of the matrix with the exponents belonging to D and the columns with pairs Observe that this coefficient might be zero. The ordering of the exponents is of no importance in the scope of this work. We will therefore not specify it and assume that some ordering is fixed for all tuples A i , E i , D throughout the paper. Example 1. To avoid subscripts, we replace the variables x 1 and x 2 by x and y, respectively. Consider the sets of exponents A 1 , . . . , A 3 and the system F := ( f 1 , f 2 , f 3 ) given by We construct the Macaulay matrix M(F , E; D), where E := (E 1 , E 2 , E 3 ) and Remark 2.1. Given ζ ∈ C n and a finite subset E ⊂ N n , we denote by ζ E the row vector The vector obtained by the product . Moreover, if ζ ∈ C n is such that ζ E i = 0 for all i, the opposite implication also holds. This is the case, for instance, for any solution ζ ∈ (C \ {0}) n . We Example 2 (Cont.). The system F has one solution (−1, 1) ∈ C 2 . The vector belongs to the cokernel of M(F , E; D). 
Moreover, we have that HF(F , E; D) = 2 and For each polynomial f 0 ∈ R A 0 , we define the matrix N f 0 as (2.1) Observe that N f 0 ∈ C HF(F,E;D)×#E 0 and the columns of N f 0 are indexed by the exponents in E 0 (more precisely, by the pairs (0, α) for each α ∈ E 0 ). Moreover, in that case, for every solution ζ ∈ C n of F such that the vector ζ D is non-zero, we have f 0 (ζ ) = 0. Proof. The ⇒ direction of the first statement follows directly from For the ⇐ direction, suppose that N f 0 has rank HF(F , E; D). Then Coker( f 0 , E 0 ; D)∩Coker(F , E; D) = {0}, which implies that the cokernel of M(( f 0 , F ), (E 0 , E); D) is trivial, and hence it has rank #D. The second statement follows from the fact that M(( f 0 , F ), (E 0 , E); D) has trivial cokernel; as ζ D is a non-zero vector, if f 0 (ζ ) = 0, by Remark 2.1, ζ D belongs to the cokernel of Example 3 (Cont.). We consider the sets of exponents A 0 , E 0 and the polynomial f 0 given by In this case, we have Observe that, even though 1 ∈ R A 0 , the matrix N 1 is not full-rank. △ In what follows, we say that a property holds for generic points of a vector space if it holds for all points not contained in a subset of Lebesgue measure zero. Note that N f 0 is a matrix whose entries depend linearly on the coefficients of f 0 . This means that if there exists f 0 ∈ R A 0 such that N f 0 has rank HF(F , E; D), then rank(N h ) = HF(F , E; D) for generic elements h ∈ R A 0 . Below, we assume that there is f 0 ∈ R A 0 such that rank(N f 0 ) = HF(F , E; D) and we fix such an f 0 ∈ R A 0 . This assumption is very mild and given F , A 0 , E, D, it is easy to check if it holds. For ease of notation, we will write γ := HF(F , E; D). Given a set of exponents B ⊂ E 0 , we define the submatrix N f 0 ,B = Coker(F , E; D) · M( f 0 , B; D) ∈ C γ×#B of N f 0 consisting of its columns indexed by B. We fix B ⊂ E 0 of cardinality γ such that N f 0 ,B ∈ C γ×γ is invertible. For each g ∈ R A 0 , we define the matrix M g ∈ C γ×γ , defined as Example 4 (Cont.). We fix the basis B = {1, x} and the matrix N f 0 ,B = −1 1 0 −1 . Then, for g = −1 + 3 x + 2 y, we have Moreover, M f 0 is the identity matrix. A key observation is that we can solve the system of equations F (x) = 0 by computing the eigenstructure of these matrices M g , for g ∈ R A 0 . For that, we adapt the classical eigenvalue theorem from computational algebraic geometry (see Remark 2.3). We say that a non-zero row vector v is a left eigenvector of a matrix M with corresponding eigenvalue λ if it satisfies v · M = λ v. Theorem 2.1 (Eigenvalue theorem). Using the notation introduced above, consider a polynomial system F and a polynomial f 0 such that N f 0 has full-rank; see Equation (2.1). For each solution ζ ∈ C n of F such that ζ D = 0, M g from (2.2) has a left eigenvector v ζ such that v ζ · Coker(F , E; D) = ζ D . The corresponding eigenvalue is g f 0 (ζ ). Conversely, if v is a left eigenvector of M g such that v · Coker(F , E; D) is proportional to ζ D = 0 for some ζ ∈ C n such that ζ E i = 0 for all i, then ζ is a solution of F . Moreover, the corresponding eigenvalue of M g is g f 0 (ζ ). Remark 2.3. In some cases, the previous theorem can be derived from the classical eigenvalue theorem from computational algebraic geometry, where M g represents the multiplication map Here F ⊂ R is the ideal generated by f 1 , . . . , f s . As we will see (Example 5), this is not always the case. 
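Before turning to the proof of Theorem 2.1, the linear algebra pipeline behind it can be sketched with standard tools: build a Macaulay matrix, compute a basis of its cokernel, form N_{f_0}, select an invertible and well-conditioned submatrix N_{f_0,B}, and assemble M_g = N_{g,B} N_{f_0,B}^{-1}. The Julia fragment below is only an illustration of these steps; the encoding of polynomials as dictionaries of exponent-coefficient pairs and the names macaulay and build_Mg are ours and not the API of EigenvalueSolver.jl.

using LinearAlgebra

# A polynomial is encoded as a dictionary  exponent tuple => coefficient.
# Rows of the Macaulay matrix are indexed by the exponents in D and columns by
# the pairs (i, beta) with beta in E[i]; column (i, beta) holds the coefficients
# of the shifted polynomial x^beta * f_i in the monomial basis of R_D.
function macaulay(F::Vector{<:Dict}, E::Vector, D::Vector)
    row = Dict(alpha => r for (r, alpha) in enumerate(D))
    M = zeros(ComplexF64, length(D), sum(length.(E)))
    col = 0
    for (i, f) in enumerate(F), beta in E[i]
        col += 1
        for (alpha, c) in f
            M[row[alpha .+ beta], col] = c
        end
    end
    return M
end

# Given M(F, E; D) and the one-block matrices M(f0, E0; D) and M(g, E0; D),
# assemble M_g = N_{g,B} * inv(N_{f0,B}); the columns B are selected by pivoted
# QR so that N_{f0,B} is well conditioned, as discussed further on in the paper.
function build_Mg(MF, Mf0, Mg_mac)
    C = transpose(nullspace(transpose(MF)))      # rows span Coker(F, E; D)
    Nf0, Ng = C * Mf0, C * Mg_mac
    gamma = size(C, 1)                           # gamma = HF(F, E; D)
    B = qr(Nf0, ColumnNorm()).p[1:gamma]         # well-conditioned column subset
    return Ng[:, B] / Nf0[:, B]                  # solves X * N_{f0,B} = N_{g,B}
end

By Theorem 2.1, for every solution ζ with ζ^D ≠ 0 the value (g/f_0)(ζ) appears among the eigenvalues of build_Mg(MF, Mf0, Mg_mac), so step (B) reduces to a call to eigvals.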
In the context of computer algebra, the eigenvalue theorem was introduced in [37] (eigenvalues) and in [1] (eigenvectors). For a historic overview and a proof in terms of matrices, see [21] and [27] respectively. To prove Theorem 2.1, we need two auxiliary lemmas. Lemma 2.2. Let ζ ∈ C n be a solution of F such that ζ D = 0 and let g ∈ R A 0 be such that g(ζ ) = 0. Then, the matrix M g is singular. Proof. By Remark 2.1, ζ D belongs to the cokernel of M(F , E; D), hence there is a row vector v ζ ∈ C γ \ {0} such that v ζ · Coker(F , E; D) = ζ D . Moreover, ζ D belongs to the cokernel of M(g, E 0 ; D). Hence, v ζ · N g = 0 and so v ζ · N g,B = 0. where the last line uses ζ B = 0. Since N f 0 ,B is invertible, this implies v ζ = 0, and thus ζ D = 0. Proof of Theorem 2.1. The proof is based on the following observations. By Remark 2.2, the eigenvalues of M g correspond to the values λ ∈ C such that M g − λ id = M g−λ f 0 is singular. By Lemma 2.1, we have f 0 (ζ ) = 0 for each solution ζ of F . Let ζ be a solution of F . If λ = g f 0 (ζ ), the polynomial g − λ f 0 ∈ R A 0 vanishes at ζ . As by assumption ζ D = 0, from Lemma 2.2 we deduce that M g−λ f 0 is singular. Therefore, g f 0 (ζ ) is an eigenvalue of M g . For the associated left eigenvector, let v ζ be as in the proof of Lemma 2.2. We have v ζ ·N g−λ f 0 ,B = v ζ ·(N g,B −λ N f 0 ,B ) = 0 and multiplying from the right by N −1 f 0 ,B gives v ζ · (M g − λ id) = 0. Conversely, suppose that v is a left eigenvector of M g such that v · Coker(F , E; D) = ζ D = 0 for some ζ ∈ C n (we may assume equality after scaling). By Remark 2.1, under the assumption ζ E i = 0 for all i, ζ is a solution of F , see Remark 2.1. We now compute the corresponding eigenvalue. By definition, v · (M g − λ id) = 0 for some λ . Multiplying from the right by N f 0 ,B we see that v · N g−λ f 0 ,B = (v · Coker(F , E; D)) · M(g − λ f 0 , B, D) = ζ D · M(g − λ f 0 , B, D) = 0. By Lemma 2.3, ζ B = 0 and since f 0 (ζ ) = 0 (Lemma 2.1) we conclude λ = g f 0 (ζ ). Since it is also dense by hypothesis, we conclude that W = R A 0 as in the proof of Proposition 2.1. Example 7 (Cont.). Instead of the basis B fixed in Example 4, in what follows we consider . This way, we obtain the following matrices: Suppose that this eigenspace is spanned by the rows of the matrix V λ g . By Proposition 2.1, we simply need to check which elements in the row span of V λ g are also eigenvectors of M h , for a random element h ∈ R A 0 . Proposition 2.2 guarantees that these eigenvectors, if they exist, belong to a unique eigenvalue of M h . This is summarized in Algorithm 1. In line 8 of the algorithm, we solve the generalized eigenvalue problem (GEP) given by the pencil (B 1 , , that is, we compute all eigenvalues µ i and a basis for the left eigenspace In line 9, we select (if possible) the unique eigenvalue µ i whose corresponding eigenspace C i gives the desired intersection V = C i ·V λ g . Proposition 2.2 also has the following direct corollary. Corollary 2.1. Let λ g be an eigenvalue of M g and let V ∈ C m×γ be a matrix whose rows are a basis for the left eigenspace of M g corresponding to λ g , intersected with the eigenvectors of M. 
If g is generic, there is exactly one tuple (λ α ) α∈A 0 such that Algorithm 1 GETEIGENSPACE Input: An eigenvalue λ g of M g for generic g ∈ R A 0 , a matrix V λ g of size m × γ whose rows contain a basis for the corresponding eigenspace and the matrix M h for a generic h ∈ R A 0 Output: A matrix V whose rows are a basis for the intersection of the row span of V λ g with the eigenvectors of M. if C is empty then 11: Proposition 2.3 (Criterion for eigenvalues). Let λ g be an eigenvalue of M g . If λ g = g f 0 (ζ ) for some solution ζ ∈ C n of F satisfying ζ D = 0, then the tuple (λ α ) α∈A 0 from Corollary 2.1 satisfies Proof. Let V ∈ C m×γ be a matrix whose rows are a basis for the left eigenspace of M g corresponding to λ g intersected with the eigenvectors of M. If λ g = g f 0 (ζ ) for some solution ζ ∈ C n of F satisfying ζ D = 0, then by Theorem 2.1 we know that v ζ is a corresponding eigenvector of M. The results discussed above suggest several ways of extracting the coordinates of a solution ζ ∈ C n of F (x) = 0 form the eigenstructure of the matrices M g . Both the eigenvectors (Theorem 2.1) and the eigenvalues (Proposition 2.3) reveal vectors of the form ζ A for some set of exponents A ⊂ N n . We now recall how to compute the coordinates of ζ from the vector ζ A and discuss the assumptions that we need on A in order to be able to do this. For any subset A ⊂ N n , we write If A = {α 1 , . . ., α k } with α 1 = 0 and the condition ZA = Z n is satisfied, then for ℓ = 1, . . . , n, there exist integers m 2,ℓ , . . ., m k,ℓ such that m 2,ℓ α 2 + · · · + m k,ℓ α k = e ℓ , where e ℓ is the ℓ-th standard basis vector of Z n . These integers m j,ℓ can be computed, for instance, using the Smith normal form of an integer matrix whose columns are the elements of A. If this is the case, from ζ A = (ζ α 1 , . . . , ζ α k ) we can compute the ℓ-th coordinate ζ ℓ of ζ as This approach can be used to compute the coordinates of ζ ∈ (C\{0}) n from ζ A , i.e. all points with all non-zero coordinates. Note that some of the m j,ℓ may be negative, which may be problematic in the case where ζ has zero coordinates. If the stronger condition NA 0 = N n is satisfied (this implies e ℓ ∈ A, ℓ = 1, . . . , n), then the integers m j,ℓ can be taken non-negative and we can obtain the coordinates of all points ζ in C n from ζ A . We will continue under the assumption that we are mostly interested in computing points in (C \ {0}) n , as this is commonly assumed in a sparse setting. However, solutions in C n can be computed by replacing ZA = Z n in what follows by the stronger assumption NA = N n . Note that if ZA = Z n , the outlined approach suggests a way of checking whether or not a vector q ∈ C #A with q α 1 = 1 is of the form ζ A for some ζ ∈ (C \ {0}) n . Indeed, one computes the coordinates ζ ℓ = ∏ k j=2 (q α j ) m j,ℓ and checks whether ζ A = q. We turn to the eigenvalue method for extracting the roots ζ from the matrices M g . Let {ζ 1 , . . ., ζ δ } ⊂ C n be a set of solutions of F such that ζ D i = 0 for all i. By Theorem 2.1, for each of these solutions there is an eigenvalue λ g of the matrix M g and a space of dimension m ≥ 1, spanned by the rows of a matrix V, of eigenvectors of M. Suppose we have computed this matrix V (for instance, using Algorithm 1). We write A 0 = {α 1 , . . . , α k } ⊂ N n and assume that α 1 = 0. The unique eigenvalue (Corollary 2.1) of M x α j corresponding to V is denoted by λ i j and can be computed using Remark 2.4. 
As ζ D = 0, by Theorem 2.1, there is v ∈ V such that v · COKER(F, E, D) = C · ζ D i , for non-zero C ∈ C. Therefore, by Theorem 2.1, We would like to recover the coordinates of ζ i from the tuple (ζ α 2 i , . . . , ζ α k i ) ∈ C k−1 . Assuming ZA 0 = Z n and applying (2.3), we find (2.5) Remark 2.5. In many cases, one can take A 0 = {0, e 1 , . . . , e n }, in which case m j,ℓ = 1 if j = ℓ + 1 and m j,ℓ = 0 otherwise. Motivated by this discussion, we make the following definition. • Rank condition: There exists f 0 ∈ R A 0 such that rank(N f 0 ) = HF(F , E; D). • Lattice condition: The set A 0 satisfies 0 ∈ A 0 and ZA 0 = Z n . The results in this section lead to Algorithm 2 for solving F (x) = 0, given an admissible tuple (F , A 0 , (E 0 , E), D). This algorithm computes a candidate set of solutions containing every solution in (C\{0}) n . It might contain spurious points, since there might be eigenvalues that do not correspond to solutions but do come from a common eigenvector of M, see Example 5. One can identify these points, for instance, by evaluating the relative backward error, see Equation (4.1). In what follows, we discuss some aspects of the algorithm in more detail. In practice, the number of columns ∑ s i=1 #E i of the Macaulay matrix M(F , E; D) is often much larger than the number #D of rows. Multiplying from the right by a random matrix of size (∑ s i=1 #E i ) × #D does not affect the left nullspace, but reduces the complexity of computing it. This is what happens in , that is, the number of columns is not much larger than the number of rows, this step can be skipped. Remark 2.6. By the lattice condition, we have that 1 ∈ R A 0 . However, the rank condition might not be satisfied for f 0 = 1. That is, it might happen that rank(N 1 ) < Coker(F , E; D). This is the case, for instance , in Example 3. To overcome this issue, we choose f 0 randomly in R A 0 . Numerical considerations. In theory, we may pick B arbitrary such that N f 0 ,B is an invertible matrix. In practice, it is crucial to pick B such that N f 0 ,B is well-conditioned. This was shown in [46,47]. For that, we select a random f 0 and, in line 6, we use a standard numerical linear algebra procedure for selecting a well-conditioned submatrix from N f 0 : QR factorization with optimal column pivoting. This computes matrices Q 0 , R 0 and a permutation p = (p 1 , . . ., p] is N f 0 with its columns permuted according to p. The leftmost γ columns of R 0 form the square, upper triangular matrixR 0 . The column permutation p is such that columns p 1 , . . . , p γ form a well-conditioned submatrix of N f 0 . In line 8, these columns are selected to form the matrix N f 0 ,B . Using the identities N * f 0 ,B M * g = N * g,B , where * is the conjugate transpose, and N f 0 ,B = Q 0R0 , we see that the solution Q * 0 M * g Q 0 to the linear systemR * 0 X = N * g,B Q 0 is similar to the matrix M * g in this section, and it can be obtained by back substitution since R * 0 is lower triangular. Since we extract the coordinates of the roots form the eigenvalues, not the eigenvectors, we may work with Q * 0 M * g Q 0 as well. This is exploited in line 11. In line 17, we invoke Algorithm 1. Lines 18-25 are a straightforward implementation of Remark 2.4. As pointed out, in the case m = 1, λ i j can alternatively be computed as a Rayleigh quotient. Remark 2.7. 
Alternatively, by Theorem 2.1, when V is one-dimensional, we may check if, for a vector v ∈ V, there is a non-zero constant C and ζ ∈ C n such that C · ζ D = v ζ · Coker(F , E; D). If 0 ∈ D and ZD = Z n , we scale v such that C = 1 and find ζ from ζ D as above. When the matrices M g are multiplication operators, this approach is usually referred as the eigenvector criterion [1]. This idea can be extended to the case where V has dimension > 1. Extracting vectors of the form ζ D from a vector space can be viewed as a harmonic retrieval problem, see [49,Sec. 3.3]. 6: Q 0 , R 0 , p ← apply QR decomposition with optimal pivoting to N f 0 7:R 0 ← square, upper triangular matrix given by the first γ columns of R 0 8: B ← exponents in E 0 corresponding to columns p 1 , . . . , p γ of N f 0 9: for j = 1, . . ., k do 10: Algorithm 2 SOLVE As our input is an admissible tuple, the compatibility condition implies that the the matrix N f 0 is well-defined. By the rank condition and the fact that f 0 is generic, N f 0 is has full rank. See the discussion below Lemma 2.1. Hence, the matrices M g and M h are well-defined and agree with the ones defined in (2.2). Let ζ 1 be a solution of F such that ζ 1 ∈ (C \ {0}) n . As ζ D 1 = 0, by Theorem 2.1, we can assume with no loss of generality that µ 1 = g f 0 (ζ 1 ). Let V := GETEIGENSPACE (µ 1 ,V µ 1 , M h ), for generic h ∈ R A 0 . As h is generic, by Proposition 2.2, all vectors in V belong to the same eigenvalue of M x α j , for j = 1, . . . , k. Hence, by Proposition 2.3, there is a non-zero constant C ∈ C such that the element λ 1, j computed in line 23 agrees with C ζ α j 1 , for α j ∈ A 0 . Observe that, as ζ 1 ∈ (C \ {0}) n , λ 1, j = 0. Therefore, as the admissible tuple satisfies the lattice condition Z A 0 = Z n , we can recover the coordinates of ζ 1 using (2.4) and ζ 1 ∈ Z. It is clear that the size of the matrices in Algorithm 2 depends on the cardinality of the exponent sets in the admissible tuple. Constructing admissible tuples for certain families of polynomial systems is an active field of research, strongly related to the study of regularity of ideals in polynomial rings, in the sense of commutative algebra [26,Sec. 20.5]. Recent progress in this area, for the case where n = s, was made in [11]. In the next section, we will summarize some of these results by explicitly describing some admissible tuples for systems with important types of structures. As mentioned above, the matrices M g considered in this section play the role of multiplication operators in the algebra R/I, where I is the ideal generated by the polynomials in F [22,Ch. 2]. In the very general setting we consider here, assuming only that (F , A 0 , (E 0 , E), D) is an admissible tuple, the matrices M g do not necessarily represent such multiplication operators. However, under some extra assumptions, they do commute. In this case, we can simplify Algorithm 2 by computing the simultaneous Schur factorization of (M x α ) α∈A 0 as in [11,Sec. 3.3]. Proof. In what follows, we fix two vector spaces I D := Im(Sylv (F,(E 1 ,...,E s );D) ) and I D+A 0 := Im(Sylv (F,(E 1 +A 0 ,...,E s +A 0 );D+A 0 ) ). Observe that, for every g ∈ R A 0 and f ∈ I D , g f ∈ I D+A 0 . We In this proof, for each g ∈ R A 0 , we consider the mapM g := N −1 f 0 ,B · M g · N f 0 ,B . The maps M g andM g are similar, so it is enough to prove thatM g 1M g 2 =M g 2M g 1 . It is not hard to show that for v, w ∈ C γ , such thatM g (v) = w ∈ C γ , we have g (v · B) ≡ f 0 (w · B) modulo I D . 
First, observe that, for every v ∈ C γ , As f 0 h 2 +g 2 h 1 ∈ I D+A 0 , the claim follows. Since g 1 g 2 = g 2 g 1 , it also holds that Moreover, by the assumption on the difference of coranks, the dimension of the vector space Construction of admissible tuples In this section, we fix an s-tuple of sets of exponents A := (A 1 , . . . , A s ), where A i ⊂ N n , and consider a polynomial system F = ( f 1 , . . ., f s ) ∈ R A 1 × · · · × R A s . We construct tuples that are admissible under mild assumptions on F (Assumption 1). This allows us to compute the solutions of the system F using Algorithm 2. Section 3.1 states explicit formulas for admissible tuples that in practice are near-optimal in the case where s = n. In the overdetermined case (s > n), we can obtain admissible tuples leading to smaller matrices by using incremental constructions. These are the topic of Subsection 3.2. The section uses the following notation. The convex hull of a finite subset E ⊂ R n is the polytope Conv(E) ⊂ R n defined as, By a lattice polytope we mean a convex polytope P ⊂ R n that arises as Conv(E), where E ⊂ N n . Such a lattice polytope is called full-dimensional if it has a positive Euclidean volume in R n . Given two polytopes P 1 , P 2 ⊂ R n and c ∈ N, we denote by P 1 + P 2 the Minkowski sum of P 1 , P 2 and by c · P 1 the c-dilation of P 1 , that is, We denote the Cartesian product of two subsets P 1 ⊂ R n 1 and P 2 ⊂ R n 2 by P 1 × P 2 := {(α, β ) : α ∈ P 1 , β ∈ P 2 } ⊂ R n 1 × R n 2 = R n 1 +n 2 . Throughout, we use the notation ∆ n = Conv({0, e 1 , . . . , e n }) ⊂ R n for the standard simplex in R n . Figure 1, the polytopes P 1 := Conv(E 1 ), P 2 := Conv(E 2 ), and P 1 + P 2 ⊂ R 2 are displayed. Observe that P 2 is the two-dimensional standard simplex ∆ 2 . △ + = Explicit constructions We present explicit constructions of admissible tuples for the following types of polynomial systems, listed in (more or less) increasing order of generality. 1. Dense systems. These are systems for which f i may involve all monomials of degree at most d i , where (d 1 , . . . , d s ) ∈ N s >0 is an s-tuple of positive natural numbers. For dense systems, we have A i = {α ∈ N n : α 1 + · · · + α n ≤ d i } = (d i · ∆ n ) ∩ N n . 2. Unmixed systems. We say that the polynomial system F is unmixed if there is a fulldimensional lattice polytope P and integers d 1 , . . . , d s such that d i · P = Conv(A i ). The codegree of P is the smallest t ∈ N >0 such that t · P contains a point with integer coordinates in its interior. Note that dense systems can be viewed as unmixed systems with P = ∆ n . 3. Multi-graded dense systems. A different, natural generalization of the dense case allows different degrees for different subgroups of the variables x 1 , . . . , x n . Let {I 1 , . . . , I r } be a partition of {1, . . . , n}, i.e. I j ⊂ {1, . . ., n}, I j ∩ I k = ∅ and r j=1 I j = {1, . . ., n}. This way we obtain subsets x I 1 , . . . , x I r ⊂ {x 1 , . . . , x n } of the variables, indexed by the I j . In a multigraded dense system, f i may contain all monomials of degree at most d i, j in the variables x I j . If the variables are ordered such that the first n 1 variables are indexed by I 1 , the next n 2 variables by I 2 and so on, this means A i = ((d i,1 · ∆ n 1 ) × · · · × (d i,r · ∆ n r )) ∩ N n . Necessarily we have n 1 + · · · + n r = n. A dense system is a multi-graded dense system with r = 1. 4. Multi-unmixed systems. 
This is a generalization of the unmixed and the multi-graded dense case, where there are full-dimensional lattice polytopes P 1 ⊂ R n 1 , . . . , P r ⊂ R n r such that 0 ∈ P i and n = ∑ i n i and for each i ∈ {1, . . . , s}, an r-tuple That is, the convex hull of A i is the product of dilations of the polytopes P 1 , . . . , P r . Note that a multi-graded dense system is a multi-unmixed system with P i = ∆ n i , and an unmixed system is a multi-unmixed system with r = 1. 5. Mixed systems. This is the most general case, our only assumption on each A i is that the lattice polytope ∑ s i=1 Conv(A i ) ⊂ R n is full-dimensional. If the full-dimensionality requirements in the previous list are not fulfilled, one can reformulate the system using fewer variables. For polynomial systems from these nested families, admissible tuples are presented in Table 1. In what follows, we discuss them in more detail. The tuples presented in Table 1 are admissible under a zero-dimensionality assumption on the system F . Unfortunately, it is not enough to require that F (x) = 0 has finitely many solutions in C n or (C \ {0}) n . Loosely speaking, we need that the lifting of F to a certain larger solution space has finitely many solutions. This is best understood in the context of toric geometry. We refer the reader to [44,Sec. 3] or [11, Sec. 2] for a description of the zero-dimensionality assumption in this language. Here, we omit terminology from toric geometry and state the assumption in terms of face systems, following [12]. We will use the notation For any vector v ∈ R n , we define where v, α = v 1 α 1 + · · · + v n α n ∈ R. For i = 1, . . ., s, fix any β i,v ∈ A i,v . This gives a new system called the face system associated to v. The exponents α − β i,v , α ∈ A i occurring in the polynomials f i,v lie in a lattice of rank < n when v = 0. We denote this lattice by Let r v be the rank of M v . Applying a change of coordinates, F v is a system of Laurent polynomials in r v variables on the torus (C \ {0}) r v . Its solutions are independent of the choice of β i,v ∈ A i,v . Assumption 1 (Zero-dimensionality assumption). For every v ∈ R n , the face system F v (x) = 0 has finitely many (possibly zero) solutions in (C \ {0}) r v . Remark 3.1. Assumption 1 holds for a generic element F ∈ R A 1 × · · · × R A s , in the sense of Section 2. In fact, for a generic system F all face systems F v for v = 0 have no solutions in (C \ {0}) r v , and the condition for this to hold only depends on the coefficients associated to some vertices of the polytopes Conv(A i ), see [17]. The fact that we can allow finitely many solutions for all face systems comes from the recent contributions [44,11]. In practice, this means that our algorithm is robust in the presence of isolated solutions at or near infinity (where this is understood in the appropriate toric sense). Table 1. Then, we have that (F , A 0 , (E 0 , . . . , E s ), D) is an admissible tuple. Proof. We sketch the proof. We need to show that the three conditions in Definition 2.1 are satisfied. Observe that, by construction, the elements from the tuple satisfy the Compatibility condition and A 0 satisfies the Lattice condition. By Assumption 1, for generic f 0 ∈ R A 0 , the system ( f 0 , . . ., f s ) has no solutions on the toric variety associated to the lattice polytope Conv(A 1 ) + · · · + Conv(A s ) and we can adapt [11,Thm. 4.3] Macaulay matrices defined by the tuples from Table 1 have been used in different algorithms for solving sparse polynomial systems, e.g. 
sparse resultants [30], truncated normal forms [47], Gröbner bases [8,9], and others [38]. When restricted to Macaulay matrices, these constructions are often near-optimal when s = n. However, there exist other kind of smaller matrices which can be also used to solve the system [7,10]. When s > n, we can often work with much smaller Macaulay matrices. This is the topic of the next subsection. Incremental constructions Even though the tuples from Theorem 3.1 are admissible, they might lead to the construction of unnecessarily big matrices in Algorithm 2. To avoid this, we present an incremental approach which leads to the construction of potentially smaller matrices. For ease of exposition, we consider only the unmixed case. The ideas can be extended to the other cases. In what follows, we fix a polytope P such that 0 ∈ P and integers d 0 , . . ., d s ∈ N >0 . We consider sets of exponents A 0 , A 1 , . . . , A s ⊂ N n such that, for each i ∈ {0, . . ., s}, we have Conv Theorem 3.2. With the above notation, consider an unmixed polynomial system Proof. By construction, the tuple satisfies the Compatibility and Lattice conditions. By assumption, it satisfies the Rank condition, so it is admissible. The proof follows as in Theorem 3.1. The bound upper bound on λ obtained in Theorem 3.2 is not tight for overdetermined systems. Below, we will present a broad class of overdetermined unmixed systems, namely semi-regular* sequences, for which we can improve it. In these cases, the matrices M g from Equation (2.2) are not multiplication operators. For readers familiar with the concept of Castelnuovo-Mumford regularity, we note that this happens because the degree D λ belongs to the regularity of { f 0 , F }, but not necessarily to that of F . Theorem 3.2 suggests an algorithm for finding an admissible tuple for an unmixed system F : we simply check, for a random element f 0 ∈ R A 0 and increasing values of λ , whether rank( In order to do this efficiently, instead of computing Coker(F , E λ +1 ; D λ +1 ) directly as the left nullspace of the large matrix M(F , E λ +1 ; D λ +1 ), we will obtain it from the previously computed Coker(F , E λ ; D λ ) and a smaller Macaulay matrix. This technique was applied in the dense setting (P = ∆ n ) in [4,40], where it is also called 'degree-by-degree' approach. See also [41] for a recent complexity analysis. Note that, by construction, The first step is to construct the following 2 × 2 block matrix Here id denotes the identity matrix of size #(D λ +1 \ D λ ). Note that the columns of the matrix (Coker(F , E λ ; D λ ) × id) are indexed by D λ +1 , where the first block column is indexed by s \ E λ s ) and construct the Macaulay matrix Here we require that the ordering of the rows is compatible with the ordering of the columns in (3.3). Let L λ +1 be a left nullspace matrix of the matrix product . The power of this approach lies in the fact that (3.4) is much smaller than M(F , E λ +1 ; D λ +1 ), which leads to a much cheaper left nullspace computation. This gives an iterative algorithm for updating the left nullspace matrix Coker(F , E λ ; D λ ). We start our iteration by considering λ = max i d i , as we want to take into account all of the equations. This discussion is summarized in Algorithm 3. Note that the algorithm computes the cokernel Coker(F , E; D) for the admissible tuple as a by-product, as well as the matrix N f 0 . This allows us to skip the steps before line 6 in Algorithm 2. Remark 3.4 (Other incremental constructions). 
There are alternative incremental constructions for the matrices N f 0 which also reuse information from previous steps to speed up the computations. An example is the F5 criterion in the context of Gröbner bases [31]. These ideas extend naturally to the mixed setting, see [9]. However, these approaches based on monomial orderings lead to bad numerical behaviour. In the context of sparse resultants for mixed systems, Canny and Emiris [28] proposed an alternative incremental algorithm to construct admissible tuples leading to smaller Macaulay matrices. Their procedure can be enhanced with the approach followed in this section. In the rest of this subsection, we identify a broad class of overdetermined unmixed systems for which we can obtain smaller admissible tuples than the ones in Theorem 3.1. We will need some more notation. The Ehrhart series of a polytope P is the series The Hilbert series of a polynomial system F 0 : Definition 3.1 (Semi-regularity*). We say that F 0 is a semi-regular* sequence if where [ · ] + means that we truncate the series in its first negative coefficient. Observe that we write semi-regular* sequence with an asterisk as the usual definition of semi-regular sequence asks for this condition on the Hilbert series to hold for every subsystem ( f 0 , . . . , f i ), i ≤ s. However, semi-regular sequences are too restrictive for our purposes. Even in the case where P is a standard simplex, semi-regular* sequences are not understood as well as regular sequences. For example, Fröberg's conjecture states that being a semi-regular* sequence is a generic condition [32]. This conjecture, supported by a lot of empirical evidence, was extended to the unmixed case [31]. Theorem 3.3. Consider an unmixed polynomial system F ∈ R A 1 × · · · × R A s and a polynomial f 0 ∈ R A 0 , with Conv(A i ) = d i · P. Let λ min be the smallest integer among the degrees of the monomials in ES P (t) ∏ s i=0 (1 − t d i ) standing with a non-positive coefficient. We have that, if ( f 0 , F ) is a semiregular* sequence, then the tuple (( f 0 , F ), A 0 , (E λ min 0 , E λ min ); D λ min ) is admissible. Proof. The proof follows from the fact that HF(( f 0 , F ), E λ min 0 , E λ min ); D λ min ) = 0 as the sequence is semi-regular*. It follows directly from Theorem 3.2 that, whenever ( f 0 , F ) is semi-regular*, λ min ≤ ∑ i d i − CODEGREE(P) + 1. In Section 4.2, we present generic families of zero-dimensional overdetermined systems F such that ( f 0 , F ) is semi-regular*. For these systems, we show that the previous inequality can be strict. Semi-regular* sequences give us an inexpensive heuristic to discover values for λ for which we can obtain admissible tuples. It was observed in practice [2,31] that for many systems F not having much solutions outside the torus (see Remark 3.1), they can be extended to semi-regular* sequences. Moreover, there are asymptotic estimates for the expected value of λ [2]. Experiments In this section we illustrate several aspects of the methods presented in this paper via numerical experiments. We implemented these algorithms in the new Julia package EigenvalueSolver.jl, which is freely available at https://github.com/simontelen/JuliaEigenvalueSolver. For all computations involving polytopes, we use Polymake.jl (version 0.5.3), which is a Julia interface to Polymake [34]. We compare our results with the package HomotopyContinuation.jl (version 2.3.1), which is state-of-the-art software for solving systems of polynomial equations using homotopy continuation [14]. 
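Before describing the experimental set-up further, the semi-regularity heuristic of Theorem 3.3 above is easy to make explicit in the dense case P = Δ_n, where ES_P(t) = 1/(1-t)^(n+1): expand the truncated power series of ∏_i (1 - t^(d_i)) / (1-t)^(n+1) and take the first degree carrying a non-positive coefficient. The sketch below is an illustration under this assumption; the function name and the default truncation bound are our choices. With degrees (1, 6, 6, 6, 6, 6, 6) for (f_0, F) and n = 3 it returns 9, the value of λ_min quoted for that family in Section 4.2.1.

# Coefficients of prod_i (1 - t^(d_i)) / (1 - t)^(n+1) up to degree maxdeg; the
# first degree with a non-positive coefficient is lambda_min from Theorem 3.3.
function lambda_min_dense(n::Int, degs::Vector{Int}; maxdeg::Int = sum(degs))
    num = zeros(Int, maxdeg + 1); num[1] = 1     # entry k+1 holds the t^k coefficient
    for d in degs                                # multiply by (1 - t^d)
        nxt = copy(num)
        for k in d:maxdeg
            nxt[k + 1] -= num[k - d + 1]
        end
        num = nxt
    end
    for _ in 1:(n + 1)                           # divide by (1 - t): take partial sums
        num = cumsum(num)
    end
    idx = findfirst(<=(0), num)
    return idx === nothing ? nothing : idx - 1   # nothing means: increase maxdeg
end

lambda_min_dense(3, [1, 6, 6, 6, 6, 6, 6])       # returns 9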
All computations were run on a 16 GB MacBook Pro with an Intel Core i7 processor working at 2.6 GHz. To evaluate the quality of a numerical approximation ζ ∈ C n of a solution for a polynomial system F given by (3.1). We define the backward error BWE(ζ ) of ζ as This error can be interpreted as a measure for the relative distance of F to a system F ′ for which Additionally, we validate our computed solutions via certification. For that, we use the certification procedure implemented in the function certify of HomotopyContinuation.jl, which is based on interval arithmetic, as described in [13]. This function takes as an input a list of approximate solutions to F and tries to compute a list of small boxes in C n , each of them containing an approximate input solution and exactly one actual solution to F . The total number of connected components in the union of these boxes is denoted by crt in what follows. Each of these connected components contains exactly one solution of F , and one or more approximate input solutions. This means that crt is a lower bound on the number of solutions to F . If crt equals the number of solutions, the solutions of F are in one-to-one correspondence with the approximate input solutions. In this case, we say that all solutions are certified. The function certify assumes that F is square, i.e. F should have as many equations as variables (s = n). If this is not the case (s > n), we use certify on a system obtained by taking n random C-linear combinations of f 1 , . . ., f s . The main function of our package EigenvalueSolver.jl is solve EV, which implements Algorithm 2. It takes as an input an admissible tuple (see Definition 2.1). This tuple can be computed using the auxiliary functions provided in our implementation, which are tailored to take into account the specific structure of the systems. These functions use the explicit and incremental constructions from Section 3. It is common in applications that we have to solve many different generic systems F with the same supports A 1 , . . . , A s . In this case, the computation of the admissible tuple can be seen as an offline computation that needs to happen only once. We will therefore report both the offline and the online computation time. The offline computation time is the time needed for computing an admissible tuple and executing solve EV. The online computation re-uses a previously computed admissible tuple to execute solve EV. Table 2 summarizes the notation that we use to describe our experiments. The section is organized as follows. In Section 4.1, we consider square systems (s = n) and show how to use EigenvalueSolver.jl to solve them. In Section 4.2, we solve overdetermined systems (s > n) using our incremental algorithm. We perform several experiments summarized in Table 4 and Table 5. In Section 4.3, we consider systems for which one solutions drifts off to 'infinity'. In Section 4.4, we compare our algorithm with homotopy continuation methods. Square systems In this subsection, we demonstrate some of the functionalities of EigenvalueSolver.jl by solving square systems, that is s = n, for each of the families in Table 1. The code used for the examples can be found at https://github.com/simontelen/JuliaEigenvalueSolver in the Jupyter notebook /example/demo EigenvalueSolver.ipynb. We fix the parameters of Table 1 and consider specific supports A i as described below. 
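One remark on the validation just described, before the test systems are generated: the reported quantity BWE is a normalised residual. The sketch below shows one natural variant in the dictionary encoding used earlier, namely dividing |f_i(ζ)| by Σ_α |c_{i,α} ζ^α|; this particular normalisation is our assumption for illustration and is not necessarily the exact convention of Equation (4.1) or of EigenvalueSolver.get_residual.

# A normalised residual for a candidate solution zeta: for f_i = sum_a c_{i,a} x^a,
# divide |f_i(zeta)| by sum_a |c_{i,a} zeta^a|; a small value means that zeta solves
# a system that is relatively close to F and has the same supports.
function relative_residual(F::Vector{<:Dict}, zeta::Vector)
    worst = 0.0
    for f in F
        terms = [c * prod(zeta .^ collect(a)) for (a, c) in f]
        worst = max(worst, abs(sum(terms)) / sum(abs, terms))
    end
    return worst
end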
We construct random polynomial systems by assigning random real coefficients to each of the monomials, which we draw from a standard normal distribution. By Remark 3.2, the number γ equals the number of solutions δ for all examples in this subsection. For our first example, we intersect two degree 20 curves in the plane. That is, we consider a square, dense system F 1 with n = 2 and d 1 = d 2 = 20. The equations are generated by the following simple commands: In the previous line, the option DBD = false indicates that we do not want to use the 'degree-bydegree' approach for solving this system, that is, the incremental approach described in Section 3.2. Experiments show that this strategy is only beneficial for square systems with n ≥ 3. The letters CI in the name of the function stand for complete intersection, which indicates that a zero-dimensional square system is expected as its input. In this example, we have #D = 820 and the computation took t off = 0.83 seconds. To validate the solutions, we compute their backward errors. BWEs = EigenvalueSolver.get_residual(f, sol, x) The maximal value, computed using the command maximum(BWEs), is BWE ≈ 10 −12 . The function certify from HomotopyContinuation.jl certifies crt = 400 distinct solutions. If we perform the same computation with parameters n = 3, (d 1 , d 2 , d 3 ) = (4,8,12), we obtain δ = γ = crt = 384, #D = 2300, t off = 3.10, BWE ≈ 10 −11 . In this case, we obtain δ = γ = crt = 240, #D = 685, t off = 0.94, BWE ≈ 10 −11 . We remark that the function solve CI unmixed also returns the admissible tuple (A 0 , E, D), so that it can be used to solve another generic unmixed system with the same parameters, without redoing the polyhedral computations to generate this tuple. This can be done in the following way, sol = EigenvalueSolver.solve_EV(f, x, A0, E, D; check_criterion = false) The option check criterion = false in the previous line indicates that the input tuple is admissible, so we do not need to spend time on checking whether the criterion in Lemma 2.1 is satisfied. Using this option, the online computation is faster and takes t on = 0.41 seconds, yet the parameters δ , γ, crt, BWE are comparable to the offline case. To illustrate how the unmixed function exploits the structure of the equations, in Figure 2, we plot the exponents in D for this example, together with the exponents in D for our dense example F 1 . In both plots, we have highlighted the exponents in the set B that were selected using QR factorization with optimal column pivoting. These monomial bases clearly do not correspond to any standard (Gröbner) or border basis. Figure 2 should be compared to, for instance, Figure 2 in [47]. Table 1, for the dense systems F 1 (left) and the unmixed system We can solve multi-graded dense and multi-unmixed systems using the implemented functions solve CI multi dense and solve CI multi unmixed, respectively. Table 3 summarizes our choice of parameters and the results of our experiments for these systems. To conclude this subsection, we present a classical example of a square mixed system in n = 3 variables coming from molecular biology [29,Sec. 3.3]. The following code generates and solves these equations: In this case, we obtain δ = γ = crt = 16, #D = 200, t off = 0.53, t on = 0.02, BWE ≈ 10 −13 . The function certify tells us that all 16 solutions are real, confirming the observation made in [29]. Overdetermined systems We now consider examples of overdetermined systems, by which we mean cases where s > n. 
We will limit ourselves to unmixed systems and use Algorithm 3 to find admissible tuples leading to small Macaulay matrices. These systems arise, for instance, in tensor decomposition problems [48]. We present examples where γ is significantly larger than δ and show that, nevertheless, our algorithms successfully extract δ < γ relevant eigenvalues and consistently return all solutions of the input systems. We observe that the Macaulay matrices constructed in this section are smaller than the ones obtained using other symbolic-numerical techniques as (sparse) resultants [27] or its generalization [38]. The admissible tuples used in those symbolic-numerical algorithms lead to multiplication operators, for which γ = δ . As observed in Remark 3.3, our matrices M g are too large to be multiplication operators. The extra time needed for computing the eigenvalues of these larger matrices is negligible compared to the time won by computing M g from a smaller Macaulay matrix. The overdetermined systems considered in this section are constructed as follows. For a fixed number of variables n, number of solutions δ and set of exponents A, we generate δ random points ζ 1 , . . . , ζ δ in C n by drawing their coordinates from a complex standard normal distribution. We construct a Vandermonde type matrix Vdm whose rows consist of the vectors ζ A i / ζ A i 2 , i = 1, . . ., δ . The nullspace of Vdm is computed using SVD and its columns represent s = #A − δ polynomials f 1 , . . . , f s with support A. If we do not pick too many points, we have that s > n and the solutions of F = ( f 1 , . . ., f s ) are exactly the points ζ 1 , . . ., ζ δ . Dense, overdetermined systems In this subsection, we consider dense overdetermined systems, i.e. A 0 = ∆ n ∩ N n and A = (d · ∆ n ) ∩ N n for some degree d ∈ N >0 . The offline computation uses Algorithm 3 to find an admissible tuple, as well as a left nullspace, and then execute Algorithm 2 from line 6 on. The online computation uses this admissible tuple to execute Algorithm 2 directly. This means that the offline version uses an incremental strategy for computing the left nullspace, while the online version works directly with the large Macaulay matrix. The online version can be adapted to work incrementally as well. We have chosen not to do this in order to illustrate that, depending on n, s, the incremental approach may be less or more efficient than the direct approach. In cases where the incremental approach is more efficient, this may cause t off < t on . In the square case (s = n), this happens for n ≥ 3 [40,41], but our results show that in the overdetermined case this might not happen. Further research is necessary to make an automated choice. Table 4 gives an overview of the computational results. The column indexed by #D represents the size of the matrix that would be used in classical approaches. This is discussed in the final paragraph of this subsection. The first 10 rows in Table 4 correspond to systems of 6 equations in 3 variables of increasing degree d = 2, 4, . . ., 20. Note that γ > δ for d > 4. In all cases, δ distinct solutions were computed using our algorithms and crt = δ . This means that exactly δ out of γ eigenvalues were selected and correctly processed to compute solution coordinates. The maximum backward error grows faster with the degree of the equations than for square systems [47]. This can be remedied by using larger admissible tuples to bring γ closer to δ , at the cost of computing cokernels of larger matrices. 
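The point-prescribing construction described at the beginning of this subsection is easy to reproduce; the sketch below (the function name and the dense support used in the example are our choices) returns the prescribed points together with the coefficient vectors of s = #A - δ polynomials with support A vanishing at all of them.

using LinearAlgebra

# Prescribe delta random points, build the row-normalised Vandermonde-type matrix
# with rows zeta_i^A / ||zeta_i^A||_2, and read the coefficient vectors of the
# equations from its nullspace (nullspace uses the SVD internally).
function overdetermined_from_points(n::Int, delta::Int, A::Vector)
    Z = randn(ComplexF64, delta, n)                  # the points zeta_1, ..., zeta_delta
    Vdm = [prod(Z[i, :] .^ collect(a)) for i in 1:delta, a in A]
    Vdm ./= map(norm, eachrow(Vdm))                  # normalise each row
    return Z, nullspace(Vdm)                         # columns: coefficients of f_1, ..., f_s
end

# Example: support of all monomials of degree at most 3 in 3 variables (#A = 20),
# with delta = 14 prescribed points, giving s = 6 cubic equations.
A3 = [(a, b, c) for a in 0:3 for b in 0:3 for c in 0:3 if a + b + c <= 3]
Z, coeffs = overdetermined_from_points(3, length(A3) - 6, A3)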
However, our experiment shows that we can find certified approximations for all 1765 intersection points of 6 threefolds of degree 20 within less than 10 minutes. All of these are within two Newton refinement steps from having a backward error of machine precision. The next 5 rows of Table 4 contain results for 18 dense equations in 6 variables of increasing degree d = 2, 3, . . ., 6. Note that t on > t off for d > 2. This is due to the incremental approach for the offline phase, as mentioned above. In the following 7 rows of Table 4, we illustrate the effect of increasing the number of variables when we fix the degree d = 3. We work with overdetermined systems for which s = 2n. Although the complexity of eigenvalue methods usually scales badly with the number of variables, these results show that when the system is 'sufficiently overdetermined', our algorithms can find feasible admissible tuples to solve cubic equations in 8 variables in no more than 20 seconds. Finally, the last rows of Table 4 correspond to systems of cubic equations in 15 variables with an increasing number δ = 200, 300, . . ., 600 of solutions. Note that the computation time decreases with the number of solutions. The reason is that for all these values of δ , we can work with the same support D for the Macaulay matrix. This means that the matrix has the same number of rows for each system. The number of columns, however, depends on the number of equations, which increases with decreasing δ by construction. For δ = 700, we need a larger set of exponents D, causing memory issues. All systems ( f 0 , F ) appearing in Table 4 are semi-regular*. By Theorem 3.3, the minimal value of λ such that (( f 0 , F ), A 0 , (E λ 0 , E λ ); D λ ) is an admissible tuple is the degree λ min of the n Table 4: Computational results for overdetermined, dense systems. See Table 2 for the notation. lowest-degree monomial with a non-positive coefficient in To illustrate the gain of using such a minimal λ min , we included the number #D which corresponds to the number of monomials in the D λ for the smallest λ which gives γ = δ . That is, the smallest λ for which the matrices M g in our algorithm represent multiplication matrices. For n = 3, s = d = 6, λ min is 9, and the admissible tuple has #D = #(9 ∆ 2 ∩ Z 2 ) = 220 lattice points. Multiplication matrices are obtained from #D = #(10 ∆ 2 ∩ Z 2 ) = 286. To see the benefit of our incremental construction over the bounds from Unmixed, overdetermined systems We now use our algorithms to solve overdetermined unmixed systems. The results are summarized in Table 5. First, we set n = 3 and choose δ such that s = 6. We define A 0 as the columns of  Solutions at infinity An important feature of our algorithms is that they can deal with systems having isolated solutions at or near infinity. To illustrate this, we work with the same set-up as in Section 4.2.1 with parameters n = 7, d = 3 and s = 14, implying δ = 106. We generate 106 random complex points ζ 1 , . . ., ζ 106 as before, and then multiply the coordinates of ζ 106 by a factor 10 e for increasing values of e. That is, we let one of 106 solutions drift off to infinity. Figure 3 shows the maximal 2-norm of the computed solutions as well as the maximal backward error BWE for e = 0, . . ., 14. The results clearly show that the accuracy is not affected by the 'outlier' solution. As e grows larger, the solution ζ 106 corresponds to an isolated solution of the face system F v with v = (1, 1, 1, 1, 1, 1, 1), see Remark 3.1. 
For all considered values of e, our algorithm computed crt = δ = 106 distinct certified approximate solutions. Comparison with homotopy continuation methods Homotopy continuation algorithms form another important class of numerical methods for solving polynomial systems [42]. These methods transform a start system with known solutions continuously into the target system, which is the system we want to solve, and track the solutions along the way. This process can usually only be set up for square systems, i.e. s = n. In these cases, especially when n = s is large (≥ 4), homotopy continuation methods often outperform eigenvalue methods. When the system F is overdetermined (s > n), homotopy methods solve a square system F square obtained by taking n random C-linear combinations of the s input polynomials. The set of solutions of F is contained in the set of solutions of F square , so that the solutions of F can be extracted by an additional 'filtering' step. Often F square has many more solutions than F , so that many of the tracked paths do not end at a solution of F . Below, we use the notation δ square for the number of solutions of F square . Several implementations of homotopy methods exist, including Bertini [3] and PHCpack [50]. Here, we choose to compare our computational results with the relatively recent Julia impementation HomotopyContinuation.jl [14]. The motivation is twofold: it is implemented in the same programming language as EigenvalueSolver.jl, and it is considered the state of the art for the functionalities we are interested in. We point out that due to the extremely efficient implementation of numerical path tracking in HomotopyContinuation.jl, the package can outperform our eigenvalue solver even when δ square is significantly larger than δ . For instance, in the case n = 3, d = 20 from Table 4, we have δ square = 8000 > δ = 1765, but HomotopyContinuation.jl tracks all these 8000 paths in no more than 40 seconds. The performance is comparable for the row n = 6, d = 5 in Table 4, where HomotopyContinuation.jl tracks δ square = 15625 paths in about 45 seconds. For all the above computations, we used the option start system = :total degree, which is optimal for dense systems and avoids polyhedral computations to generate start systems. However, for strongly overdetermined systems, our algorithm outperforms the homotopy approach. For example, for all the cases n = 15, d = 3, Table 4 shows that our algorithms take no more than 2 minutes for δ ≤ 600. On the other hand, the number δ square equals 3 15 = 14348907, for which HomotopyContinuation.jl shows an estimated duration of more than 2 days. Additionally, for the case n = 15, d = 2 in Table 5, we have δ square = 32765 and the path tracking takes over 10 minutes, as compared to 48 seconds for the online version of our algorithm and 65 seconds for the offline version. In this last case we use the default start system = :polyhedral. We conclude that for strongly overdetermined systems (s ≫ n), EigenvalueSolver.jl outperforms HomotopyContinuation.jl, which suggests that eigenvalue methods are more suitable to deal with this kind of systems.
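For completeness, the squaring-up step used in this comparison can be written down directly with HomotopyContinuation.jl. The toy system below and the recovery of the original solutions by residual filtering are our illustration; only @var, System, solve with its start_system option and solutions are used, all of which are part of that package.

using HomotopyContinuation

# Square up an overdetermined system (here s = 4 > n = 3, with (1, 1, 1) among its
# solutions) by taking n random complex linear combinations, then solve the square
# system by homotopy continuation; solutions of the original system are among the
# results and can be singled out by their small residual on the original equations.
@var x y z
f = [x*y - z, x^2 + y^2 + z^2 - 3, x + y + z - 3, x*z - y^2]
n, s = 3, length(f)
A = randn(ComplexF64, n, s)
f_square = [sum(A[i, j] * f[j] for j in 1:s) for i in 1:n]
result = solve(System(f_square); start_system = :total_degree)
candidates = solutions(result)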
Complete families of commuting functions for coisotropic Hamiltonian actions Let G be an algebraic group over a field F of characteristic zero, with Lie algebra g=Lie(G). The dual space g^* equipped with the Kirillov bracket is a Poisson variety and each irreducible G-invariant subvariety X\subset g^* carries the induced Poisson structure. We prove that there is a family of algebraically independent polynomial functions {f_1,...f_l} on X, which pairwise commute with respect to the Poisson bracket and such that l=(dim X+tr.deg F(X)^G)/2. We also discuss several applications of this result to complete integrability of Hamiltonian systems on symplectic Hamiltonian G-varieties of corank zero and 2. INTRODUCTION In this paper, we study Hamiltonian actions of algebraic groups on affine varieties focusing on the non-reductive case. The ground field F is assumed to be of characteristic zero, but not necessarily algebraically closed. Let us start with main definitions in the general algebraic setting. Let P be a Poisson algebra. Assume that P has no zero-divisors and tr.deg P < ∞. Let Der(P , Quot P ) stand for the set of all Quot P -valued derivations of the algebra P regarded just as a commutative associative algebra. This is a linear space over Quot P of dimension tr.deg P . Each ϕ ∈ P gives rise to a derivation ad(ϕ), where ad(ϕ)·ψ = {ϕ, ψ} for all ψ ∈ P . Let V (P ) := ad(ϕ) | ϕ ∈ P be the subspace of Der(P , Quot P ) spanned by the inner derivations. Then dimV (P ) (the dimension over Quot P ) is said to be the rank of P , usually denoted by rk P . If the ground field F is algebraically closed and the algebra P is finitely generated, then Der(P , Quot P ) can be viewed as the space of rational vector fields on the affine algebraic variety Spec P . The inner derivations of P are then interpreted as the Hamiltonian vector fields. Next, set ω(ad(ϕ), ad(ψ)) := {ϕ, ψ}. Since ad(ϕ) = 0 for each ϕ ∈ ZP , ω is a nondegenerate skew-symmetric bilinear form on V (P ) over Quot P . Hence, in particular, rk P is even. It is not difficult to see that V (P ) and ω do not change if we pass to the localisation of P by a multiplicative subset of ZP . Definition 2. A Poisson algebra P is said to be symplectic, if V (P ) = Der(P , Quot P ), or, in other words, if rk P = tr.deg P . In what follows, we assume that ρ is injective and consider q as a Lie subalgebra of P . The Poisson subalgebra P (q) ⊂ P , generated by q, is called the Noether subalgebra. Let P be a symplectic algebra and A ⊂ P a Poisson subalgebra. Let U (A) ⊂ V (P ) be the subspace spanned over Quot P by the derivations ad(ϕ) with ϕ ∈ A. Definition 4. A Hamiltonian action q ֒→ P is said to be coisotropic if the subspace U (P (q)) is coisotropic with respect to ω. The main result of the paper is the following theorem. Theorem 1. For any coisotropic Hamiltonian action of a Lie algebra q on a symplectic algebra P , the subalgebra P (q) contains a Poisson-commutative subalgebra of transcendence degree 1 2 rk P . With a few preparations it follows from a more geometric statement. Let g = Lie G be the Lie algebra of a connected algebraic (or a Lie) group G, and S (g) be the symmetric algebra of g. Then S (g) is a Poisson algebra and the algebra g acts on it in the sense of Definition 3. The same holds for any quotient of S (g) by a G-invariant ideal I ✁ S (g) (which is automatically a Poisson ideal). 
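For later use, recall the explicit form of the Kirillov (Lie-Poisson) bracket on S(g), viewed as the algebra of polynomial functions on g^*; this is the standard formula and not a new statement of this paper. For f, g ∈ S(g) and ξ ∈ g^*,

{f, g}(ξ) = ⟨ξ, [d_ξ f, d_ξ g]⟩,   so that on linear functions   {x_i, x_j} = Σ_k c_{ij}^k x_k,

where x_1, ..., x_n is a basis of g with structure constants [x_i, x_j] = Σ_k c_{ij}^k x_k, and d_ξ f ∈ g denotes the differential of f at ξ under the identification (g^*)^* = g.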
In case I = 0, the existence of a Poisson-commutative subalgebra A ⊂ S (g) with tr.deg A = l(g * ) was conjectured by Mishchenko and Fomenko [13], and proved by Sadetov [15]. A clearer treatment of this result is given by Bolsinov [2]. Note that of course the image of a Poisson-commutative subalgebra A ⊂ S (g) remains Poisson-commutative in S (g)/I. However, transcendence degree may sink far below l(I). Our proof of Theorem 2 follows the same strategy as the proofs of Sadetov and Bolsinov for S (g). Note that in the general case our functions f 1 , . . . , f l(I) ∈ S (g)/I do not extend to Poisson-commuting functions in S (g). First we prove Theorem 2 in case of a reductive g, see Section 3. In the general case, we argue by induction on dim g, see Section 5. We remark that the number l(I) does not change under field extensions. Some applications of Theorems 1 and 2 to integrable Hamiltonian systems are discussed in Section 2. SYMPLECTIC ALGEBRAS AND HAMILTONIAN ACTIONS Consider a Poisson algebra P . Assume that P has no zero-divisors and that tr.deg P < ∞. For each subalgebra C ⊂ P , let C −1 P denote the localisation of P by the subset of all non-zero elements of C. Clearly C −1 P is a subset of the field Quot P . The Poisson structure uniquely extends from P to Quot P and for any multiplicative system S ⊂ P the localisation P S is a Poisson subalgebra of Quot P . In particular, this is true for C −1 P . If C ⊂ ZP , then C −1 P can be regarded as a Poisson algebra over the field QuotC. Definition 5. A Poisson algebra P is said to be separable if tr.deg ZP + rk P = tr.deg P . Roughly speaking, P is separable if generic symplectic leaves of the underlying Poisson affine variety X = Spec(P ⊗ F F) are separated by the "central" functions, elements of ZP ⊗ F F. If P is a separable Poisson algebra, then (ZP ) −1 P is a symplectic algebra over Quot ZP , see Definition 2. Example 6. Let W be a finite-dimensional vector space over F equipped with a nondegenerate skew-symmetric bilinear form ω. Then ω defines a Poisson bracket on the symmetric algebra S (W ) by the formula {x, y} := ω(x, y) for all x, y ∈ W . This Poisson algebra (S (W ), ω) is symplectic and V (S (W )) = Quot S (W ) ⊗ F W with the same (extended) form ω. The algebra S (W ) has a natural grading, with grading components being S k (W ), and {S k (W ), S l (W )} ⊂ S k+l−2 (W ). Hence q := F +W + S 2 (W ) is a Lie subalgebra and n := F +W is an ideal of q. Note that n is a Heisenberg algebra. The map q → Der n δ → ad(δ) |n is an epimorphism of Lie algebras with the kernel F and S 2 (W ) is mapped isomorphically onto the Lie algebra sp(W ) of the symplectic group Sp(W, ω). Note also that the centraliser of n in S (W ) coincides with F. Example 7. Let q be a (finite-dimensional) Lie algebra over F. Then S (q) is a Poisson algebra with the usual Kirillov-Kostant bracket. The corresponding symplectic vector space V (S (q)) can be constructed as follows. Set K := Quot S (q). Recall that K is also a Poisson algebra. Set further ω(ξ, η) := [ξ, η] for all ξ, η ∈ q. Since ω(ξ, η) ∈ K, this formula defines a skew-symmetric bilinear form on a K-vector space V := q ⊗ F K. (In a basis of q, ω is just the structural matrix.) Then V (S (q)) = V /Ker ω. Let us say that q is separable if tr.deg ZS (q) = tr.deg ZK. A Lie algebra q is separable if and only if S (q) is separable. In that case (ZS (q)) −1 S (q) is a symplectic algebra over Quot ZS (q). The next two statements follow easily from the construction of (V (P ), ω) and Definition 2. 
Proposition 8. Let P be a symplectic algebra and A ⊂ P a Poisson subalgebra. Let U (A) ⊂ V (P ) be a subspace spanned over Quot P by the derivations ad(ϕ) with ϕ ∈ A. Then (1) dim Quot P U (A) = tr.deg A; Combining Propositions 8 and 9, we get that if {A, B} = 0, then tr.deg A +tr.deg B ≤ rk P . From now on assume that P is symplectic and that we have a Hamiltonian action of a Lie algebra q on P , see Definition 3. Set P q := {ϕ ∈ P | {q, ϕ} = 0} = Z P (P (q)). As above, tr.degP (q) + tr.degP q ≤ rk P . A Hamiltonian action is said to be separable if tr.degP (q) + tr.degP q = rk P . It is possible to characterise separable Hamiltonian coisotropic actions. Proof. Recall that for a separable action, the orthogonal complement of U (P (q)) coincides with U (P q ). Hence the action q ֒→ P is coisotropic if and only if U (P q ) ⊂ V (P ) is an isotropic subspace. According to Proposition 9, this condition is equivalent to the Poissoncommutativity of P q . Proposition 11. Theorem 1 follows from Theorem 2. Proof. The subalgebra P (q) ⊂ P is isomorphic to some Poisson quotient S (q)/I, where I ✁ S (g) is a Poisson ideal. In particular, I is G-invariant. Since P is a domain, the algebra S (q)/I is also a domain. By Theorem 2, S (q)/I contains a Poisson-commutative subalgebra Combining Definition 4 and Proposition 8, we see that rk P (q) = rk P − 2(rk P − tr.deg P (q)) = 2tr.deg P (q) − rk P and therefore l = 1 2 rk P . GEOMETRIC REALISATION AND APPLICATIONS Suppose that X is an irreducible affine variety defined over F. Let X (F) denote the set of its points over the algebraic closure of F. As usual F[X ] and F(X ) := Quot F[X ] stand for the algebras of regular and rational functions on X , respectively. Our convention is that All subvarieties of X , all differential forms on X , and all morphisms of X are supposed to be defined over F. Let G be a connected linear algebraic group over F with g = Lie G. An algebraic action of G on X gives rise to a representation of G (and of g) on F[X ]. Definition 12 (Geometric version of Definition 3). Suppose that Y is an affine variety Suppose that we have a Hamiltonian action G ×Y → Y . Then each function in µ * (S (g)) is called a Noether integral on Y . Their most important property is given by the Noether The kernel of µ * is a Poison ideal of S (g), say I, and therefore S (g)/I is a Poisson quotient of S (g). Let is a Poisson algebra and X is a Poisson variety. Set It follows from Rosenlicht's theorem, that be a Poisson commutative subalgebra. Take γ ∈ X (F) such that the orbit Gγ is of maximal possible dimension. The subspace Hence the dimension of this subspace is less than or equal to l(X ) and also tr. The simplest example of a symplectic variety is an even-dimensional vector space V equipped with a non-degenerate skew-symmetric bilinear form ω. Each Lagrangian decomposition V = V + ⊕V − gives us a complete family of linear functions on V , namely, one has to take a basis of V * + . Another familiar example is the cotangent bundle of a smooth irreducible affine variety Y , M = T * Y , equipped with the canonical symplectic structure. Suppose that M = T * X , where X is a G-variety. Then M possesses a canonical Ginvariant symplectic structure such that the action of G is Hamiltonian. If the action G × M → M is coisotropic, then X has an open G-orbit [5]. For reductive G one can say more. Suppose F is algebraically closed, G is reductive, and X is smooth. 
By a result of Knop [8,Sections 6&7], the action of G on T * X is a coisotropic if and only if a Borel subgroup B of G has on open orbit on X . Normal varieties having an open B-orbit are said to be spherical. It was known before that if X is spherical and X = G/H, where H is a reductive subgroup of G, then each G-invariant Hamiltonian system on T * X is integrable within the class of Noether integrals, see [5,11,7]. Here we lift the assumption that H is reductive. Smooth affine spherical varieties are classified (under mild technical constraints) in [9]. It would be interesting to study complete families on their cotangent bundles. By the same result of Knop [8], the action of G on T * X is of corank 2 if and only if tr.deg F(X ) B = 1, i.e., X has complexity 1. Theorem 4 provides also (hopefully) interesting completely integrable systems for these cotangent bundles. Other well-studied coisotropic actions on cotangent bundles are related to Gelfand pairs. Suppose that F = R and M = T * X , where X = G/K is a Riemannian homogeneous space. Then X is called commutative or the pair (G, K) is called a Gelfand pair if the action G × M → M is coisotropic. Gelfand pairs can be characterised by the following equivalent conditions. (i) The algebra D(X) G of G-invariant differential operators on X is commutative. (ii) The algebra of K-invariant measures on X with compact support is commutative with respect to convolution. (iii) The representation of G on L 2 (X ) has a simple spectrum. Theorem 3 and its corollary provide two more equivalent conditions. (iv) There is a complete family of Noether integrals on T * X . (v) Each G-invariant Hamiltonian system on M is completely integrable in the class of Noether integrals. According to [16], if G/K is a Gelfand pair and G = L ⋉ N is a Levi decomposition of G such that K ⊂ L, then R[n] L = R[n] K and n is at most two-step nilpotent. These conditions guarantee that the construction of a complete family on µ(M) would have at most three induction steps. Thus, one can hope for explicit formulas for our commuting families and applications to physical problems. Gelfand pairs are partly classified in [17,19] and completely in [20]. THE REDUCTIVE CASE In this section, G is a connected reductive algebraic group. Here one can apply a very powerful tool, the so called "argument shift method". It was used by Manakov [10], Mishchenko and Fomenko [12], and Bolsinov [1] in constructions of complete families on g * and coadjoint G-orbits. The reader is referred to [6, Chapter 4] for a thorough exposition and historical remarks. Let us briefly outline this method. Let r be the rank of g. Choose any set F 1 , . . ., F r of free generators of F[g * ] G . For any a ∈ g * , let F a denote the finite set Then {F a , F a } = 0, see e.g. [14,Sections 1.12,1.13]. Here we should mention that this fact is stated in [14] for F = C, but the proofs are valid over all fields of characteristic zero. Recall that the index of a Lie algebra q is the minimum of dimensions of stabilisers q ξ over all covectors ξ ∈ q * , i.e., ind q = min ξ∈q * dim q ξ . Note that ind q = tr.degF(q * ) q and that dim q − ind q is the rank of the Poisson algebra S (q) as defined in the Introduction. Then there is a ∈ g * such that the restriction of F a to the coadjoint orbit Gξ contains 1 2 dim(Gξ) algebraically independent functions if and only if ind g ξ = ind g. The proof of Theorem 2 in [1] uses only linear algebra and can be repeated for any algebraically closed field of characteristic zero. 
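The set F_a, whose explicit description has dropped out of the extracted text above, is presumably the standard Mishchenko-Fomenko "shift of argument" family; its usual definition is recalled here for orientation. For each generator F_i ∈ F[g*]^G and a fixed a ∈ g*, expand

$$F_i(\xi + \lambda a) \;=\; \sum_{k \ge 0} \lambda^{k}\, F_{i,a}^{(k)}(\xi), \qquad F_a \;:=\; \{\, F_{i,a}^{(k)} \;:\; 1 \le i \le r,\; k \ge 0 \,\},$$

and the classical fact {F_a, F_a} = 0 cited from [14] says precisely that all of these coefficient functions pairwise Poisson-commute on g*.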
We are going to use the result also for F = C. Proposition 15. If g is reductive and ξ ∈ g * , then ind g ξ = ind g. The statement of Proposition 15 is known as Elashvili's conjecture. For the classical Lie algebras, it is proved in [18] under the assumption that char F is good for g. W. de Graaf used a computer program to verify the conjecture for the exceptional Lie algebras, see [4]. An almost conceptual proof of Elashvili's conjecture is given in [3]. (The authors still have to rely on computer calculations for a few orbits.) LetV a,ξ ⊂ T * ξ (g * ) be the F-linear span of the differentials {d ξ F | F ∈ F a } and let V a,ξ be the restriction ofV a,ξ to T ξ (Gξ) = gξ. Since the orbit Gξ is a symplectic variety and the subspace V a,ξ is isotropic, we get 2 dimV a,ξ ≤ dim(Gξ). The restriction of F a to Gξ contains a complete family if and only if there is a ′ ∈ Ga such that 2 dimV a ′ ,ξ = dim Gξ. Combining Propositions 14 and 15, we obtain the following assertion. Proof of Theorem 2 in the reductive case. Let I be a prime Poisson ideal of S (g) and X (F) a closed subvariety of (g ⊗ F F) * defined by I. Choose a set of homogeneous generators {F 1 , . . . , F r } ⊂ F[g * ] G . LetF i denote the restriction of F i to X . Each fibre of the quotient morphism X (F) → X (F)//G(F) contains finitely many G-orbits. Hence for generic ξ ∈ X (F) the differentials {d ξFi | i = 1, . . . , r} generate a subspace of dimension m := dim X − dim(Gξ). According to Proposition 16, there is an element a ∈ (g ⊗ F F) * such that the restriction of F a to G(F)ξ contains a complete family, i.e., 2 dimV a,ξ = dim(G(F)ξ). There is an open subset of such elements. In particular, we may (and will) assume that a ∈ g * . Then F a is a subset of S (g). Each differential d ξFi is zero on gξ. Therefore and the restriction of F a to X contains a complete family. AUXILIARY RESULTS In this section, we collect several facts concerning structural properties of algebraic Lie algebras. They will be used in the proof of the main theorem. Recall that a (2n+1)-dimensional Heisenberg Lie algebra over F is a Lie algebra h with a basis {x 1 , . . ., x n , y 1 , . . . , y n , z} such that n ≥ 1, [x i , x j ] = [y i , y j ] = 0, [h, z] = 0, and [x i , y j ] = δ i j z. Recall also that a Lie ideal a ✁ q is said to be a characteristic ideal if it is stable under all automorphisms of the Lie algebra q. Lemma 17. Suppose that n is a nilpotent Lie algebra such that each commutative characteristic ideal of n is one-dimensional. Then n is a Heisenberg algebra. Proof. Let z be the centre of n. Then dim z = 1. Consider the upper central series of n z = n 0 ⊂ n 1 ⊂ n 2 ⊂ · · · ⊂ n k−1 ⊂ n k = n, i.e., n i /n i−1 is the centre of n/n i−1 . The centre of n 1 is a commutative characteristic ideal of n. Hence, it is one-dimensional and coincides with z. Therefore n 1 is a Heisenberg algebra. Let z n (n 1 ) be the centraliser of n 1 in n. Clearly, z n (n 1 ) is an ideal in n and n 1 ∩ z n (n 1 ) = z. We claim that n = n 1 + z n (n 1 ). Indeed, let ξ ∈ n. Then [ξ, n 1 ] ⊂ n 0 and there is an element Let z 0 be the centre of z n (n 1 )/z. Since n/z = (n 1 /z) ⊕ (z n (n 1 )/z) is the direct sum of two ideals, z 0 lies in the centre of n/z. Thus, z 0 ⊂ (n 1 /z) and z 0 = 0. Since z n (n 1 )/z is a nilpotent Lie algebra, we have z n (n 1 )/z = 0, and n = n 1 is a Heisenberg algebra. Let N be the unipotent radical of an affine algebraic group G. Set n := Lie N. For any action P ×Y → Y let Y /P stand for the set of P-orbits on Y . 
It remains to prove that µ * is a homomorphism of the Poisson algebras S (g/n) and Let γ ∈ S α . The identification S (g/n) ∼ = S (l) ⊂ S (g) gives us that The last step is to prove that {µ * ( f 1 ), µ * ( f 2 )}(Nγ) = { f 1 , f 2 }(γ). It is well-known that Gγ is a symplectic leaf of Y α and g * . Also Lγ is a symplectic leaf of S α . We have T γ (Gγ) = T γ (Lγ) ⊕ T γ (Nγ), where T γ (Lγ) = lγ and T γ (Nγ) = nγ are orthogonal, additionally T γ (Lγ) ⊂ T γ S α . Let F i be the restriction of µ * ( f i ), which is regarded now as an N-invariant function on Y α , to Gγ. Since Gγ is a symplectic leaf of g * , we have Clearly, the functions F 1 and Corollary. In the setting of Lemma 18, we have Consider now the general case. The Galois group Gal F (F) of the field extension F ⊂ F acts on (F[X ][1/z]) N and on S (g/n) ⊗ F F[z, 1/z]. Taking its fixed points on both sides, we see that the statement holds. Remark 19. From Lemma 18 one can deduce that F(g * ) G = F((g/n) * ) G/N ⊗ F F(z * ). In particular, in this case F(g * ) G is a rational field. Let H ✁ N be a connected commutative normal subgroup of G with Lie H = h. Lemma 20. Fix α ∈ h * and let Y α be the preimage of α under the natural restriction g * → h * . Then Y α /H = Spec F[Y α ] H and the restriction map π α : Y α → (g α ) * defines an isomorphism Y α /H ∼ = (g α /h) * × {α}. Since h is a commutative ideal of g, we have h ⊗ F K ✁ĝ. Moreover,ĥ is also an ideal ofĝ. The main object of our interest is the quotient Lie algebra g :=ĝ/ĥ. Another way to define this Lie algebra is to say that g := {ξ ∈ g ⊗ h K | [ξ, h](x h ) = 0}. Then the algebra A carries a natural Poisson structure induced from F[X ]. Lemma 21. Suppose that F = F. Then A is a Poisson quotient of S ( g). Proof. The elements of A and S ( g) are linear combinations of rational functions on x h with coefficients from F[X ] H or S (g), respectively. Thus, it suffices to verify the claim at generic Fix a vector space decomposition g = g α ⊕ m and let s : {α} × (g α /h) * → Ann(m) ⊂ Y α be the corresponding section of π α . Then S α := Im s is a closed subset of Y α and by Lemma 20, At the same time,ĝ(α) where α ∈ g * α is a linear function such that α |h = α. Hence ( g(α)) * = (g α /h) * × F α and S α ∩ X is a closed subset of ( g(α)) * . Therefore A is a quotient of S ( g). Since the Poisson structure on A is induced from F[X ] and X is a Poisson subvariety of g * , it is indeed a Poisson quotient. Remark 22. Informally speaking, A is the algebra of functions on the set X of all rational morphisms ψ : x h → X such that ψ(α) ∈ (X ∩ S α ). Here X is also a set of the H-invariant rational morphisms ψ ′ : x h → X such that ψ ′ (α) ∈ (X ∩Y α ). If F = F, then it is better to work with ideals. Let I ✁ S (g) be a G-invariant prime ideal. Set I 0 = I ∩ S (h) and let x h ⊂ h * be the subvariety defined by I 0 . Now K = Quot S (h)/I 0 and Lemma 23. Let F be any field of characteristic zero. Then P is a Poisson quotient of S ( g). Proof. In case F = F, g coincides with the quotientĝ/ĥ, whereĝ andĥ are defined by Formulas (1) and (2). In the general case, we have g ⊗ F F = g ⊗ F F/ h ⊗ F F. By Lemma 21, P ⊗ F F is a Poisson quotient of S ( g) ⊗ F F. The Galois group Gal F (F) of the field extension F ⊂ F acts on both these Poisson algebras. By taking fixed points of Gal F (F), we conclude that P is a Poisson quotient of S ( g). Set X := Spec P . Then X ⊂ g * is Poisson subvariety defined over K. Let us compute l( X). In order to simplify notation, we do it in case F = F. 
(The numbers l(X ) and l( X) do not change under field extensions.) Let k be the dimension of a generic H-orbit on X . Note that k is also the dimension of a generic G-orbit in x h . Since h is an algebraic Lie algebra consisting of nilpotent elements, we have F(X ) H = Quot F[X ] H . Therefore generic H-orbits on X are separated by regular H-invariants and tr.deg P h = n − k. Hence tr.deg P = n − k − dim x h . Next, K( X) = F(X ) H ⊗ K K. Recall that X is a Poisson subvariety of g * . In particular, the Poisson centre ZK( X ) of K( X) coincides with K( X) g . Because h is commutative, h ⊂ F[X ] H . Therefore the Poisson centre ZF(X ) H is equal to the Poisson centraliser Clearly R contains both F[x h ] and ZF(X ) = F(X ) g . For generic γ ∈ X we have dim(h |gγ ) = dim(hγ) = k. Since all functions in F(X ) g are constant on G-orbits, the subspace of T * γ X generated by d γ F[x h ] and d γ (F(X ) g ) has dimension d +k. Hence, tr.deg R ≥ d +k. By a simple dimension reason tr.deg R = d + k. Since ZK( X ) = ZF(X ) H ⊗ F K, we get tr.deg K( It remains to show that the dimension of g over F(x h ) is less than dim F g. If this is not the case, then g = g ⊗ F K andĥ = 0 (hereĥ is the same as in (2)). From the first equality we get [g, h] ⊂ I 0 , hence [g, h] = 0; and by the second one, dim h = 1. Together these conclusions contradict the initial assumptions on h. Applying the inductive hypothesis to X, we construct l(X ) − dimx h functionsf i ∈ S ( g) such that their restrictions give us a complete commutative family on X. After multiplying them by a suitable element of K, we may assume thatf i ∈ S (g). The remaining dim x h functions we get from S (h). • Suppose now that g is a non-algebraic Lie algebra. If the nilpotent radical n ✁ g contains a characterisitic ideal h such that either dim h > 1 or [g, h] = 0, then the above "commutative" part of the proof (decreasing of dim g) goes without any alteration. If n = 0, then g is reductive and algebraic. It remains to consider the "Heisenberg" case.
Physically and chemically treating sulphurous water in western Iraq

The agricultural sector in Iraq is one of the country's most water-consuming sectors. Recent shortages of fresh water have made it necessary to utilise treated water, whether sewage water, sulphurous water, or industrial water, for such purposes in order to preserve the available water resources. A laboratory experiment was therefore conducted to study the effects of different physical and chemical treatments on sulphurous water intended for irrigation. The chemical treatments used were bentonite, nitric acid, activated carbon, and manganese oxide, while the physical treatment examined was ventilation of various durations. The results showed that all treatments led to a reduction in the concentrations of iron, hydrogen sulphide, and sodium in the treated water, and that all treatment methods and concentrations reduced the SAR of the treated water. Some treatments, namely bentonite and nitric acid, led to an increase in electrical conductivity, while the other treatments lowered it. All treatments and concentrations increased the concentration of magnesium in the water compared to that in the untreated water, which is considered a positive indicator.

Introduction

Water is one of the main resources that control the distribution of the population and of human economic activities, especially with regard to drinking water and irrigation. Iraq relies mainly on surface water, rainwater, and groundwater to provide its water needs [1], yet the volume of water received from surface sources has decreased by half in recent years due to a large number of upstream projects, international political moves, and natural conditions. Most water in Iraq thus comes from rain [2], and owing to the country's desert and semi-desert climate, its rains are characterised by scarcity and fluctuation. Beyond the rivers and their basins [3], most of the available groundwater is sulphurous and requires treatment before use, and many physical, chemical, and biological methods have been used over the years to remove hydrogen sulphide from water [4,5]. Assessing sulphurous water treatment for irrigation requires measuring several important variables to determine whether they fall within acceptable limits. The most important is the sodium adsorption ratio (SAR), an irrigation water quality parameter used in the management of sodium-affected soils [6]. It acts as an indicator of the suitability of water for irrigation based on the concentrations of the main alkali and alkaline-earth cations present in the water. The formula for calculating the sodium adsorption ratio is given in [7] and reproduced below. The concentrations of dissolved ions in the treated sulphurous water (sodium, calcium, and magnesium) were measured in this work in order to calculate the SAR value: the higher the SAR, the more hazardous the water, with acceptable limits being 0 < SAR < 10 [8]. Electrical conductivity, hydrogen sulphide levels, bicarbonate concentration, and iron content were also measured as important factors. The aim of the research was thus to examine several methods for treating sulphurous water chemically and physically and to test whether the resulting treated water is suitable for irrigation.
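The SAR formula referred to above, in its conventional form with cation concentrations expressed in milliequivalents per litre, is:

$$\mathrm{SAR} \;=\; \frac{[\mathrm{Na}^{+}]}{\sqrt{\left([\mathrm{Ca}^{2+}] + [\mathrm{Mg}^{2+}]\right)/2}}$$

The cited source [7] may present it with minor notational differences, but this is the standard form assumed in the calculations discussed below.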
Materials and Methods Sulphurous water was brought from an area of 90 km west of the city of Ramadi; this artesian well water flows with a discharge of 3 cubic metres per minute, and 100 litres were brought to the laboratory. Samples were also taken from the Euphrates and from puncture water from a trocar near the site of the experiments, for the purposes of examining the chemical properties of sulphurous water in comparison to other sources, as shown in Table.1 Chemical processes used in the experiments included the addition of bentonite clay, activated carbon , manganese oxide (MnO), and nitric acid (HNO3); one physical process (ventilation ) was used in the experiment at various durations. Chemical Processes : Bentonite clay Bentonite clay minerals were bought from a tile and ceramic factory in the Ramadi region of the Anbar Governorate, and bentonite clay was then added to the sulphurous water (1,780 mg/litre of sulphur) in 1, 1.5, 2, 2.5, and 3 g increments to 1-litre containers of sulphurous water. The containers were shaken for an hour at a speed of 6 cm per second using a shaker device, then filtered using filter paper, with the water retained for chemical analysis. Activated carbon Activated carbon was used in the treatment of sulphurous water in quantities of 0.5, 1, 1.5, 2, and 2.5 g to 1-litre containers of sulphurous water (1,780 mg/litre of sulphur). After shaking for one hour at a speed of 6 cm per second, the samples were filtered using filter paper, with the water was saved for the chemical analysis. Manganese oxide (MnO). Manganese oxide was used to treat sulphurous water (1,780 mg/litre of sulphur) at concentrations of 0.1, 0.2, 0.3, 0.4, and 0.5 g in a 1-litre container. The solutions were shaken for one hour at 6 cm per second using a vibrating device, and then filtered through filter paper, with the water saved for chemical analysis. Nitric acid was used in the treatment of sulphurous water (1,780 mg/litre of sulphur) by adding 1, 2, 3, 4, and 5 ml of nitric acid to 1-litre containers of sulphur water. After that, the solutions were shaken for one hour using the shaker device, and then the water was filtered using filter paper and saved for chemical analysis. Physical process 5.1. Durations of Ventilation Ventilation was used in the treatment of sulphur water (1,780 mg/litre of sulphur) by applying an air injection device to water samples for 1, 2, 3, 4, and 5 hours. The treated water was preserved for chemical analysis. Discussion and Results Several important and specific properties and limits of irrigation water specifications were examined: Figure 1 shows the effect of treatment with different concentrations of various treatment materials (activated carbon, bentonite, manganese oxide, nitric acid, and ventilation ) on electrical conductivity. The results show that the electrical conductivity of untreated sulphurous water is 3.8 dS.m-1. Some treatments using bentonite led to an increase in electrical conductivity: adding concentrations of 2, 2.5, or 3 g of bentonite to a litre of sulphur water led to conductivity of 3.82 dS.m-1 on average, a 1% improvement compared to untreated sulphurous water. The addition of concentrations of 1.0 or 1.5 g of bentonite to a litre of sulphurous water led to conductivities of 3.72 and 3.77 dS.m-1, respectively, however, decreases of 2 and 1% as compared to untreated sulphurous water. 
Ventilation treatment also led to a decrease in the electrical conductivity, with ventilation durations of 1, 2, 3, 4, or 5 hours leading to decreases in electrical conductivity of 3%, 4%, 9%, 10%, and 10% as compared to untreated sulphurous water (3.69, 3.46, 3.44 and 3.44 dS.m-1, respectively). Treatment with activated carbon also led to a decrease in electrical conductivity, with the addition of 0.5, 1.0, 1.5, 2.0, or 2.5 grams of activated carbon per litre of sulphur water reducing the electrical conductivity to 3.69, 3.64, 3.46, 3.44, and 3.44 dS.m-1, respectively, for 3%, 4%, 9%, 10%, and 10% reductions in conductivity compared to untreated sulphurous water. Treatment using manganese oxide further led to a decrease in the electrical conductivity: adding 0.1, 0.2, 0.3, 0.4, or 0.5 g concentrations reduced electrical conductivity to 3.55 dS.m-1 on average, a decrease of 7% compared to untreated sulphurous water. Treatment using nitric acid also led to a decrease in electrical conductivity, with adding 1, 2, or 3 ml of nitric acid to litres of sulphur water reducing the electrical conductivity to 3.39, 3.52, and 3.67 dS.m-1, respectively, giving reduction rates of 11%, 7%, and 3%, respectively, as compared to the untreated sulphurous water. The addition of 4 or 5 ml of nitric acid to sulphur water increased the electrical conductivity to 3.85 and 3.92 dS.m-1, respectively, however, showing increases of 1 and 3% as compared to untreated sulphur water. The treatments that led to the greatest increase in electrical conductivity were the addition of bentonite at concentrations of 2.0, 2.5, or 3.0 g/litre, and treatment with nitric acid at concentrations of 4 or 5 ml/litre of sulphurous water. This can be attributed to the fact that these treatment processes encourage the formation of complex salts through chemical reactions in the treated water. All other treatments led to a reduction in electrical conductivity, perhaps due to adsorption processes that occurs with the addition of processors such as ions involved in the synthesis of some salts [9], or to the deposition processes of ions involved in the synthesis of such salts [10]. Figure 2. shows the effect of treatment with different concentrations of treatment materials (activated carbon, bentonite, manganese oxide, nitric acid, and ventilation ) on hydrogen sulphide concentrations in sulphurous water. The results show that the hydrogen sulphide concentration of untreated sulphur water is 234.6 mg per litre. Treatment with nitric acid led to a decrease in the concentration of hydrogen sulphide, with the highest concentration of nitric acid (5 ml per litre of sulphurous water) reducing the concentration of hydrogen sulphide to 24 mg per litre, an 88% reduction compared to untreated sulphurous water. Treatment with manganese oxide also led to a decrease in the concentration of hydrogen sulphide; adding the highest concentration of 0.5 mg. per litre reduced the concentration of hydrogen sulphide to 62.4 mg per litre, a 75% decrease. Bentonite treatment also produced a decrease in the concentration of hydrogen sulphide, as adding the highest concentration of 3.0 g of bentonite to a litre of sulphur water reduced the concentration of hydrogen sulphide to 62 mg per litre, a decrease of 74% as compared to untreated sulphurous water. 
Activated carbon similarly contributed to a decrease in the concentrations of hydrogen sulphide, with the highest concentration of 2.5 g activated carbon per litre of sulphurous water reducing the concentration of hydrogen sulphide to 96 mg per litre, a decrease of 59%. Ventilation also decreased the concentration of hydrogen sulphide: the highest ventilation time (5 hours) reduced the concentration of hydrogen sulphide to 144 mg per litre, a decrease of 40% compared to untreated sulphurous water. All treatment methods led to a decrease in the concentration of hydrogen sulphide in the treated water as compared to the untreated sulphur water [11,12]. The most significant reductions in the concentration of hydrogen sulphide occurred from treatment with nitric acid, followed by manganese oxide, bentonite, activated carbon, and ventilation. Figure 3 shows the effect of treatment with different concentrations of different treatment materials (activated carbon, bentonite, manganese oxide, nitric acid, and ventilation ) on sodium concentrations. The results show that the sodium concentration in untreated sulphurous water is 9.0 mmol per litre. Treatment with activated carbon decreased the sodium concentration: adding concentrations of 0.5, 1.0, 1.5, 2.0, or 2.5 g of activated carbon per litre of sulphur water reduced the sodium concentrations to 5.2. 5.4, 5.5, 5.6, or 5.7 mmol per litre, respectively . The highest percentage of decrease was seen with a concentration of 0.5 g of activated carbon per litre of sulphur water, a 42% decrease as compared to untreated sulphurous water. Adding 0.1 g manganese oxide to a litre of sulphurous water reduced the sodium concentration to 5.2 mmol per litre, a 42% reduction, while the other concentrations of MnO (0.2, 0.3, 0.4, or 0.5 g) reduced the sodium concentration to 6.6 mmol per litre, a 27% decrease compared to untreated sulphurous water. Ventilation treatment also decreased the sodium concentrations. Ventilation for 1, 2, 3, 4, or 5 hours reduced the sodium concentration to 6.5 mmol per litre, a 28% reduction, on average as compared to untreated sulphurous water. Treatment with bentonite also led to decreases in sodium concentrations. The addition of bentonite reduced the sodium concentration to 6.9 mmol per litre, a 42% reduction, on average as compared to untreated sulphurous water. Treatment using nitric acid similarly reduced the sodium concentration to 7 mmol per litre on average, a 21% reduction compared to untreated sulphurous water. All treatment methods and concentrations resulted in a decrease in the sodium concentration in treated water as compared to the concentration of sodium in untreated sulphurous water. This corresponds with the findings of [10,13]. The order of the best sodium concentration reductions was Activated carbon> Manganese oxide> Ventilation> Bentonite> Nitric acid. Figure 4 shows the effects of treatment with different concentrations of different treatment materials (activated carbon, bentonite, manganese oxide, nitric acid, and ventilation ) on calcium concentrations. The results show that the calcium concentration in untreated sulphur water is 8.5 mmol per litre. Treatment using manganese oxide resulted in a decrease in calcium concentration. The addition of 0.1 g manganese oxide to a litre of sulphurous water reduced the calcium concentration to 8.3 mmol per litre, a 2% reduction compared to untreated sulphurous water, while other concentrations reduced the calcium concentration to 5.6 mmol per litre, a 34% decrease. 
Most treatment with nitric acid also reduced the concentration of calcium to 7.6 mmol per litre on average, an 11% reduction compared to untreated sulphurous water. However, for treatment with only 1 ml of nitric acid added to a litre of water, no effect on calcium concentration could be observed. Ventilation treatment also reduced calcium concentration to 7.8 mmol litre, an 8% average reduction compared to untreated sulphurous water. Treatment with activated carbon decreased the calcium concentration in some cases: concentrations of 0.5 or 1.0 g activated carbon per litre of sulphurous water reduced the calcium concentration to 8.3 mmol per litre, while other concentrations investigated did not affect the calcium concentration in any way. Adding bentonite led to an increase in the concentration of calcium in some cases. Adding 2.0, 2.5, or 3.0 g of bentonite to a litre of sulphur water resulted in an increase in the concentration of calcium to 8.6 mmol per litre, an increase of 1% compared to untreated sulphurous water. However, adding 1.0 or 1.5 g of bentonite to a litre of sulphurous water had no effect on calcium concentration. The generally low calcium concentration and the effects of most treatment methods are confirmed by other researchers, including [12]. Arranging treatments in terms of the most effective at reducing calcium concentration gives the sequence: manganese oxide> nitric acid> ventilation> activated carbon. increases of 24%, 59%, 62%, 64%, and 64%, respectively, as compared to untreated sulphurous water. Treatment with nitric acid also led to increases in the concentration of magnesium. Adding 1, 2, 3, 4, or 5 ml of nitric acid to a litre of sulphurous water led to concentrations of magnesium of 5.4, 6.4, 6.7, 7.1, and 7.6 mmol per litre, increases of 2%, 20%, 26%, 34% and 43%, respectively, compared to untreated sulphur water. Treatment with bentonite further led to an increase in the concentration of magnesium of 6.8 mmol per litre on average, a 28% increase as compared to untreated sulphurous water. Adding activated carbon also led to an increase in the concentration of magnesium: all tested concentrations of activated carbon increased the magnesium concentration to 6.8 mmol per litre, an increase of 28% as compared to untreated sulphurous water. Ventilation treatment also led to an increase in the concentration of magnesium, with 6.3 mmol per seen for 1-and 2-hour ventilation periods, an increase of 19% on average, and 5.8 mmol per litre for other durations, a 10% increase as compared to untreated water. All treatment methods and concentrations led to an increase in the concentration of magnesium in treated water as compared to the concentration in untreated sulphurous water, in agreement with both [14,15]. The most significant increase in the concentration of magnesium was found with the addition of manganese oxide, followed by nitric acid, bentonite, activated carbon, then ventilation. Figure 6 shows the effect of treatment with different concentrations of different treatment materials (activated carbon, bentonite, manganese oxide, nitric acid, and ventilation ) on the sodium adsorption ratio. The results show that the sodium adsorption ratio (SAR) in untreated sulphurous water is 2.43 mmol per litre. Treatment with nitric acid resulted in a decrease in the SAR value, reducing the SAR to 1.85 on average, a 24% reduction as compared to untreated sulphurous water. 
Bentonite additions also led to a decrease in the value of SAR, to 1.76 on average, a 28% reduction compared to untreated sulphurous water. Treatment with ventilation also led to decreases in the SAR value, with ventilation for periods of 1, 2, 3, 4, or 5 hours decreasing SAR to 1.66, 1.74, 1.78, 1.78, and 1.78, respectively, decreases of 32%, 28%, 27%, 27%, and 27% as compared to untreated sulphurous water. Manganese oxide also led to a decrease in the SAR value, with 0.1 g per litre reducing the SAR to 1.35, a 44% decrease as compared to untreated sulphurous water, and the additions of 0.2, 0.3, 0.4, or 0.5 g per litre reducing the SAR to 1.75 on average, a decrease of 28%. Adding carbon also led to a decrease in SAR value, to 1.42 on average, a decrease of 41% as compared to the untreated sulphurous water. All treatment methods and concentrations resulted in a decrease in the value of SAR in treated water as compared to the value of SAR in untreated sulphurous water. The most effective methods for reducing the value of the SAR are, in order, activated carbon> manganese oxide> bentonite> ventilation> nitric acid. Additionally, in all cases, the value of SAR <10, meaning that the treated water is suitable for irrigation for most land types [16]. Figure 7 shows the effects of treatment with different concentrations of different treatment materials (activated carbon, bentonite, manganese oxide, nitric acid and ventilation on bicarbonate concentration. The laboratory results showed that the concentration of bicarbonate in untreated sulphurous water is 11.7 mmol per litre. Treatment with nitric acid led to a decrease in bicarbonate concentration. Adding 1, 2, 3, 4, or 5 ml of nitric acid to a litre of sulphurous water reduced the concentrations of bicarbonate to 9.5, 9.7, 10.3, 11.1 and 11.4 mmol per lire, respectively, decreases of 24%, 22%, 12%, 5%, and 3% as compared to untreated sulphur water; thus, increasing the concentration of nitric acid has less effect on reducing the concentration of bicarbonate. Treatment with activated carbon also led to a decrease in the concentration of bicarbonate. Adding 0.5, 1.0, 1.5, 2.0, or 2.5 g activated carbon per litre of sulphur water reduced the concentrations of bicarbonate to 11.0, 9.5, 9.1, 9.1, and 9.0 mmol per litre, respectively, reductions of 6%, 19%, 22%, 22%, and 23% as compared to untreated sulphurous water; thus, increasing the activated carbon concentration leads to a decrease in the concentration of bicarbonate. Ventilation also led to a reduction in the concentration of bicarbonate, to 8.5 mmol per litre on average, a reduction rate of 27% as compared to untreated sulphurous water. Treatment using manganese oxide also led to a decrease in the concentration of bicarbonate, to 8.7 mmol on average, a reduction rate of 26%. A further decrease in bicarbonate concentrations was seen with bentonite: adding 1 g of bentonite to a litre of sulphurous water reduced the bicarbonate concentration to 9.0 mmol per litre, a reduction rate of 23% as compared to untreated sulphurous water. The addition of 1.5, 2.0, 2.5, or 3.0 g of bentonite per litre of sulphurous water reduced the concentration of bicarbonate to 8 mmol per litre on average, a reduction of 32% as compared to untreated sulphurous water. All treatment methods and concentrations led to a decrease in the concentration of bicarbonate in the treated water as compared to the concentration of bicarbonate in untreated sulphurous water [17]. 
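Before moving on, a quick arithmetic check on the SAR figure reported for the untreated water is possible from the measured cation concentrations. The short script below is an illustrative sketch only: the untreated magnesium concentration (about 5.3 mmol per litre) is not stated explicitly in the text and is back-calculated here from the reported percentage increases, and the divalent ions are converted to milliequivalents before applying the formula given in the introduction.

```python
import math

# Untreated sulphurous water, values as reported in the text (mmol per litre).
# The untreated magnesium value is not stated explicitly; ~5.3 mmol/L is
# back-calculated here from the reported percentage increases (an assumption).
na_mmol = 9.0   # Na+  (monovalent: mmol/L equals meq/L)
ca_mmol = 8.5   # Ca2+ (divalent:  meq/L = 2 x mmol/L)
mg_mmol = 5.3   # Mg2+ (divalent:  meq/L = 2 x mmol/L)

def sar(na_meq, ca_meq, mg_meq):
    """Standard sodium adsorption ratio; concentrations in meq/L."""
    return na_meq / math.sqrt((ca_meq + mg_meq) / 2.0)

print(round(sar(na_mmol, 2 * ca_mmol, 2 * mg_mmol), 2))
# Prints 2.42, in line with the reported value of 2.43 for the untreated water.
```

The result, about 2.42, agrees with the reported untreated SAR of 2.43 to within rounding, which suggests that the paper's SAR values were indeed computed with concentrations in milliequivalents.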
Across the bicarbonate results, the most influential treatment was bentonite, followed by ventilation, manganese oxide, activated carbon, and nitric acid. Turning to iron, the concentration in sulphurous water treated with ventilation was 0.03 mg per litre on average across all ventilation periods. Both before and after most treatments, the sulphurous water contained less iron than the critical concentration for irrigation use; the limit for iron in irrigation water is 0.5 mg per litre, according to [19,20,21]. Figure 8 shows the effects of the different treatments on iron concentration, with only the water treated with nitric acid exceeding the recommended limit.

Conclusion

In conclusion, the treatment materials examined, namely activated carbon, bentonite, nitric acid, and manganese oxide at their various concentrations, were all effective in treating sulphurous water and removing hydrogen sulphide. Treatment with nitric acid removed the highest percentage of hydrogen sulphide, producing a decrease of 88% compared with the untreated water. The best treatment for removing hydrogen sulphide was therefore nitric acid, followed by manganese oxide and then bentonite.
Can Deep CNNs Avoid Infinite Regress/Circularity in Content Constitution? The representations of deep convolutional neural networks (CNNs) are formed from generalizing similarities and abstracting from differences in the manner of the empiricist theory of abstraction (Buckner, Synthese 195:5339–5372, 2018). The empiricist theory of abstraction is well understood to entail infinite regress and circularity in content constitution (Husserl, Logical Investigations. Routledge, 2001). This paper argues these entailments hold a fortiori for deep CNNs. Two theses result: deep CNNs require supplementation by Quine’s “apparatus of identity and quantification” in order to (1) achieve concepts, and (2) represent objects, as opposed to “half-entities” corresponding to similarity amalgams (Quine, Quintessence, Cambridge, 2004, p. 107). Similarity amalgams are also called “approximate meaning[s]” (Marcus & Davis, Rebooting AI, Pantheon, 2019, p. 132). Although Husserl inferred the “complete abandonment of the empiricist theory of abstraction” (a fortiori deep CNNs) due to the infinite regress and circularity arguments examined in this paper, I argue that the statistical learning of deep CNNs may be incorporated into a Fodorian hybrid account that supports Quine’s “sortal predicates, negation, plurals, identity, pronouns, and quantifiers” which are representationally necessary to overcome the regress/circularity in content constitution and achieve objective (as opposed to similarity-subjective) representation (Burge, Origins of Objectivity. Oxford, 2010, p. 238). I base myself initially on Yoshimi’s (New Frontiers in Psychology, 2011) attempt to explain Husserlian phenomenology with neural networks but depart from him due to the arguments and consequently propose a two-system view which converges with Weiskopf’s proposal (“Observational Concepts.” The Conceptual Mind. MIT, 2015. 223–248). In this paper, I argue that deep convolutional neural networks (CNNs) necessarily result in infinite regress and circularity in content constitution if CNNs employ the empiricist theory of abstraction, which results in a similarity semantics.Since the empiricist theory of abstraction has been shown to be employed by deep CNNs (Buckner, 2018), it follows that content constitution in CNNs cannot avoid the infinite regress and circularity mentioned, unless supplemented by (what Quine called) 'the apparatus of identity and quantification,' which allows for an identity semantics (as opposed to a similarity semantics).The entailment, its consequences, and the remedy will be the subject of this essay. Infinite Regress & Circularity in Deep CNNs via the Empiricist Theory of Abstraction In this section, I'll outline the empiricist theory of abstraction and my relation to it.I'll then present the arguments against the theory, with reference to the type of encoding found in deep CNNs.The next section discusses deep CNNs in detail and how the arguments apply. The empiricist theory of abstraction is the idea that general representations, i.e. 
concepts, might be constituted, with regard to their contents, by noticing sensed similarities among multiple objects and subtracting from their differences. Another way to describe the theory, with a view to the objects that correspond to the concepts thus generated, is that "the empiricist theory of abstraction… [i]s the theory… that abstract objects arise by our directing attention to some aspects of what we experience and overlooking others: the retained features constitute the abstract object…" (Simons, 1995, p. 107). Consequently, the abstract object (e.g. square, triangle, cat, dog, table, chair, red, green) will be a reflection of amalgamated similarities from sense-experience, with differences subtracted. The object corresponding to a quasi-concept generated in this fashion would be, as Quine would have it, a "half-entity," corresponding, for Quine's behavioristic scruples, to a mass noun, or (for us) to an amalgamated content similarity (Quine, 2004, p. 107). To make the object a 'full entity' (an objectively meant entity), it would need to correspond representationally to a system instantiating "the apparatus of identity and quantification" (ibid.). For Quine, this was a behavioristically interpreted natural language, with its symbolic predicates and quantifiers. For me, this is a representational theory of the mind, involving the panoply of traditional computational elements and operations: "sortal predicates, negation, plurals, identity, pronouns, and quantifiers" (Burge, 2010, p. 238). But now, since Quine's criteria for this assumption have been shown to be satisfied by children prior to the mastery of a natural language (Carey, 2009), it would seem to follow that what corresponds to a 'full entity' (as opposed to a 'half-entity') is a content identity (as opposed to a content similarity) in a language of thought (LOT). Section III discusses how similarity contents might be compatible with LOT.

How does the empiricist theory of abstraction work, and why are deep CNNs committed to it? The general idea of how the theory works is this: one can acquire the concept RED and the concept TRIANGLE by being minimally exposed to three objects: a red square, a red triangle, and a green triangle. One thereby notices that the sense experience of red is abstractable from the shape, just as the shape of a triangle is abstractable from its color. We thus have a rudimentary picture of how general representations, applicable to many objects, might be abstracted from sense-experience, this being the primary alternative to the doctrine of innate ideas. Deep CNNs are committed to this picture of how the mind arrives at general representations according to Cameron Buckner: "The classical empiricists never specified a plausible mechanism that could perform the crucial 'leaving out' of irrelevant information highlighted in abstraction-as-subtraction. This, I argue, is the role played by max-pooling units" (2018, p.
5357).Having been convinced by this argument, I shall discuss below the way in which max-pooling units implement the empiricist theory of abstraction.For now, one need only note, to see the connection, that max-pooling units 'pool' together multiple features, ignoring differences, to create a general representation, which allows for accurate classification of new inputs to the degree that they are similar to the old.As a result, many people believe that deep CNNs show us how one might learn a "concept" and its corresponding "object identity" (Goodfellow, Bengio, Courville, 2016, p. 17).Thus the empiricist theory of abstraction is a theory of content constitution that amalgamates similarities into generalities by a method of abstracting-as-subtracting; and deep CNNs are committed to this theory of content constitution in virtue of their architecture. In the Logical Investigations, Husserl argues (anticipating Fodor) that this theory isn't going to work for concepts (or universal meanings), and their corresponding objects (which are properly neither individuals nor amalgamations of them).The reason is simple: concepts require content identity in order to enter into logical relations; what it is to be a concept is to be a meaning susceptible of the "apparatus of identity and quantification" (Quine, 2004, p. 107).This is an assumption which is, at least in practice, if not in theory, admitted by neural network theorists.For a meaning must have a unifying structure, capable of recurring and being manipulated as a unit.Today we would call this 'syntax.'Many of Husserl's points on this topic directly anticipate Fodor's and remain explananda for deep neural networks.For example, Husserl begins his criticism of the empiricist theory of abstraction (in the 2nd Logical Investigation) by noting that each "meaning certainly counts as a unit in our thought and that on occasion we pass evident judgements upon it as a unit" (2001, p. 241).The evidence of such judgments is a reason to think that the unitary, discrete nature of such meanings is not an artifact of linguistic usage (in the manner of Quine) but rather an indication of genuinely discrete, unified meanings "in our thought."His examples include the analogy to grammatical words: "an identical subject for numerous predicates;" and compositionality: "It can be summed together with other meanings and can be counted as a unit" (ibid.).These examples point to the classical virtues of systematicity and compositionality of symbolic representations in a language of thought (LOT).These remain explananda for deep learning.For Husserl, all of these points draw the same moral: concepts cannot be constituted by empiricistic abstraction; and for us (a fortiori) cannot be constituted by deep CNNs.Thus in virtue of the nature of concepts and the relevant explananda, one begins to think that deep CNNs will be inadequate to the phenomena. 
Having set the stage, I'll now present the arguments.Husserl was perhaps the first to note that empiricist theories of abstraction, a fortiori deep CNNs, result in infinite regress and circularity in content constitution.This is later echoed insistently and very seriously by Fodor & Lepore (1992) & Fodor (1998, 2008); it is, moreover, at least implicit in Quine's (corpus-wide) distinction between similarity 'encounters' (expressed by mass nouns) and the apparatus of identity and quantification (reflected by count nouns), which Quine believed to derive from one's linguistic community (as opposed to a representational mind) (Quine, 2004).Although we are not here interested precisely in exegesis, it is nevertheless appropriate to look at the exact wording of Husserl's original argument, concerning the infinite regress between content-identity and similarity groupings: The conception we are criticizing operates with 'circles of similars' (Ähnlichkeitskreisen), but makes too much light of the difficulty that each object belongs to a plurality of 'circles of similars' and that we must be in a position to say what distinguishes these 'circles of similars' among themselves.It is plain that, in default of a previously given Specific Unity, we cannot avoid a regress in infinitum (2001, pp. 243-244). Content constitution via similarity cannot logically lead to a "specific unity" necessary for meaning.If similarities in the input are being computed at the most basic level of content-constitution, then the next level up, according to Husserl's argument, can only be "similarities of… similarities" (Husserl, 2001, p. 244).In the literature on deep CNNs, this is referred to as "'patterns of patterns'" of similarity groupings (Dube, 2021, p. 76).If these similarities of similarities are grouped once more by the Hubel & Wiesel inspired 'complex cells' of the upper layers of a deep CNN, the next level up, again, will consist of similarities of similarities of similarities.And so on, ad infinitum.This is the infinite regress, which results "in default of a previously given Specific Unity" (2001, p. 244).What Husserl means by this, qua content constitution, is that there is no level of generalization, starting with similarities and differences, at which the (content) identity necessary for phenomenologically evident, discretely unified meaning emerges.For similarity is not identity (simile non est idem).Thus if meanings just are specific unities or discrete units of thought with generally applicable content, the empiricist theory of abstraction, a fortiori deep CNNs, cannot result in meaning but can only "approximate meaning" (Marcus & Davis, 2019, p. 132). The regress, then, results from not being able to invoke content identity, which would block the regress.A vector space that encodes for red objects, and a vector space that encodes for triangular objects, may overlap in the representation of some object A, a red triangle.But we can only say, given network resources, that this overlapping is similar to similar overlappings, on which the network has previously been trained.Let's say B is a red square and C is a green triangle.Then the argument is this: 1.A is similar to B in one respect (red-likeness) and C in another respect (triangularity-likeness). 2. The respect in which A is similar to B is defined by its similarity to previously observed (inputted) similarities. 3 Can Deep CNNs Avoid Infinite Regress/Circularity in Content… 3. 
The respect in which A is similar to C is defined by its similarity to previously observed (inputted) similar similarities.4.These previously observed (inputted) similar similarities are in turn defined by previously observed (inputted) similar similarities.5. Premise 4 applies ad infinitum, generating illegal species and genera ('illegal' because assuming what was to be proved (petitio principii)). A vector space, let us assume, may encode the content red-triangle-like, and so represent the corresponding similarity bundle object.But where do we find the identical respect around which the similarity grouping is organized?Husserl & Fodor argue that we will be referred to the similarity species (Art)-red-trianglelike-and the similarity genus (Gattung)-red-like, the 'contents' of which are simply begged in the explanation.That's in part why Fodor & Pylyshn (2015) claim that "connectionists/associationists have no theory of conceptual content" (51).The infinite referral highlighted by Husserl is a failure to respond to the question of how did the network identify the respect in which a group of similar encodings in vector-space is similar without already knowing the identity in respect of which the group of similars is similar.If there is no answer to this question, or if the answer is negative, then neural networks, no matter how deep (or accurate), cannot be said to be recognizing objects (per se).Rather, they may be said to recognize, from our conceptually grounded perspective, approximate objects (half-entities) or similarity bundles. The reasoning behind the above infinite regress is simple and points to a fundamental circularity in the empiricist approach.At each step in the infinite regressfrom phenomenal features (whisker-like content), to species (cat-like content), to genera (felis-like content)-we "come up against kinds" that are not kind-like, i.e., not constituted in terms of continuous similarity spaces (Dube, 2021).At each step in the classificatory hierarchy, therefore, we are already presupposing what we are therefore circularly seeking.We "cannot predicate," as Husserl says, "[even] exact likeness of two things, without [already] stating the [identical] respect in which they are thus alike" (2001, p. 242).There is thus a phenomenologically evident asymmetrical dependency between kind-like or meaning-like representations on kind-simpliciter or meaning-simpliciter contents.The phenomenology is that when we mean kinds we do not mean anything kind-like.To explain our intentionality toward kinds on the basis of kind-like representations is to presuppose what was to be explained, since one cannot even state the similarity content except in terms of the contentidentity.Notice the reverse is not true.One can mean (intend) objects and refer to them without any dependency on the notion of similarity.The explanation of concepts and objects qua identity based on similarity is therefore (also) circular. None other than Fodor in Concepts (1998) treads the same ground here as Husserlian phenomenology: It looks as though a robust notion of content similarity can't but presuppose a correspondingly robust notion of content identity.Notice that this situation is not symmetrical; the notion of content identity doesn't require a prior notion of content similarity (32). 
Fodor is here arguing against a proposal of Gilbert Harman's to the effect that the nature of concepts should be theorized in terms of similarity spaces (a perennial desire of empiricists).If Husserl and Fodor are right, however-and I see no argument to the contrary-concepts can never be explained in terms of any explanatory apparatus which essentially refers content to the output of inductions from similarities (cf.Carey, 2009, p. 28).This is problematic because, as Fodor & Lepore independently argue, "content similarity actually presupposes a solution to (and therefore begs) the question of content identity" (1992, p. 197).Thus the circularity argument looks like this: 1.All concepts exhibit content identity.2. Empiricism and deep CNNs propose that the contents of all concepts are built (via some similarity metric) from more or less similar (real) individuals.3.But similarity presupposes identity (Husserl, 2001;Fodor, 1998).4. Therefore, all content constitution explanations from similarity are circular. This argument appears to be inescapable, but only if concepts corresponding to "object identity" are explananda for empiricistic deep CNNs (Goodfellow et al., 2016, p. 17).There is some reason to think they are, insofar as the literature speaks freely of concepts in relation to object identity (Goodfellow et al., 2016;Kelleher, 2019;Dube, 2021).Insofar as these 'concepts' are the products of the empiricist theory of abstraction instantiated by deep CNNs, they will be subject to the above arguments.That's a problem if deep CNNs (or generally DNNs) are thought to potentially explain and model the tokening of concepts, as has been recently argued (Shea, 2021;Dube, 2021).For (as premise 1 states) there is a "non-negotiable" publicity constraint on concepts on the content-side, which involves content identity (Fodor, 1998, pp. 33-34;Fodor & Pylyshyn, 2015, p. 55;Hopp, 2011).This constraint is violated by deep learning if the above argument is valid.The reason why one might adhere to the constraint is this: one cannot (phenomenologically) intuit a kind as a kind, for example, unless one tokens a concept susceptible of an identity semantics-for kinds are not kind-like.But, since we cannot "come up against kinds" according to the "empiricistic conception" due to reliance on an unexplicated notion of kind-like, it follows that deep CNNs cannot result in the specific unities of conceptual meanings required by a semantics capable of logic (Husserl, 2001 p. 244). The arguments above apply to content-constitution.There is a lesson for the object-side as well.If deep CNNs cannot result in concepts, they cannot, on the object-side, 'objectively' represent objects.This is certainly a paradox, not least because "simple object recognition is deep learning's forte" (Marcus & Davis, 2019, p. 108).But objective representation, on pain of representing "half-entities, inaccessible to identity," requires discretely unified symbolic identities (Quine, 2004, p. 107).Content identity is qualitatively distinct from amalgamated similarity groupings.Consequently if deep CNNs cannot achieve content identity, it follows they cannot objectively represent objects-for "no entity without identity" (Quine, 2004, p. 107).At best, deep CNNs can statistically approximate (without theoretically explaining) conceptual content and the representation of objects through "circles of similars" (Husserl, 2001, p. 243;Marcus & Davis, 2019, p. 132). 
3 Can Deep CNNs Avoid Infinite Regress/Circularity in Content… The above arguments, specifically aimed at the empiricist theory of abstraction, apply to deep CNNs (by transitivity) (Buckner, 2018).Husserl thought these were knock-down arguments against the theory, necessitating "the complete abandonment of the empiricist theory of abstraction" (2001, p. 114).If that evaluation is correct, these arguments would compel us to likewise 'completely abandon' deep CNNs (by transitivity)-at least as regards their scientific (as opposed to their engineering) interpretation.This paper proposes to take these arguments seriously.It is therefore necessary to look more closely (in the next section) at the machine that has given life to a refuted theory (Buckner, 2018).Instead of completely abandoning the theory, however, as Husserl recommended, the section after the next (section III) will consider how the theory may be salvaged as potentially explaining similarity judgements, without, however, being a theory of concept attainment. Deep CNNs & the Empiricist Theory of Abstraction In this section, I describe deep CNNs in some detail to show that they employ the empiricist theory of abstraction and therefore are subject to the above arguments. Deep CNNs were especially designed for image classification tasks.Their "success" in the past ten years has been described as "tremendous" (Goodfellow et al., 2016, p. 321) and "incredible" (Kelleher, 2019, p. 138).They were directly inspired by Hubel & Wiesel's (1962) discovery that neurons in mammalian cortex are specialized to fire in response to proprietary stimuli (e.g.slits, edges, contrast bars). Hubel & Wiesel deemed these neurons 'simple cells,' as opposed to 'complex cells,' which combine input from the simple cells.Fukushima's Neocognitron (1980) applied this idea to neural networks.The key realization was that if a network layer shares a set of weights, called "parameter sharing," then the layer's receptive field will be fixed in a manner similar to Hubel & Wiesel's simple cells (Goodfellow, Bengio, Courville, 2016, p. 326).In practice, this means that if a pixel pattern of the 2-D input is present anywhere in the image, the function defined by that layer of shared weights, provided that it scans the area with the pattern, will record its presence in the output (a visual feature map).Since the same point applies to the pooling of patterns from several layers, the general representations that result at the pooling layer become "translation invariant" (Kelleher, 2019, p. 168).Translation invariant content is detached from the location at which its object was initially recorded.A generalization process thereby occurs.This is why there is great plausibility in the idea that deep CNNs contribute to the (empiricistic) explanation of general representations of the mind. The basic outline of the initial processing of the machine should be clear.There is a 2-D topological input (or, if the input consists of time-series data, 1-D).There is then a layer of shared weights, known as the 'kernel matrix,' which is the convolutional layer.This layer searches the image for proprietary stimuli, like a flashlight scanning a darkened room for a particular stimulus (Kelleher, 2019, p. 
162).And then there's the output of the convolutional layer, which is the visual feature map, recording the presence of various pixel patterns.The reader may note that, since pixel patterns are not themselves representational, the representations generated from them are by definition sub-symbolic (Shea, 2021). The output of the feature map now becomes the input to the final processing layers.There are typically three of these-a nonlinearity layer, a pooling layer, and a dense layer.So after the visual feature map is generated, it becomes the input to a nonlinear activation function layer, which updates the topological carving of the input space, typically with ReLU (Dube, 2021, p. 68).ReLU is a nonlinear function which changes all negative values to zero.This means that neurons below a certain threshold are cut off entirely from the adjacent pooling layer (Sejnowski, 2018, p. 132). The pooling layer, therefore, comes next; and this is the operation on the data structure of the visual feature map that is of greatest philosophical interest, since it serves as the focus of Buckner's identification of deep CNNs with the empiricist theory of abstraction ( 2018).What we wish to argue is that this pooling layer represents a detachable similarity-content (or amalgam), whose denotation is, in the words of Quine, "a half-entit[y], inaccessible to identity" (Quine, 2004, p. 107). Since our arguments rest on this identification, some definitions from deep learning's practitioners and theorists are in order."A pooling function," say Goodfellow et al. (2016), "replaces the output of the net at a certain location with a summary statistic of the nearby outputs" (330).This "summary statistic" is an amalgamation of similarities.For this is the locus in the network of the empiricistic 'abstraction-assubtraction' operation, previously identified by Buckner's argument (2018).What is being subtracted are the dissimilarities; what remains are the similarities.The similarities, therefore, constitute the content of the "summary statistic."And thus, the abstraction (and any content attributable to the machine) is a similarity amalgam.Now, if this is the general representation to be identified with a concept-and this appears to be the general presumption in the field (Goodfellow et al., 2016 passim;Buckner, 2018;Kelleher, 2019;Dube, 2021)-then the content of this concept is a similarity amalgam.But since a concept is not a concept unless it is the vehicle of content-identity (according to logical considerations), and since there is logically no equivalence between similarity and identity (simile non est idem), deep CNNs cannot be said to attain concepts (see section IV for a discussion of the possibility of ignoring the relevant logical considerations). The problem arises immediately from Buckner's identification of deep CNNs employing the empiricist theory of abstraction, which for the same reasons cannot achieve concepts (or object identity).That's a problem because the empiricist theory of abstraction is a theory of concept attainment.This understanding led Husserl to seriously argue that the empiricist theory (a fortiori deep CNNs) must be completely abandoned.CNNs are, in this way, technically refuted as a possible explanation for how the mind achieves "concept[s]… and object identity" (Goodfellow et al., 2016, p. 17). 
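To keep the operations on which this conclusion rests concrete, the following is a minimal NumPy sketch of the three steps just described: a shared-weight kernel convolved across a 2-D input, a ReLU nonlinearity, and a max-pooling layer whose output is the "summary statistic" at issue. It is an illustration of the general mechanism written for this discussion, not any published network; the kernel, image, and values are assumptions of the example.

```python
# Minimal sketch of shared-weight convolution ("parameter sharing"),
# ReLU, and max-pooling. Illustrative only; not a model of any cited network.
import numpy as np

def convolve2d(image, kernel):
    """Valid convolution: slide one shared set of weights over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)            # responses below zero are cut off

def max_pool(feature_map, size=2):
    """Replace each local neighbourhood with a summary statistic (its max)."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size
    fm = feature_map[:h, :w]
    return fm.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A vertical-edge kernel applied to an image containing a vertical bar.
kernel = np.array([[1., -1.],
                   [1., -1.]])
image = np.zeros((6, 6))
image[:, 2] = 1.0                         # bar at column 2
shifted = np.roll(image, 1, axis=1)       # same bar, shifted one pixel right

pooled = max_pool(relu(convolve2d(image, kernel)))
pooled_shifted = max_pool(relu(convolve2d(shifted, kernel)))
print(pooled.max(), pooled_shifted.max())  # same peak response either way
```

The point of the toy example is only that the pooled response is indifferent to where within the pooled region the pattern occurred: the "summary statistic" records that something edge-like was present, not which individual pixels were responsible.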
Our reasoning, however, may still seem too quick for this conclusion. I propose, therefore, to conclude this section by briefly looking at the paper that started 'the deep learning revolution', the famous AlexNet paper (Krizhevsky, Sutskever, Hinton, 2012).

The paper that first describes the similarity semantics of deep learning, the one that started the deep learning 'revolution', is "ImageNet Classification with Deep Convolutional Neural Networks" (2012). This paper describes the performance of a supervised convolutional neural network (CNN) on image classification tasks. A CNN is (again) specifically designed for image recognition tasks. The idea here (again) is that the nodes in the early layers extract phenomenal features from the raw pixel values of the input. These features get aggregated or amalgamated in later processing layers of the network. At each stage, these representations are sub-symbolic, since they are constituted from elements that are not themselves representational. The combined input primitives (e.g. oriented lines) form, in later layers, higher-order and more recognizable features (e.g. fineness-of-fur) which, in the final layers, get induced into readily recognizable complexes (e.g. cats). The generalization process occurs naturally by training multiple filters (layers of nodes) to respond specifically to certain pixel values and features, through parameter sharing or "tied weights" (Goodfellow et al., 2016, p. 328). Generally, the convolving filters (or kernel matrices) are several orders of magnitude smaller than the input space, encouraging generalization (Goodfellow et al., 2016, p. 326). By sequentially convolving the filters across the entire input space, the features will be detected if they are present.

The outputs of the filters after the pooling layer can be combined in several ways. The ImageNet CNN used what are called 'dense' layers toward the output. The dense layers are typically the final layers of the network. The layer is 'dense' because, in contrast with the rest of the network, which bears the property of 'sparse connectivity', meaning not all neurons are connected with all other neurons in the preceding layer, each of the nodes in the dense layer relates to all of the outputs of the preceding pooling (or dense) layer. In this way, the dense layers are more like regular ANNs.

The important philosophical point is that the representational semantics of AlexNet is explicitly a similarity semantics. As a result the above infinite regress/circularity arguments apply. This is brought out in Section 6 of the paper, where the authors consider the dense layers of the network. These final layers have 4096 neurons each. The authors state that one way "to probe the network's visual knowledge" is the following:

[C]onsider the feature activations induced by an image at the last, 4096-dimensional hidden layer. If two images produce feature activation vectors with a small Euclidean separation, we can say that the higher levels of the neural network consider them to be similar. [....] Computing similarity by using Euclidean distance between two 4096-dimensional, real-valued vectors is inefficient, but it could be made efficient by training an auto-encoder to compress these vectors to short binary codes (Krizhevsky, Sutskever, Hinton, 2012, p. 8, italics added).
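The similarity computation the authors describe can be written down in a few lines. The following sketch is illustrative only: the vectors stand in for the 4096-dimensional feature activations the quotation mentions, and the perturbation and seeds are assumptions introduced here, not values taken from the paper.

```python
# Illustrative sketch of "computing similarity by using Euclidean distance"
# between final-layer feature activations. The random vectors are stand-ins
# for activations induced by images; nothing here is taken from AlexNet.
import numpy as np

rng = np.random.default_rng(0)

def features(image_seed, dim=4096):
    """Stand-in for the 4096-dimensional activation vector an image induces."""
    return np.random.default_rng(image_seed).normal(size=dim)

def euclidean_separation(a, b):
    """Smaller separation = 'considered similar' on this reading of the net."""
    return np.linalg.norm(a - b)

v1 = features(1)
v2 = features(1) + 0.05 * rng.normal(size=4096)   # a slightly perturbed 'image'
v3 = features(2)                                   # an unrelated 'image'

print(euclidean_separation(v1, v2))  # small separation: "similar"
print(euclidean_separation(v1, v3))  # large separation: "dissimilar"
```

Note that nothing in this computation individuates an object; it only orders pairs of activation vectors by how close they happen to lie, which is exactly the 'circle of similars' the argument above targets.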
Notice that it is the (Euclidean) distance relations which are defining the content for the modelers in terms of "similarity."The content is defined over a continuous region of representational space; it is not discrete (i.e.symbolic).Proximity in representational space defines a geometrical region of similarities, with dissimilars spreading outward.Content is, therefore, understood as a similarity-amalgaminduced similarities deriving from similarities in pixel patterns from the input.We can also call these induced CNN patterns "'patterns of patterns'" or similarities of similarities (Dube, 2021, p. 76).One can begin, at this point, to catch a glimpse of the infinite regress which Husserl calls "the worst of infinite regresses"-a charge originally leveled against the empiricist theory of abstraction simpliciter, one which, however, carries over to deep CNNs due to their employment of the empiricist theory of abstraction.For the degree to which any content-similarity, corresponding to an activation pattern, is similar to any other is the degree to which its circle of (induced) similar image patterns is close to the other circle of (induced) similar image patterns. The appeal to auto-encoders magnifies the issue.Motivation for the suggestionto use auto-encoders for greater efficiency-explicitly includes creating a tighter correspondence between Euclidean similarity and "semantically similar" representations (review the quote above).Auto-encoders themselves operate by compression (traditionally for feature learning tasks) to give a useful model that "resembles"by application of abstraction-as-subtraction-the input (Goodfellow et al., 2016, p. 493).The result would be, in the most efficient scenario, representation-building from input similarity through autoencoder similarity to (sub-symbolic) semantic similarity, defined over continuous distance relations (Churchland, 2012, pp. 38-45).The recognition the machine can be said to perform will be a recognition via similarity amalgams of "half-entities, inaccessible to identity" (Quine, 2004, p. 107).Whatever content the machine can be said to build from its input will be on the basis of "computing similarity," not identity (Krizhevsky, Sutskever, Hinton, 2012, p. 8). A Hybrid LOT Theory: Quine's Apparatus + Yoshimi's Dynamics qua Husserlian Phenomenology In this section I try to find a half-way point between the statistical learning of deep CNNs and the logical conditions on concepts and object identity that are essential to a theory of concepts that avoids the infinite regress/circularity of content constitution. The previous section argued that the "summary statistic" of the max-pooling units of a CNN is a similarity content (of similarities) (Goodfellow et al., 2016, p. 330).The statistical summation of the max-pooling units is a similarity amalgam, to which the arguments of section I apply.It follows that the representations of the network are approximate similarity contents that, if they are to explain concepts and their objects qua identity, result in a disruption of logic (Quine, 2004, p. 107).The result is not peculiar to deep CNNs-it applies just as much to any theory of concept attainment deriving from, and including those of, the British empiricists.Now one might follow Husserl's startling recommendation that "the empiricist theory of abstraction must be completely abandoned" (2001, p. 
114).If so, one should likewise completely abandon the currently popular idea that deep CNNs might illuminate the nature of conceptual tokening.But due to the tremendous success of deep CNNs-they regularly outperform humans on a variety of tasks-we might think this remedy too strong.Since there is one philosopher, Jeffrey Yoshimi, who would particularly recommend this in relation to Husserlian phenomenology, and since we have been guided by the (logical) phenomenology 1 3 Can Deep CNNs Avoid Infinite Regress/Circularity in Content… throughout our discussion, we can perhaps begin developing a hybrid model by considering Yoshimi's approach. Yoshimi has spent the last two decades valiantly attempting to unite the phenomenological descriptions of Husserl with the empiricism of neural networks (e.g.Yoshimi, 2011).As should be clear from our arguments, however, this project will eventually run into the infinite regress and circularity of content constitution.But we might, nevertheless, start with Yoshimi to more clearly show how to unite the two main approaches to cognitive science (Marcus, 2001;Cain, 2016).This should have the corollary of providing a beginning for a complete explanatory (causal) theory for the phenomenology of logical experiences, one that does not "disrupt logic" by reducing discrete symbolic units of thought to continuous gradients (Quine, 2004, p. 107). The centerpiece of Yoshimi's Husserlian Phenomenology: A Unifying Interpretation (2016) consists of two functions that output a continuous gradient (x ∈ [0, 1]).One is an expectation function, the other an update function (17,29).These functions are intended to generate a similarity semantics of sub-symbolic contents (43).To illustrate how these functions work in experience, Yoshimi chooses the Humean example (from Kant's Critique of Pure Reason) of moving around a house with one's body.Through statistical learning, one forms associations (expectations) about house-like aspects of experience ( 18).The more these associations are confirmed by one's bodily movement, visual experience, and background knowledge, the higher the returned expectation gradient.An update rule supplements the expectation rule by producing "incremental changes in background knowledge" (31). Provisionally accepting Yoshimi's account as a starting-point, we can hypothesize that the machinery of deep CNNs might supply the values to the visual experience and background knowledge variables.Degrees, therefore, of sub-symbolic visual fulfilment-an expectation being satisfied in accordance with past associations-can thereby be causally explained.Yet we still do not have what would stop circularity and regress "even in the sensory realm," as Husserl says, if we define content identity as "a limiting case of 'alikeness" in accordance with Yoshimi's continuous outputs (2001, p. 242).To stop the regress and prevent circularity, we would need to "come up against kinds" (Husserl, 2001, p. 
244).We are, in other words, in search of those specific unities (e.g.HOUSE), for which there are unities of fulfilment (and frustration).Yoshimi has the empiricistic cart before the phenomenological horse; he has explained degrees of fulfilment before he has given any account of what is being fulfilled.What's being fulfilled, according to Husserlian phenomenology, are the meanings corresponding to the objects of logical experience: discrete units of thought corresponding to identical kinds.These kinds, as the objects of an act of knowledge, must be the objects of an identity semantics, as opposed to a similarity semantics. Without casting Yoshimi and deep CNNs entirely aside, it is necessary to propose a two-system view, given the regress and circularity above, and the remarkable coincidence between the Husserlian and Fodorian projects.This is distinct from single-system views (Marcus, 2001, Smolensky & Legendre, 2006) and coincides with Weiskopf's recent proposal concerning conceptual identity semantics being qualitatively distinct from similarity semantics (Weiskopf, 2015, p. 239).Accordingly, there must be a system of statistical learning based on similarity metrics, which may, however, be jettisoned as inessential when considering a system of knowledge in the abstract; and a distinct system based on conceptual units and computational transformations supporting a quantificational syntax and identity semantics (Fodor, 2008, pp. 159-163).The two may causally interact but they must be seen, due to regress/circularity, as in principle distinct. We think the two systems may interact via Yoshimi's update rule (or "learning rule λ" see Yoshimi, 2016, p. 31).As the asymptotic approximation converges (its cat-like, dog-like, house-like representations) toward a conceptual kind (CAT, DOG, HOUSE etc.), the representation in the neural network is drawn toward 'a zone of stability.'Such 'zones' can be thought of as attractors in state space.An attractor can be thought of as a set "to which all neighboring trajectories converge" (Strogatz, 2015, p. 331).In this way, there is a brute-causal transition from the one system to the activation of the other. To bring all this together in an example, consider the causal process of concept triggering as being drawn to 'a zone of stability.'We can base this on Yoshimi's notion of a "stable cognizer" while yet departing from his conception by requiring the apparatus of identity and quantification and thus a symbolic system (2016, p. 18).Once the mind comes close enough through statistical learning, which might comprise a single occasion (consistent with the extremely high rate at which word-learning occurs), Fodorian locking to the property (kind, type, universal etc.) occurs.The asymptotic approximation to a kind at this point activates a symbolic unit of the knowledge system.The knowledge system is based on computational transformations supporting a quantificational syntax and identity semantics.This again must be assumed for the reason that one needs this apparatus to represent kinds as opposed to half-entities corresponding to similarity amalgams; and to stop the infinite regress we need to "come up against kinds" and their corresponding discretely unified meanings (Husserl 2001, p. 
244). This can only happen, so far as I know, through a language of thought (Fodor, 1975, 2008). What is statistically learned, therefore, in deep learning, and therefore potentially in our own minds, are representations of experience via "similarity metrics" distinct from a language of thought (Fodor, 2008, p. 158). These contents will be sub-symbolic and serve to trigger the symbolic and discrete units of a language of thought.

Our sketch, as a solution to the arguments, is consistent with the empiricist theory of abstraction and the way in which deep CNNs might learn the representation triangle-like. What cannot be learned, however, is:

[T]he disposition to grasp such and such a concept (i.e. lock to such and such a property) in consequence of having learned such and such a [statistical] stereotype. Experience with things that are asymptotic approximations of The Triangle in Heaven causes locking to triangularity (Fodor, 2008, p. 162).

Locking to triangularity means activation of the identical content conveyed by the concept TRIANGLE. Activation of the concept means the ability to represent a property as holding of all individuals in some domain. It is the ability to represent kinds as such (as opposed to kind-like objects). Such representations are therefore conceptually discrete and symbolically manipulable. We therefore feel forced to combine the sort of learning that deep CNNs are capable of, as an aspect of the similarity semantics of Yoshimian dynamics, with the Fodorian theory of concepts (concept triggering and locking); these together form a hybrid solution to our properly Husserlian problem of how precisely to stop the infinite regress and circularity in content constitution by coming up against kinds. This hybrid two-system solution, a statistical learning system and a conceptual knowledge system, departs from the established empiricism of deep CNNs precisely to the extent that it must attach itself to a "language of thought," which can support "meanings" that are "precise" in the above-required content-identity sense (Pinker, 2007, p. 151).

Concluding Discussion: Options for Theorists & Modellers

In this concluding discussion, I want to address the various aims and orientations of modelers and theorists who might want to avoid my solution. When first presented with the regress/circularity arguments (in section I) there are a few logical options:

1. Ignore the infinite regress/circularity arguments and the logical conditions on content.
2. Accept the infinite regress/circularity arguments but reject the logical conditions on content.
3. Accept the infinite regress/circularity arguments and accept the logical conditions on content.

De facto, the field of neural network theorizing is currently in position 1, because I don't believe the arguments are widely known. They apply to feature-extraction theories, which is what deep CNNs employ. (That is partly why I have focused on deep CNNs.) Choosing 2 could be interpreted as understanding the regress/circularity to be an argument against the existence of logically objective content, given a belief in the default truth of Lockean pooling procedures. In other words, we could say that the success of deep CNNs is proof that Husserlian/Quinean/Fodorian strictures on content constitution are too strong. We can then justify ignoring any and all conditions on content (e.g.
the publicity constraint), because we will have demonstrated that excellent recognition is compatible with content only ever being approximated, never identically instantiated (or conceptually conceived stricto sensu).We can then work outward toward approximating the logical phenomena of compositionality and systematicity with deep learning systems, as Yoshua Bengio proposes. There are a couple of reasons, however, why I think we should accept 3. The first reason is that, although the high dimensional representational space of a deep network is opaque to us (e.g.it's not clear what the nodes in the activation patterns of distributed representations are representing) it is nevertheless assumed that the high dimensional spaces are representational spaces.If deep learning systems are representation learning systems, and concepts in the logical sense are a kind of representation, then this idea of concepts falls within the explanatory domain of deep learning systems.For example, if it is self-evident that concepts enter into logical relations (as our everyday experience attests) and this entails the properties of compositionality and systematicity (as Husserl insists) these become explananda for any theory of concepts.Yoshua Bengio admits compositionality is an explanandum for deep learning (Bengio, 2019).And Geoffrey Hinton has famously said that deep learning will be able to do everything (Hao, 2020).I'm not aware of anyone denying the phenomena.But one can safely ignore the phenomena by circumscribing the idea of a conceptual representation as a feature-extraction amalgamation.And if one has success with that idea (as with deep CNNs) that does justify ignoring the phenomena to some degree; but I think only for a time. The second reason is this: the basis of recognition of deep learning is not of individuals as members of a conceptual type.But it seems clear that recognition of individuals as members of a conceptual type is an essential aspect of conceptual meaning.A highly trained, superhumanly recognizing CNN is paradoxically in the position of Quine's pre-individuative child in terms of the possibility of objective representation.Marcus & Davis actually hit on this idea in their polemic against deep learning: [Y]ou can train a deep learning system to recognize pictures of Derek Jeter, say, with high accuracy.But that's because the system thinks of 'pictures of Derek Jeter' as a category of similar pictures, not because it has any idea of Derek Jeter as an athlete or an individual human being (2019, p. 142 italics added). What I think Marcus & Davis are getting at is the Quinean point, that a deep CNN is a pre-individuative (and therefore pre-conceptual) machine.Note the reference to similarity semantics.A deep CNN is trapped in the Quinean pre-individuative stage to the degree that its knowledge (its competence) is based on similarity semantics (Firestone, 2020).It cannot escape similarity semantics due to the regress/circularity in content constitution. 
Assume for the moment that this line of argument is essentially correct-that statistical learning must be supplemented by a LOT to avoid the regress/circularity in content constitution.Then, provided the aims are different, the nature of deep CNNs, and what they reveal about statistical learning, can potentially be incorporated into a general theory of mind-perhaps explaining prototypicality judgements based in similarity.I have nothing against this.All I am saying is that the logical nature of concepts and object identity are correlative explananda, along with their entailments (e.g.compositionality), for representation theory.Since deep learning is a highly successful branch of representation theory, it becomes a question whether these phenomena are proper aims of the theory behind deep CNNs, which employ the empiricist theory of abstraction.And I am saying that if deep CNNs really are the mechanical realization of the empiricist theory of abstraction, then these phenomena will be missed due to the infinite regress/circularity, and are therefore not proper aims of the theory behind deep CNNs. 3 Can Deep CNNs Avoid Infinite Regress/Circularity in Content… It's worth, therefore, discussing what the aims of modelers and theorists are in deep learning.It may be that these aims are very modest.Perhaps concepts and object identity and the associated logical phenomenology characteristic of thought (systematicity, compositionality) are not explananda for them.Perhaps they use this language loosely, without intending to mean what these words mean in philosophical discussion (cf.Machery, 2009).If so, they can ignore my arguments for supplementation in terms of the familiar apparatus of a language of thought.I wish all such modelers and theorists the best of luck-I am not against loose-talk per se. If, however, these words are to be taken seriously as theoretically subsuming all conceptual phenomena, then I believe this hinders progress toward a general theory of mind.And that's only because the use of the terms 'concept' and 'object identity' in discussions of deep learning systems obscures the representational phenomena at issue that are relevant to a general theory of mind/intelligence (e.g.systematicity, compositionality, etc.); and at least some deep learning modelers (e.g.Bengio) admit these are real explananda that must be faced eventually. One distinction that might clarify the theoretical gulf between the system's aims and the relevant explananda is the distinction between symbolic and sub-symbolic content.Shea (2021) has recently argued that deep learning systems are probably a good model for the transition from sub-symbolic contents (textures, colors etc.) to the tokening of concepts (e.g.CAT).This registers the direction of discussion since Fodor's Concepts (1998): "If, looking at Greycat, I take him to be a cat, then too I apply the concept CAT to Greycat" (24).I contrast Shea and Fodor here to bring out how the current discussion assumes that concepts are feature-extraction amalgamations based on sub-symbolic contents.The infinite regress/circularity arguments apply to feature-extraction amalgamations.But again this is only a problem if symbolic content (a logical conception of CAT) capable of traditional computational operations is a goal; it may not be. 
Part of my argument has been that a logical concept of, say, CAT should be a goal also for object-side reasons.A sub-symbolic system represents quasi-objects based on segmentation of contours or patterns in the input, the sensory presence of bodies, memory and recognition of the sensory presence of bodies, etc.The subjectivity of similarity-based objects arises for objects that are essentially relative (not just accidentally) to the past history of the individual organism(/machine) (see Quine, 2004, p. 290).Deep learning proposes to overcome this subjectivity by brute force, with extremely large data-sets.Nevertheless, strict object identity will always require quantification and discrete symbolic units, if the infinite regress/circularity arguments hold.Lacking these representational resources, a representational being/machine can only be said to represent a sort of half-entity corresponding to a similarity amalgam.This is not a metaphor-it's a technical term (to be taken quite literally), meaning a subjective representation, incapable of logic and reasoning, because based on the amalgamated history of semantic segmentation or representation of similarity 'clumps' in the environment (Burge, 2010, p. 236;cf. Millikan, 2017).It is sub-symbolic (Smolensky, 1991).This is important for our argument since Quinean subjective representations correspond to the representations of deep CNNs, with the difference that deep CNNs have far more data at their disposal.I merely extend Quine's argument and group deep CNNs with his notion of animals and pre-linguistic infants despite deep learning's recognitional prowess far exceeding the limits of all known organic creatures (cf.Cangelosi & Schlesinger, 2015).But I further argue that there is no bridge from this kind of representation to the kind 'required' for concepts and object identities due to the arguments.I conclude that the only solution given the arguments is to suppose a language of thought (see Sect.III).If one is not interested in the phenomena (the logical phenomenology) related to concepts and object identities in the strict sense, one can ignore the arguments (per the above option). We can summarize the broad outlines of this concluding discussion with the following: 1. Deep learning systems, in particular deep CNNs, employ the empiricist theory of abstraction (Buckner, 2018).2. The empiricist theory of abstraction generates content on the basis of induction by noting similarities and differences among phenomenal features of objects.3.These similarities and differences are defined, in deep learning, over pixels, sound images etc. (sub-symbolic contents).4. Deep learning systems, in particular deep CNNs, therefore induct sub-symbolic similarity amalgams.5. Sub-symbolic similarity amalgams are essentially contrasted with symbolic contents susceptible of an apparatus of identity and quantification.6.Therefore, insofar as identity semantics (concepts and their objects qua identity) are relevant explananda for deep learning (as is admitted by all sides), a supplementary mechanism involving the apparatus of identity and quantification is required (Marcus & Davis, 2019). 
Premise 5 is given support by the infinite regress/circularity arguments (part I).The need for the apparatus of identity and quantification is supported by Fodor's much discussed 'publicity constraint' on conceptual content (Prinz, 2002;Edwards, 2009;Schneider, 2011).As to Premise 6, it should be noted that this is textually true-"concept… and object identity" are explicitly considered explananda of deep learning (Goodfellow et al., 2016, p. 17;Kelleher, 2019).I do not think that these authors have reflected that these terms are connected with systematicity, compositionality, and other recognized cognitive explananda.I argued in part I that they are.And since deep learning is tied to the empiricist theory of abstraction (Buckner, 2018), which can neither result in concept nor object identity due to the resultant similarity semantics (Fodor, 1998, Husserl, 2001), deep learning, as applied to the mind, will, again, need to be supplemented in terms of the computational-linguistic apparatus normally associated with a language of thought. The duty of the philosopher, I think, is to clear a path for the scientist to discover a mechanism.One can ignore the phenomena and the arguments, but if one takes them seriously, one will necessarily seek the discovery of mechanisms that support "transformations needed for quantification," potentially yielding an identity semantics capable of representing kinds (types, properties, etc.) (Hinzen, 2006, p. 177).Such mechanisms have been sought with some success (O' Reilly et al., 2014); and 1 3 Can Deep CNNs Avoid Infinite Regress/Circularity in Content… the language of thought may even have a readily interpretable neuroscientific realization (Gallistel, 2018).Our solution to the circularity and infinite regress posed by the similarity semantics of deep learning is to suppose that it must be supplemented by a separate system defined by specific, unified meanings, detachable in principle from statistically induced degrees of associative fulfilment (Wieskopf, 2015).A place for deep CNNs is included in our solution, to the degree that such functions for similarity learning are a real aspect of statistical learning in experience, as well as the informational basis of similarity judgments.In future work, I hope to develop this unified picture of the phenomenology of logical experiences-the original sense of (Husserlian) phenomenology-with the language of thought and deep CNNs as a full explanatory theory consistent with the above arguments.
Ablation of peri-insult generated granule cells after epilepsy onset halts disease progression

Aberrant integration of newborn hippocampal granule cells is hypothesized to contribute to the development of temporal lobe epilepsy. To test this hypothesis, we used a diphtheria toxin receptor expression system to selectively ablate these cells from the epileptic mouse brain. Epileptogenesis was initiated using the pilocarpine status epilepticus model in male and female mice. Continuous EEG monitoring was begun 2–3 months after pilocarpine treatment. Four weeks into the EEG recording period, at a time when spontaneous seizures were frequent, mice were treated with diphtheria toxin to ablate peri-insult generated newborn granule cells, which were born in the weeks just before and after pilocarpine treatment. EEG monitoring continued for another month after cell ablation. Ablation halted epilepsy progression relative to untreated epileptic mice, the latter showing a significant and dramatic 300% increase in seizure frequency. This increase was prevented in treated mice. Ablation did not, however, cause an immediate reduction in seizures, suggesting that peri-insult generated cells mediate epileptogenesis, but that seizures per se are initiated elsewhere in the circuit. These findings demonstrate that targeted ablation of newborn granule cells can produce a striking improvement in disease course, and that the treatment can be effective when applied months after disease onset.

A closer analysis of seizure frequency following cell ablation revealed that the positive effects required a few weeks to develop (Fig. 2c; p < 0.001, two-way RM ANOVA). Specifically, differences in seizure frequency were not evident until two weeks after DT treatment (during the seventh week of recording). At this time, SE-control mice exhibited 2.6 ± 0.5 seizures per day, while SE-ablation mice had only 0.9 ± 0.6 seizures per day (p = 0.028, Holm-Sidak MCP). During the final week of recording (week nine) SE-control mice experienced 5.1 ± 0.5 seizures per day and SE-ablation mice exhibited 1.8 ± 0.6 seizures per day (p < 0.001, Holm-Sidak MCP).

Figure 2. Cell ablation treatment blocks epilepsy progression. Pre-treatment and post-treatment seizure frequencies (a,b), severities (d,e), and durations (g,h) are shown for SE-control mice (left, black) and SE-ablation mice (middle, red). Each line shows the means ± SEM for one animal. (c) Average number of seizure events during each week of recording for SE-control (black) and SE-ablation (red) groups (DT was given during week 5, red arrow). (f) Average behavioral seizure scores and (i) durations. (j) Representative post-treatment electrographic seizures from SE-control (top) and SE-ablation (bottom) mice. *p < 0.05, **p < 0.01, ***p < 0.001, scale bars: 300 μV and 2 seconds.

Effect of cell ablation on mossy fiber sprouting. Inner molecular layer mossy fiber sprouting is a prominent feature in many animal models of epilepsy, including the pilocarpine model used here (Fig. 4; epileptic vs. healthy mice, p < 0.001, two-way RM ANOVA). Newborn granule cells contribute to sprouting 18 ; therefore we queried whether removing these cells would reduce sprouting. Ablation did not significantly decrease sprouting (p = 0.724), in accord with past studies 10,11 . These findings are consistent with recent data indicating that older cells can contribute to sprouting 19 , perhaps compensating for the loss of younger cells 20 .
Effect of cell ablation on neurogenesis rates. To confirm the efficacy of ablation treatment on neurogenesis, brain sections were immunostained with the immature granule cell marker doublecortin. Ablation treatment significantly reduced the number of doublecortin-expressing cells in healthy-ablation mice relative to all other groups ( Fig. 5; p < 0.001, two-way ANOVA on ranked data). Interestingly, at the time point examined (4-5 months after status epilepticus), the number of doublecortin-expressing cells per dentate was also reduced in SE-control mice relative to healthy-control mice. Our findings are consistent with numerous studies showing increased neurogenesis in the weeks after status, but impaired neurogenesis in chronically epileptic animals [21][22][23][24][25] . Although doublecortin-expressing cells were numerically fewer in SE-ablation mice relative to SE-control mice, the difference was not statistically significant (p = 0.174). This result almost certainly reflects the chronic timepoint at which the animals were collected, when all SE groups show reduced neurogenesis. Nonetheless, the data clearly indicate that the ablation strategy kills newborn cells. Consistent with this conclusion, in our previous work using the identical genetic strategy, SE-ablation mice had fewer doublecortin-expressing cells than SE-controls three months after status 11 . Effect of cell ablation on astrocytes and microglia. The present findings support the conclusion that ablating peri-insult generated newborn granule cells prevents epilepsy progression. Cell ablation, however, has the potential to produce inflammatory changes that might impact seizure occurrence. To assess this possibility, sections were immunostained with the astroglial marker GFAP, and the microglial marker Iba1. Astrocyte soma area was significantly increased in epileptic mice ( Fig. 6 , consistent with prior studies demonstrating that epilepsy is associated with brain inflammation [26][27][28] . No significant differences were found among epileptic mice. Microglial soma area was also increased in epileptic mice ( Fig. 6; Group F vs. Groups B [p = 0.006, one-way ANOVA] and C [p = 0.034]; Group D vs. group B [p = 0.032]). Notably, microglial soma area in epileptic DT-treated mice, in which ablation occurred, was statistically identical to epileptic DT-treated mice without ablation (Group F vs. Group D, p = 0.956, one-way ANOVA), indicating that non-specific DT toxicity cannot account for the seizure-reducing effects of cell ablation. Discussion For the present study, we used a targeted cell ablation strategy to demonstrate that selective removal of peri-insult generated newborn granule cells from the epileptic brain prevents further epilepsy progression. This finding builds upon prior work demonstrating that prophylactic ablation of newborn granule cells reduces seizure incidence once epilepsy develops, and represents the next key step in determining whether targeted cell ablation has potential as a novel therapy for epilepsy. The present findings predict that such a treatment would be beneficial in patients with new onset temporal lobe epilepsy. Caveats and limitations. Several studies, using distinct strategies, have now demonstrated that reducing neurogenesis or ablating newborn cells mitigates epilepsy severity [8][9][10][11] (but see also ref. 29 ). Together, these findings provide compelling support for the hypothesis that adult neurogenesis contributes to temporal lobe epilepsy. 
Nonetheless, several caveats should be kept in mind. Firstly, the extent to which newborn cells are critical for epilepsy remains to be determined. In all studies conducted to date, epilepsy still developed or persisted, albeit with reduced severity [8][9][10][11] . This may simply reflect the limitations of the approaches used. It has not been possible with current techniques to ablate 100% of newborn cells. Indeed, while our previous work with the identical mouse line and similar treatment strategy confirmed that ablation was effective, we still found only a 50% reduction in doublecortin-expressing newborn granule cells three months after ablation 11 . Remaining cells, therefore, may be sufficient to promote epileptogenesis. Alternatively, newborn cells may modulate epilepsy severity without being required for epilepsy onset. The status epilepticus models used for these studies cause extensive extrahippocampal damage 30 . Seizures might originate from these damaged extrahippocampal regions. In light of this fact, it is somewhat surprising that newborn granule cell ablation works as well as it does. It will be important in future studies to test more focal temporal lobe epilepsy models, where newborn granule cells may play a larger role.

A second caveat of the current approach is that surviving progenitor cells may reestablish dentate neurogenesis, potentially limiting the long-term utility of ablation. Because of this limitation, animals were not examined beyond the one month post-ablation period. Whether the positive effects observed here persist beyond this initial period will need to await the development of better techniques for controlling neurogenesis. We do note, however, that Cho and colleagues 10 demonstrated that ablating granule cell progenitors prior to status epilepticus reduced seizure frequency up to 48 weeks later.

A final caveat of the approach is that indirect effects of ablation cannot be entirely excluded. DTr expression was induced in a small number of reactive astrocytes in epileptic mice (Fig. 1), and anti-neurogenic and cell ablation approaches have the potential to alter the hippocampal circuit in unexpected ways (see ref. 31 ). It is possible that such off-target effects contribute to the observed reduction in epilepsy severity. The development of new techniques will be required to address these issues. In the meantime, newborn granule cells remain promising candidates for the development of a mechanistic understanding of temporal lobe epileptogenesis.

Role of newborn granule cells in epileptogenesis. Epilepsy encompasses multiple processes which may or may not reflect distinct underlying mechanisms. Epilepsy begins with epileptogenesis, which includes 1) the transition from a brain that does not support spontaneous seizures to a brain that does, and 2) increases in disease severity that can occur after clinical disease onset, also known as epilepsy progression 32 . Once the epileptic state is established, neurons and/or neuronal circuits in the brain occasionally initiate seizures. Whether the neurons responsible for epileptogenesis are the same neurons that initiate seizures is not known.
While previous studies have consistently found that reducing the number of newborn granule cells prior to disease onset mitigates the severity of the epilepsy that subsequently develops, these studies provide little insight into whether the newborn cells play a role in epilepsy progression, seizure initiation, or both. Either scenario could account for the observed results. By contrast, the present findings argue for a role for newborn cells in epilepsy progression, but not seizure initiation. Specifically, ablation of peri-insult generated newborn cells after the onset of epilepsy produced no acute change in seizure frequency. A significant difference in seizure frequency was not evident until two weeks after ablation, and in this case only because seizure frequency increased in untreated animals; not because ablation-treated animals experienced reductions (see Fig. 2c). If newborn granule cells were responsible for seizure initiation, then eliminating these cells would be predicted to produce an immediate reduction in seizure frequency, on par with the reduction observed in our prior work. Rather, the findings suggest that these newborn cells play a role in epilepsy progression, and that seizure initiation occurs elsewhere in the circuit. If correct, such a model would be reminiscent of memory consolidation, in which hippocampal function is only required during early phases of memory retention 33,34 . Significance of the findings for epilepsy therapy. The present findings suggest it may be possible to overcome one of the major limitations of epilepsy therapy development. Specifically, the vast majority of interventions that have shown disease modifying effects in animal models of epilepsy began treatment either before, or immediately after, an epileptogenic insult. This greatly complicates translating these approaches to clinical populations because (1) many patients develop epilepsy in the absence of an identifiable initial insult and (2) it has so far not been possible to predict which patients will develop epilepsy even when the causal insult (e.g. brain trauma) is known. Deciding who to treat, therefore, is extremely challenging; particularly if therapies have significant side effects. The efficacy of the approach used here provides hope that the window of opportunity for disease modifying treatments in epilepsy extends beyond the first clinical seizure, when patients could be easily identified. One caveat to our approach is that we targeted cells born before and after the insult; the peri-insult generated population of granule cells. The rationale for targeting granule cells born before the insult is based on numerous studies demonstrating that these neurons contribute to hippocampal rewiring (for review see ref. 24,35 ). Similarly, granule cells born after the insult also integrate abnormally. If the goal is to remove abnormal granule cells from the dentate, therefore, both populations must be targeted. A challenge moving forward is developing strategies that could achieve this in patient populations. While the transgenic strategy used here is not translatable, it should be possible to eliminate the same population of cells in patients by taking advantage of proteins expressed exclusively in these cells, like doublecortin, for targeting. The window of time during which such an approach would be viable is still narrow, as developing neurons down regulate the known selective markers as they mature. 
A promising strategy for chronic epilepsy would be to identify differences in gene expression between normal and abnormal granule cells. Although an optimal target has yet to emerge, established differences in morphology and physiology indicate that these cells likely exhibit differences in growth-associated proteins, synaptic components and ion channels 5,14,15 . With additional research, it may be possible to selectively target abnormal cells for modulation or elimination as a novel therapy for epilepsy. Methods Animals. All procedures complied with the National Institutes of Health's and institutional guidelines for the care and use of animals and have been approved by CCHMC's Institutional Animal Care and Use Committee (IACUC). Mice used in this study were derived from NestinCreER T2 mice 36 , Gt(ROSA)26Sor tm1(HBEGF)Awai /J mice 37 (referred to as DTr mice), and GFP reporter mice 38,39 . NestinCreER T2 mice were acquired from Dr. Lionel Chow (CCHMC) and were maintained on an FVB/NJ background. Diphtheria toxin-receptor (DTr) mice and GFP mice were maintained on a C57BL/6 background. All mice in the study were the F1 progeny of hemizygous NestinCreER T2 mice (FVB/NJ) crossed to DTr +/− ; GFP +/± expressing mice (C57BL/6). All mice, therefore, were a 50:50 mix of FVB/NJ and C57BL/6 backgrounds. This cross was used to generate animals of multiple genotypes, as outlined in Table 1. Mice were housed in standard cages with regular bedding within the CCHMC clean barrier vivarium facility and were provided with regular chow and water ad libitum on a 14/10 day/night cycle. Mice were weaned between P21-P23 and same-sex littermates were housed together with 2-4 mice per cage. Experimental Design. A total of 146 transgenic mice were generated for potential use in the present study. One hundred twenty-three mice were designated for pilocarpine treatment (SE) and 23 for saline treatment (healthy). Eighty mice successfully entered and survived status epileptics (65%). From this group of 80 mice, 39 were assigned to EEG monitoring. Groups were chosen at random immediately following pilocarpine treatment by the researcher (BEH). EEG recording platforms are limited such that no more than 16 mice can be monitored at any given time, and space is shared between multiple studies in multiple labs, so not all mice could be used. An additional 22 SE mice were excluded from the final group due to surgical/electrode failures (n = 11), early mortality (n = 9), or an absence of spontaneous seizures during baseline recording (n = 2). Of the 23 healthy control mice, three were excluded due to surgical/electrode failures or early mortality. The final study included 17 pilocarpine-treated mice, and 20 healthy controls. Mice were randomly assorted into the four principal groups used in this study: (1) Healthy-control (n = 10), (2) Healthy-ablation (n = 10), (3) SE-control (n = 9) and (4) SE-ablation (n = 8). Using data generated from prior studies, we calculated that we should be able to detect a 50% change in seizure frequency with 95% confidence and a power > 0.8 with group sizes of 8 mice (0.461 seizures/ day, SD 0.406, n = 10) in all epileptic groups. Blind analyses were conducted on data collected from final group constituents by concealing and coding samples prior to analysis. Tamoxifen and pilocarpine. All mice received eight, once-weekly subcutaneous injections of tamoxifen (Sigma, 250 mg/kg/dose at a concentration of 20 mg/mL in corn oil) beginning the day they were weaned (P21-P23). 
Mice underwent pilocarpine-induced SE at eight weeks of age, such that five tamoxifen injections occurred before status, and three after. SE was induced in an empty, bedding-free cage. Mice received an injection of methyl scopolamine nitrate (1 mg/kg intraperitoneally dissolved in sterile Ringer's solution). Fifteen minutes later, mice received an injection of pilocarpine (380 mg/kg intraperitoneally dissolved in sterile Ringer's solution [ Table 1, groups D-F]; no SE controls received sterile Ringer's [groups A-C]). Immediately following pilocarpine administration, animal behavior was monitored continuously for seizure activity. Onset of SE was defined by the appearance of multiple class V (tonic/clonic) seizures 13 , followed by continuous behavioral seizure activity. In the event an animal did not develop SE within 60 minutes following pilocarpine administration, a second injection of pilocarpine (190 mg/kg) was administered. Three hours after the onset of SE mice received two injections of diazepam spaced 15 minutes apart (10 mg/kg subcutaneously). Mice were then returned to normal housing conditions. Animal health was monitored closely in the days following SE. Sterile Ringer's solution was provided s.c. as needed to restore mice to pretreatment weight. Following pilocarpine (or control) treatment mice were assigned to one of six treatment groups, as shown in Table 1. EEG monitoring and DT administration. Seven to twelve weeks following SE mice underwent electrode implantation surgery 40 . Three mice that did not receive pilocarpine were also implanted for EEG monitoring. Seizures were not observed in any mice that were not previously treated with pilocarpine. Mice were anesthetized with 4.0% isoflurane in 1.5% oxygen, transferred to a stereotaxic frame, and kept sedated with 0.5-1.0% isoflurane. The surgical site was shaved and cleaned with Dermachlor (2.0% chlorhexidine) and 70% ethanol. Lidocaine (50 μL) was administered subcutaneously at the surgical site. A small incision was made above the skull and two burr holes were drilled through the skull, but leaving the dura intact. Holes for electrodes were placed at the following coordinates: 1.5 mm anterior of lambda and 1.5 mm left and right of the sagittal suture. Three additional holes were placed for skull screws (two at the base of the skull, 1.5 mm posterior to lambda and 1.5 mm left and right of center, and the third near the front of the skull, 1.5 mm posterior to bregma and 2.0 mm right of the sagittal suture). The grounding and recording electrodes from a single channel wireless EEG transmitter (TA11ETA-F10, Data Sciences International) were placed beneath the skull and above the dura, and the transmitter body was inserted into a pocket created underneath the skin of the animals' torso. Dental cement was applied to secure the transmitter leads and the surgical wound was sutured. Mice were housed in single cages with standard bedding and food and water ad libitum. Cages were placed on wireless receiver plates (RPC1, Data Sciences International) and continuous video-EEG monitoring was initiated. Animal behavior was monitored to ensure a complete recovery. Four weeks into the recording period mice received five, once-daily injections of diphtheria toxin (DT). DT was dissolved in nuclease-free sterile water and injected intraperitoneally at a dose range of 30-50 μg/kg. DT potency was found to vary among lots, so effective doses were established empirically (data not shown). 
Final DT doses were statistically equivalent among epileptic animals (p = 0.368, Mann-Whitney rank sum test). A subset of control mice received sterile Ringer's solution rather than DT. Following DT treatment, mice were video-EEG monitored for an additional three to four weeks. EEG analysis. Video-EEG data was analyzed by a reviewer blind to treatment group using Neuroscore software (version 2.1.0). EEG data was analyzed for seizure frequency, severity, and duration. A seizure was identified by a sudden increase in voltage amplitude (at least 2x baseline), with a progressive change in firing frequency or amplitude, and a minimum duration of 10 seconds. Seizure cessation was marked when the recording returned to baseline, although postictal theta was not included as part of the seizure for duration measurements. Behavioral seizure severity was determined by video analysis of each electrographic seizure using the Racine scale 13 . Seizure severity was only scored when there was a clear video image of the mouse experiencing the electrographic seizure. Quantification of doublecortin-expressing, and DTr + Prox1-expressing granule cells. Sections immunostained for doublecortin or double-immunostained for Prox1 + DTr were imaged using a 3024 Nikon A1Rsi inverted microscope with a 60x water objective (NA = 1.27, resolution = 0.410 μm/pixel). Confocal image "stacks" through the z-depth were collected at 1 μm increments through 20 μm of tissue. Multiple image stacks were montaged to capture the entire x-y dimensions of the dentate gyrus (including the hilus) from two hemispheres (left and right) per mouse. Data collected from each hemisphere was averaged for each animal prior to statistical analysis. Images of Prox1 + DTr immunostaining were collected simultaneously. To determine the number of doublecortin-expressing and Prox1 + DTr-expressing granule cells present within a 20 μm thick section of the dentate gyrus, confocal image stacks were imported into Imaris software (version 7.7.2). Immunopostive cells were quantified using an automated detection method which identifies and counts fluorescent "spots". Minimum fluorescent diameter was set to 5.0 μm and minimum intensity threshold was adjusted to optimize cell detection. All counts were completed using an optical dissector approach, excluding all cell bodies truncated at the upper surface of the tissue to eliminate bias due to changes in cell size or shape 41,42 . The automated cell counts were then reviewed to remove false positives and identify false negatives. Counts are expressed as number of granule cells per dentate gyrus section (encompassing the entire x-y dimensions of a dentate section and 20 µm through the z-depth). The location of each identified dentate granule cell was subsequently analyzed to determine the number of hilar ectopic granule cells present in each dentate section. Prox1-immunoreactive cells located within the hilus that were a minimum of 20 μm away from the hilar-granule cell body layer border were considered ectopic. To determine the percentage of Prox1 immunoreactive granule cells which were co-labeled with DTr, images of Prox1 + DTr immunostained sections were cropped in the x and y dimensions to isolate a 400 µm section of the dentate at the midpoint of the upper blade. Prox1 + cells within these samples were then identified and assessed for coexpression of DTr. Percent DTr expression among granule cells was determined using the following formula (DTr and Prox1 coexpressing cells/all Prox1 expressing cells) × 100. 
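The seizure identification criteria given in the EEG analysis section above (an amplitude increase of at least 2x baseline sustained for at least 10 seconds) can be made explicit in a few lines. In the study itself, events were scored by a blinded human reviewer in Neuroscore; the sketch below is only an illustrative automated approximation of the stated criteria, and the windowing choices, sampling rate, and toy trace are assumptions, not part of the authors' procedure.

```python
# Illustrative screen for candidate seizures: 1-s RMS amplitude must exceed
# 2x baseline for at least 10 s. Not the study's actual (manual) scoring.
import numpy as np

def flag_candidate_seizures(eeg, fs, baseline_amplitude, min_duration_s=10.0,
                            window_s=1.0, threshold_factor=2.0):
    """Return (start_s, end_s) intervals where the windowed RMS amplitude
    stays above threshold_factor x baseline for at least min_duration_s."""
    window = int(window_s * fs)
    n_windows = len(eeg) // window
    rms = np.array([np.sqrt(np.mean(eeg[i * window:(i + 1) * window] ** 2))
                    for i in range(n_windows)])
    above = rms > threshold_factor * baseline_amplitude

    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) * window_s >= min_duration_s:
                events.append((start * window_s, i * window_s))
            start = None
    if start is not None and (n_windows - start) * window_s >= min_duration_s:
        events.append((start * window_s, n_windows * window_s))
    return events

# Toy trace: 60 s of low-amplitude background with a 15 s high-amplitude burst.
fs = 250
rng = np.random.default_rng(0)
eeg = 50.0 * rng.standard_normal(60 * fs)    # background, roughly 50 uV RMS
eeg[20 * fs:35 * fs] *= 4.0                  # burst well above 2x baseline
print(flag_candidate_seizures(eeg, fs, baseline_amplitude=50.0))
```

On the toy trace this flags a single interval covering the 15 s burst; events shorter than 10 s or below the 2x amplitude criterion are not reported.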
Mossy fiber sprouting. Sections immunostained for GFP + ZnT3 (approximately 2.4 mm posterior to bregma) were used to quantify mossy fiber sprouting 43 . Sections were imaged using a DMI6000 Leica SP5 inverted microscope with a 63x oil objective (numerical aperture = 1.4). Single images of ZnT3 staining were collected from the midpoint of the upper and lower blades of two hemispheres (resolution = 0.242 µm/pixel). Each image was collected 2-3 μm beneath the surface of the tissue to control for antibody penetration. Images were imported into Neurolucida software (version 11.09) for analysis. ZnT3 immunoreactivity within the inner molecular layer was determined using an automated object detection analysis. Detection parameters were set to an intensity threshold determined by the reviewer to optimize puncta identification. Objects less than 0.5 μm in diameter, the minimum size of granule cell mossy fiber puncta, were excluded. The automated detection was then reviewed by a blinded investigator to identify false negatives and remove false positives. Percent mossy fiber sprouting was calculated as follows: (area of inner molecular layer ZnT3 immunoreactivity/area of inner molecular layer examined) × 100. Percent mossy fiber sprouting from the upper and lower blades was averaged for each animal prior to statistical analysis. Astrocyte and microglia soma area measurements. Sections immunostained for GFAP + Iba1 (approximately 2.7 mm posterior to bregma) were imaged using the Leica SP5 system. The soma areas of GFAPand Iba1-immunostained cells were assessed from confocal image stacks collected from a sample of the dentate hilus from the left and right hemispheres of one section per mouse (20 µm depth, 1 μm step, resolution = 0.484 μm/pixel) using Neurolucida software. Ten GFAP-immunoreactive and ten Iba1-immunoreactive cells per hemisphere were randomly selected for quantification (for a total of 20 cells of each type per animal). Only somas entirely contained within the image stack were selected for analysis. Maximum soma profile area was determined for each cell. Area measurements within each cell type were averaged for each animal prior to statistical analysis. Statistical analysis. Statistical analyses were conducted using Sigma Plot software (version 13.0). Values presented are least square means ± standard error of the mean (SEM), or means ± SEM. No statistical differences were found between male and female mice for any of the parameters presented (Student's t-test, data not shown), so data were binned. No significant differences between healthy control (non-epileptic), DTr-negative mice receiving DT (Table 1, Group A) and healthy control, DTr-positive mice receiving Ringer's (Group B) were found for the following measures using Student's t-test (data not shown): number of granule cells per dentate, number of ectopic cells per dentate, percentage of ectopically located granule cells and percent mossy fiber sprouting, so data were binned. Similarly, no differences between pilocarpine-treated, DTr-negative mice receiving DT (Group D) and pilocarpine-treated, DTr-positive mice receiving Ringer's (Group E) were found for seizure frequency, seizure severity, seizure duration, number of granule cells per dentate, number of ectopic cells per dentate, percentage of ectopically located granule cells and percent mossy fiber sprouting, so data were also binned. Statistical differences among groups were observed for microglial and astrocyte soma area, so data for all six groups are presented. 
Primary measures in the study, however, used binned datasets to generate the following groups for statistical analysis: 1) Healthy-control, 2) Healthy-ablation, 3) SE-control and 4) SE-ablation. Group details are presented in Table 1. EEG data was segregated into "pre-treatment" and "post-treatment" periods. Pre-treatment encompassed the time period up to the first DT injection, while the post-treatment period began seven days after the first DT injection. The DT-treatment period, during which cell death is occurring, was excluded from most analyses; although this data is shown in Fig. 2c. Two SE-ablation mice did not exhibit any seizures in the post-treatment recording period that coincided with a clear video image, therefore they were not included in the behavioral seizure analysis and the subsequent data was generated from six SE-ablation mice and nine SE-control mice. SE-ablation mice were recorded for an average of 26.3 ± 2.7 days during the pre-treatment period and 25.3 ± 4.0 days during the post-treatment period. SE-control mice were recorded for 27.4 ± 2.8 pre-treatment days and 27.7 ± 2.3 post-treatment days (recording times did not differ between groups; pretreatment p = 0.386; post treatment p = 0.143, t-test). SE-ablation mice were implanted on average 74.5 ± 10.7 days following SE, while SE-control mice were implanted 83.0 ± 10.4 following SE (p = 0.118, t-test). SE-ablation mice were sacrificed 136.0 ± 20.4 days following SE, and SE-control mice were sacrificed 149.6 ± 12.4 days following SE (p = 0.114, t-test). Figure preparation. All images were prepared using Adobe Photoshop Elements 12. Brightness and contrast were adjusted to optimize cellular detail. Identical changes were made to figures meant for comparison. Graphs were generated using Sigma Plot software (version 13.0). Data availability. The datasets generated and/or analyzed for the current study are available from the corresponding author upon reasonable request. Significance Statement. Here, we demonstrate that targeted ablation of peri-insult generated granule cells has anti-epileptogenic effects when applied months after disease onset. Our findings provide compelling new evidence that abnormal granule cells play a fundamental and protracted role in the epileptogenic process, rather than a transient and more limited role in modulating the acute effects of the initial injury. Our findings also represent the first proof-of-concept demonstration that cell ablation could be therapeutic in patients who have already developed the disease.
2023-02-17T14:40:08.641Z
2017-12-01T00:00:00.000
{ "year": 2017, "sha1": "4a770449060ca26fe42db13d1d7744ce2018e011", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-18237-6.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "4a770449060ca26fe42db13d1d7744ce2018e011", "s2fieldsofstudy": [ "Biology", "Medicine", "Psychology" ], "extfieldsofstudy": [] }
2065047
pes2o/s2orc
v3-fos-license
Torque Ripple Minimization in a Switched Reluctance Drive by Neuro-Fuzzy Compensation

A simple power electronic drive circuit and the fault tolerance of the converter are specific advantages of SRM drives, but excessive torque ripple has limited their use to special applications. It is well known that adequately controlling the current shape can minimize the torque ripple. This paper presents a new method for shaping the motor currents to minimize the torque ripple, using a neuro-fuzzy compensator. In the proposed method, a compensating signal is added to the output of a PI controller in a current-regulated speed control loop. Numerical results are presented, with an analysis of the effects of changing the form of the membership function of the neuro-fuzzy compensator.

I. INTRODUCTION

As the SR machine presents strong nonlinear characteristics, fuzzy logic and neural network methods are well suited for its control, and many authors have proposed the dynamic control of SR drives using these artificial-intelligence-based methods [2,4]. Fuzzy logic control has been implemented with success by the authors in [1] and has been shown to be effective for SR speed control in applications where some degree of torque ripple is tolerated, as is the case in many industrial applications. Nevertheless, in servo control applications, or when smooth control is required at low speeds, the elimination of the torque ripple becomes the main issue for an acceptable control strategy. In this case, the fuzzy logic controller alone is not enough, because the torque ripple changes with the SR motor speed and load. In this context, it is advantageous to include a learning mechanism in the SR control so that it can adapt itself to new dynamic conditions. This paper thus presents a new methodology to control an SR drive, consisting of a PI speed controller supervised by a neuro-fuzzy block responsible for torque ripple reduction.

II. TORQUE PULSATION

With a PI-like control alone, it is not possible to obtain a ripple-free output speed over the whole speed range, because this would also require a ripple-free output torque. A constant current reference can produce an oscillating torque, as shown in Fig. 2. At lower speeds, it is more convenient to compensate for the torque pulsations through phase current waveshaping. In this case, the current reference signal should vary as a function of position, speed and load torque in order to produce the desired ripple compensation. In fact, the optimum compensating signal will be a highly non-linear function of position, speed and load. Several works [3,4] have been published which use different strategies to produce a compensating signal. In this work, a novel SR ripple compensation method is proposed, based on [5,6], which uses a self-tuning neuro-fuzzy compensator. The proposed compensation scheme is described in the next section.

The PI controller alone produces an essentially constant current reference, which results in significant torque ripple, as shown in Fig. 2. A compensating signal is therefore added to the PI output, and the resulting current signal, I_comp, is used as a compensated reference signal for the current-controlled SR drive converter. The compensating signal should then be adjusted in order to produce a ripple-free output torque. The compensating signal is adjusted iteratively, through a neuro-fuzzy learning algorithm, where the training error information is derived from some internal variable of the SR drive system. In the simulation tests, the torque ripple itself has been used as the training error information.
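Before turning to the practical caveats of this choice of error signal, the adjust-and-add loop just described can be summarised in a short sketch. This is only an illustration of the idea, not the authors' hybrid learning algorithm: the angle-binned lookup table, the learning rate, and the sign convention (torque assumed to increase with phase current) are assumptions introduced here.

```python
import numpy as np

N_ANGLE_BINS = 64
comp_table = np.zeros(N_ANGLE_BINS)       # delta_I_comp as a function of rotor angle

def compensated_reference(i_pi, angle_bin):
    """Reference sent to the current-controlled converter: I_comp = I_PI + delta."""
    return i_pi + comp_table[angle_bin]

def training_iteration(ripple_per_bin, learning_rate=0.05):
    """One learning pass: the measured torque ripple (dc component removed)
    acts as the error signal for each rotor-angle bin."""
    global comp_table
    error = ripple_per_bin - np.mean(ripple_per_bin)   # keep only the ripple
    comp_table -= learning_rate * error                # reduce torque where it overshoots
    comp_table -= np.mean(comp_table)                  # keep the compensation zero-mean
```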
However, this approach would not be very practical for on-line implementation in a real system, since the dynamic torque is a variable that is difficult to measure. For continuous on-line training, other, more easily measured variables would be more appropriate.

III. PROPOSED METHOD

The training procedure consists of adjusting the rule consequents by a hybrid training algorithm, which combines back-propagation and least-squares minimization. At each learning iteration, the dc component is removed from the compensating signal, so that the ripple compensator does not try to change the mean value of the output torque. As a result, when the control system operates in steady state after training, the PI controller produces a constant output signal, while the neuro-fuzzy compensator produces a zero-mean compensating current reference, the ΔI_comp signal.

Training data are obtained from simulations of steady-state operation of the complete SR drive system. At each learning iteration, the dc component is removed from the torque signal, so that just the ripple remains. This torque ripple data is then tabulated against the mean value of the PI output reference current and against the rotor angular position. The data set is passed to the training algorithm, so that the torque ripple is interpreted as error information for each current-angle pair. The output of the neuro-fuzzy compensator is then readjusted to reduce the error (which is in fact the torque ripple), and this process is repeated until some minimum torque ripple limit is reached.

A. Without Compensation

The SR motor is first controlled using only the PI regulator, without compensation, at full-load torque (4 Nm) and 500 rpm. Fig. 2 shows the torque signal and Fig. 3 shows its harmonic spectrum. With a 6/4 SRM, the converter produces 12 current pulses per rotor turn, so the torque pulsations occur at a frequency 12 times higher than the frequency of rotation. For this reason, the harmonic spectrum shown in Fig. 3 is dominated by the 12th harmonic of the rotation frequency.

After ten training iterations, Fig. 4 and Fig. 5 show the output torque waveform and its harmonic content for a compensated current reference. It can be seen that the total harmonic content is very low, and the 12th harmonic is lower than 0.5% of the mean torque. After 10 training iterations, the compensated current reference produces phase current pulses like those shown in Fig. 6. As expected, the current values are higher at the beginning and at the end of the current pulse. This pulse shape is consistent with the torque characteristics of the SR motor, which produces less torque at the beginning of pole overlapping and just before the aligned position.

B. Compensation Sensitivity

Fuzzy systems have their performance significantly affected by the shape used for their membership functions. The previous results used triangular functions. The compensator performance was tested for three other membership functions: bell, and two gaussian shapes named open and normal gaussian. All results were obtained again with full-load torque, 500 rpm, and five fuzzy sets. The results showed that the neuro-fuzzy compensator achieved its best performance using a bell-shaped function. For comparison, Fig. 7(a) shows a zoom of the harmonic content obtained for triangular functions (Fig. 5), Fig. 7(b) shows the harmonic content using the bell functions, and Fig. 7(c) shows the harmonic content using the gaussian functions. With the bell functions, all 12th-harmonic components were decreased, and not only the fundamental one, as occurred with the other functions.
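For readers unfamiliar with the shapes compared in this sensitivity study, the sketch below shows how triangular, generalized-bell and gaussian membership functions are typically defined. The parameterization and the five evenly spread centres are illustrative assumptions; the paper does not give the exact parameter values used.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b (a < b < c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def gbell(x, a, b, c):
    """Generalized bell membership: 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def gaussian(x, c, sigma):
    """Gaussian membership centred at c with width sigma."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

# Five fuzzy sets per input, as in the simulations: centres spread uniformly
# over a normalized input range [0, 1] (illustrative choice).
centers = np.linspace(0.0, 1.0, 5)
x = np.linspace(0.0, 1.0, 201)
memberships = np.stack([gaussian(x, c, sigma=0.12) for c in centers])
```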
VI. CONCLUSIONS

A neuro-fuzzy compensating mechanism for ripple reduction in SR motors was investigated. The results showed the potential of incorporating a compensating signal in the current waveform to minimize the torque ripple. The effect of changing the form of the membership function was also investigated, revealing that a bell-shaped function produces better ripple reduction across the whole harmonic content. The next steps are the use of this concept in an experimental drive and the incorporation of another signal to be used for training.
2000-09-30T08:31:16.000Z
2000-04-09T00:00:00.000
{ "year": 2000, "sha1": "62dbbeca940c8c8585286fb07d91c486f9a77764", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cs/0010003", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "a0cbaf8469110e3131c93eb87722a0f6e81e8ea0", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
264332012
pes2o/s2orc
v3-fos-license
Multiple Mobile Sinks for Quality of Service Improvement in Large-Scale Wireless Sensor Networks The involvement of wireless sensor networks in large-scale real-time applications is exponentially growing. These applications can range from hazardous area supervision to military applications. In such critical contexts, the simultaneous improvement of the quality of service and the network lifetime represents a big challenge. To meet these requirements, using multiple mobile sinks can be a key solution to accommodate the variations that may affect the network. Recent studies were based on predefined mobility models for sinks and relied on multi-hop routing techniques. Besides, most of these studies focused only on improving energy consumption without considering QoS metrics. In this paper, multiple mobile sinks with random mobile models are used to establish a tradeoff between power consumption and the quality of service. The simulation results show that using hierarchical data routing with random mobile sinks represents an efficient method to balance the distribution of the energy levels of nodes and to reduce the overall power consumption. Moreover, it is proven that the proposed routing methods allow for minimizing the latency of the transmitted data, increasing the reliability, and improving the throughput of the received data compared to recent works, which are based on predefined trajectories of mobile sinks and multi-hop architectures. Introduction WSN (Wireless Sensor Networks) is a special case of Ad hoc networks [1], broadly used in various applications such as environment monitoring, object tracking, military surveillance, traffic control, healthcare, etc.A WSN is a collection of large numbers of sensor nodes (SN) distributed over a geographic area to monitor certain phenomena.Each sensor node is limited in processing capability, wireless bandwidth, battery, and memory capacity.Mostly, it is difficult, even impossible, to recharge or change the battery, making energy consumption a significant constraint of WSNs lifetime [1,2]. WSNs have many advantages, and they are widely used due to their low cost, wireless communication capability, energy efficiency, and scalability, and they are suitable for Real-time monitoring applications. The SNs can sense, process, and transmit data either via multi-hop transmission or directly to a base station (BS).The BS sends the collected data to a remote-control station through radio networks or satellite connections.WSNs have unique features like autonomy, self-organization, and Ad-hoc infrastructure, which makes them ideal for healthcare, smart cities, and environmental surveillance [2,3]. Since wireless communication requires significantly more power than other tasks, energy conservation is important while designing routing protocols for WSNs.The clustering approach is one of the best techniques for reducing the energy consumption of nodes.Therefore, instead of each node sending its collected data individually, first, sensor nodes organize themselves into clusters, and then an elected cluster head (CH) sends all aggregate data to the sink. 
Clustering is used in WSNs for several important reasons, as it offers several benefits that contribute to the efficient and effective operation of these networks, such as: • Energy Efficiency: As sensor nodes, WSNs are usually powered by batteries with energy resources clustering, which plays a role in evenly distributing energy-intensive tasks like data transmission and aggregation among the nodes.By assigning nodes as Cluster Heads (CHs) for collecting and aggregating data, non-CH nodes can conserve energy and extend the network lifespan by operating in low-power modes for longer periods. • Reduced Communication Overhead: In clustered WSNs, the sensor nodes within a cluster typically transmit collected data to their respective CH.The CH, then.Forwards the data to the base station or sink node.This approach reduces communication distances within the network since data does not need to be transmitted to the base station.Consequently, reduced communication distances lead to energy consumption and alleviate network congestion. • Scalability: With clustered WSNs, new nodes can easily join existing clusters as the network expands while CHs efficiently route data towards the Base Station (BS).This allows for network expansion without impacting its performance. • Load Balancing: Cluster Heads are vital in distributing data collection tasks among sensor nodes within their cluster.This ensures that no single node becomes overwhelmed with the responsibilities of gathering data.This load-balancing technique plays a role in avoiding failures of nodes caused by excessive energy usage.Additionally, clustering enhances the fusion of data, allowing for aggregation at the CH level. As a result, redundant information collected by nodes is minimized, leading to the transmission of precise and concise data to the base station. • Prolonged Network Lifetime: The combination of reduced energy consumption, efficient communication, and optimized data routing achieved through clustering significantly extends the overall network lifetime. In large-scale WSNs, coverage is one of the most important QoS metrics, and it refers to how well the SNs in the network can monitor or sense the region of interest.Coverage directly impacts the ability of a WSN to fulfill its intended purpose, which could be environmental monitoring, surveillance, or any other sensitive application.In such a context, the battery replacement of large amounts of nodes is a labor-consuming work.Although the life of WSNs can be prolonged through energy-harvesting (EH) technology, it is necessary to design an energy-efficient routing protocol for energy harvesting, as an important part of nodes would be unavailable in the energy harvesting phase.In this phase, a certain number of unavailable nodes would cause a coverage hole, affecting the WSN's monitoring function of the target environment. 
In [4], authors propose an adaptive hierarchical clustering-based routing protocol for EH-WSNs (HCEH-UC) to achieve uninterrupted coverage of the target region through the distributed adjustment of the data transmission.The proposal balances the energy consumption of nodes.Then, a distributed alternation of working modes is proposed to adaptively control the number of nodes in the energy-harvesting mode, which could lead to uninterrupted target coverage.The simulation results show that the proposed HCEH-UC protocol can prolong the maximal lifetime coverage of WSNs compared with the conventional routing protocol and achieve uninterrupted target coverage using energyharvesting technology. Despite this, numerous challenges such as Quality of Services (QoS), efficiency of used energy, mobility, and lifetime restrict the use of WSN.The QoS and energy consumption are relevant metrics used to assess the quality of paths in any designed routing protocol in WSNs. The Quality of Service (QoS) is defined by the International Telecommunication Union regulations (ITU-T Supp.9 of E.800 Series) [5] as the totality of characteristics of a telecommunications service that bear on its ability to satisfy stated and implied needs of the user of the service.Also, the QoE (Quality of Experience) is defined as the degree of delight or annoyance of the user of an application or service [6]. QoS in WSN refers to the ability of the network to provide certain guarantees regarding latency, throughput, packet loss, and reliability for different types of traffic.Since WSNs are typically deployed in harsh environments where resources are limited, providing QoS is a hard task to resolve.However, it is essential to meet the application requirements, such as monitoring critical infrastructure, conserving energy, and collecting data. QoS is a very challenging subject, one of the big defis is how to guarantee QoS.In [7], a new method for QoE parameters prediction in an overall telecommunication system consisting of users and a telecommunication network, based on QoS indicators' values prediction, are overviewed.The presented results show the advantages of the proposed overall model normalization techniques towards adequate prediction and presentation of QoE in conjunction with QoS in the overall telecommunication systems. Besides, the small battery energy is a major constraint for WSNs.As these nodes are typically deployed in remote and inaccessible locations, recharging them is not feasible.Therefore, the energy resources of the sensors must be used efficiently to prolong the network's lifespan [8,9].If the network's topology is not variable and the sink remains fixed, the energy distribution will be increasingly uneven over time.The network's longevity is a crucial evaluation standard used to assess its performance, and it is typically measured by determining the period when the first node dies.Over the years, numerous routing protocols and algorithms have been suggested for energy-efficient WSNs, but many of these works suppose that the sink is static [10][11][12][13].In routing protocols based on multi-hop communications, nodes close to the sink play a crucial role in transmitting data to other sensors.As a result, their energy resources tend to deplete faster, resulting in the hot spots issue [14]. 
Routing protocols using clustering help sensors sense and reassemble data from the environment and transmit it to the sink with minimum costs.By grouping nodes, clustering algorithms enhanced the performance of nodes and their ability to send data.In clusterbased routing protocols, even cluster heads (CHs) located far away from the sink are more likely to exhaust their battery reserves than those nearby since the needed hops for sending data increase with the square of the distance [15,16].Node deaths can disrupt the network topology, reduce sensing coverage, and potentially result in a network partition, isolation of nodes, and loss of data.Additionally, in real-time WSN applications, such as military zone monitoring, enemy surveillance, natural disaster tracking (e.g., seismic activities), and exploration of inaccessible areas, stringent quality-of-service constraints are essential.These constraints include high data reliability, throughput network, low data delivery latency, and high communication efficiency, apart from efficient energy usage. Clustering technology is crucial in reducing the consumed power by attributing sensors to clusters based on specific rules.The cluster features a set of CHs that act as relay nodes for other members within the group.Clustering simplifies the network topology and mitigates the need for sensor-sink communications.Moreover, CHs can leverage data fusion techniques to eliminate repetitive data, thereby lessening the CHs burden.A prominent example of a routing protocol which uses clustering is the "low-energy adaptive clustering algorithm" (LEACH).However, selecting CH in the LEACH is not optimal, and research work is needed to refine the protocol.Moreover, mobile WSN (MWSN) is a novel variant of networks used in dynamic and mobile environments due to its capacity for self-configuration.In large WSNs, the network can be logically portioned into sub-networks.Each one has its mobile sink.Using mobile sinks is a highly effective approach for managing the imbalanced energy of WSNs.WSNs supporting mobile sinks typically deploy intelligent vehicles or robots to carry the sink, which can be moved freely around the sensing field.The implementation of mobile sinks was suggested and evaluated to address the imbalanced energy problem in WSNs [17][18][19][20][21][22][23]. Mobile sinks controlling region of interest (RoI) gather information from static sensors in one or multiple hops.A significant advantage of multiple mobile sinks is that they can distribute the communication and computation load across the network, reducing the burden on individual sensor nodes.This can significantly enhance the network lifetime by mitigating the effects of energy depletion in specific nodes.Moreover, multiple mobile sinks can help decrease data collection and transmission latency.By deploying sinks in different parts of the network, the time taken to collect and transmit data from distant nodes can be significantly reduced.However, there are some challenges in deploying mobile and multiple sinks.One such challenge is finding an optimal placement of sinks to cover the entire network efficiently.Also, the synchronization of mobile and multiple sinks can be complex.In conclusion, deploying multiple mobile sinks in a WSN is a promising approach for achieving better efficiency, network lifetime, and data collection, but it requires careful deployment and synchronization to realize its full benefits. 
The current research aims to investigate how energy consumption and QoS metrics are enhanced by multiple mobile sinks that use a cluster-based routing protocol.We will examine four different mobility models on these metrics, focusing on identifying the number and cost of deploying mobile sinks.Our study aims to provide insights into how network performance can be enhanced while balancing its investment costs. The next sections are as follows: Section 2 introduces previous works and compares them with our contribution.Section 3 presents the cluster-based routing protocols used in this study.Section 4 discusses the suggested sink mobility models.Section 5 highlights the findings of the simulations, along with analysis and discussions.Section 6 summarizes the paper and its perspectives. Review of the Literature This section discusses the studies utilizing stationary and movable WSN sinks to reduce power consumption and extend the network's operating life.We will also discuss the QoS challenges in WSNs applications in the literature. In [24], a scheme for maximizing the longevity of WSNs utilizing a movable sink was suggested to manage the delays in delivering data.Each node has a range of tolerance for delay, within which it does not need to instantly transmit data when it is available.Instead, the node can keep data in storage for a while and transfer it at the appropriate time, i.e., when the mobile sink is at an optimal location to lengthen the network's useful functioning duration. Moving sink nodes is among the viable ways used to extend the lifetime of the network.As pointed out in [25], this technique can significantly improve network durability.In [26,27], the authors delve further into using numerous mobile sinks to enhance energy efficiency and network longevity.In another study [28], a joint optimization assessment to optimize the network lifetime using mobile sinks is performed by determining Koptimal trajectories and scheduling of sojourn time per position while abiding by the given constraints by sensors and mobile sinks. Hence, mobility is a prevalent approach for mitigating hotspots' issues and extending the lifetime of multi-hop WSN routing, as highlighted in [29].Other studies, such as [30], highlight the impact of using mobile sinks on power usage and longevity by selecting optimal sink node numbers and parking positions.In [31], a network restructuring process is proposed by modifying the adjacent nodes of a sink to optimize the lifetime and balance the power usage among sinks. By ensuring that the total energy of the sink is below a specific threshold, only a set of selected nodes are connected, which enhances the network lifespan.Research in [32] highlights the benefits of using mobile sinks to prolong network life by randomly deploying nodes in a square area or pre-defined rectangular or hexagonal grids.The hexagonal grid deployment strategy is particularly effective since it maintains coverage and connectivity. In the previously discussed studies, authors highlight only the issues of energy consumption with multi-hop routing, considering some fixed or mobile sinks to reduce energy and improve network lifetime.However, do not give importance to ever-increasing QoS criteria, especially in real-time constraint applications.Since such routing already suffers from an exhilarating node energy consumption and a huge data delivery delay due to the transfer of data between nodes until reaching the BS in multi-hop traveling. 
Our study investigates a more critical problem in the real-time WSN context; we use multiple mobile sinks to improve energy consumption and QoS metrics.Existing clusterbased routing already guarantees better energy conservation and fast data delivery [15][16][17][18][19]. We are also trying to find the optimal number of mobile wells that maintain good power and QoS performance while considering the extra costs of mobile sinks deploying. The weaknesses of the work based on multi-hop routing are mainly the border nodes ensuring routing data of their affiliated sensors.These border nodes quickly lose their energies and die, creating network partitioning and a huge loss of relevant data, especially in a military context or vital monitoring. For that, we decided to work with hierarchical routing, which showed its performance concerning power usage and latency, representing the main weakness of mobility models relying on predetermined and non-adaptive trajectories. This type of mobility lacks the flexibility of adaptation, especially in a variable, stochastic and unpredictable military context.Suppose a final node loses energy, dies, and goes out of service.In that case, the mobile sink continues its regular trajectory, and all the nodes that transmit their data through it become unable to deliver their data to the destination.Therefore, we easily fall into the phenomenon of black holes, and the network becomes partitioned, which is unacceptable in critical applications.While random mobility models do not follow a trajectory, remain flexible, and adapt quickly to any change of context. Recently, there has been a significant interest in the development of cluster-based and power-efficient mobile protocols for routing.One such protocol, the "Energy-efficient Cluster-based Dynamic Routing Algorithm" (ECDRA) [33], involves deploying a mobile sink attached to a sensor that rotates circularly to dynamically change the topology of the network in response to the sink's position.However, LEACH [14] is a well-known hierarchical routing protocol differentiating CHs from normal nodes (ONs).ONs transfer their data to the appropriate CHs, which collectively transmit the data to BS.While LEACH is more effective than classical routing protocols at increasing network lifetime, the random selection of CHs can result in uneven distribution and flow between the BS and the CHs, leading to higher energy consumption. In [34], a framework that enhances energy efficiency and the QoS for WSN is presented.The introduced hybrid technique utilizes a fitness function that considers key performance indicators like the number of neighbors, the set of sensors for each cluster, and how long each node remains the CH.This fitness is integrated with a probability threshold function to influence the procedure of selecting CHs.Compared to previous homogeneous protocols such as LEACH, the proposed method maintains optimal CH selection more stably throughout network operation.Furthermore, compared with heterogeneous protocols like "Developed Distributive Energy-Efficient Clustering and Enhanced Developed Distributive Energy-Efficient Clustering", the proposed protocol displays superior performance regarding the WSN lifetime, power usage, and throughput.However, this suggested routing algorithm should offer more privacy and security features.Moreover, to prove its consistency, this paradigm should be tested in a real-world context. 
The authors of [35] introduced an evaluation technique to compare the "Secure Mobile Sink Node Location Dynamic Routing Protocol" (SMSNDRP) with another algorithm named "routing protocol with K-means for forming Data Gathered Path" (KM-DGP).The application of these two algorithms was on networks with Mobile Sinks of various sizes.QoS and power usage are used to assess the quality of routes and energy consumption patterns of both routing protocols on small (with single and multiple mobile Sinks) and large networks.The proposed evaluation technique is implemented on NS3 using five different scenarios.The findings suggest that compared with KM-DGP, SMSNDRP shows improved network energy consumption on small, single networks.In contrast, for larger networks with sixteen mobile Sink nodes or more, KM-DGP displays comparatively better network energy consumption than SMSNDRP with four mobile sink nodes. The study in [36] introduces a new high-performance communication protocol for routing packets using multiple mobile nodes.The protocol relies on four main features: assessment of packet delays, independent control of link quality and choosing active neighbors of the nodes.Simulation studies on this protocol show that the latter improves the packet forwarding rates, reduces power usage, and shortens average delays. The authors in [37] presented a clustering paradigm for MWSN.Their technique involves introducing super cluster heads (SCH) that are static and efficient sensors within the MWSN to gather CH data from CHs. Combining SCHs with the "Minimum Transmission Energy" (MTE) protocol reduces the distance required to transfer data from CH to BS, ultimately improving energy efficiency.Under this approach, data is first transmitted from CH to SCH and then forwarded to BS.This new technique promises to enhance the network performance further. Another study in [38] introduces a new energy-efficient routing system that employs clustering and sink mobility techniques.The authors propose a two-step approach that involves classifying the region of interest (RoI) into sectors and selecting a CH for each sector based on the weight of each node member.Afterwards, each member calculates the power usage of numerous routing paths and selects the most energy-efficient option.Finally, CHs are linked in a chain via a greedy strategy for inter-cluster connectivity.The findings show, as demonstrated through simulations that this new routing strategy is better than similar approaches, like "Cluster-Chain Mobile Agent Routing" (CCMAR) and "Energy-efficient Cluster-based Dynamic Routing Algorithm" (ECDRA). According to a recent study [39], an auto-schedule routing algorithm relying on IoT connections was introduced to enhance the power usage of Software-defined networking (SDN) controlled embedded networks.The algorithm starts by constructing the "Neighbor Distance Discovery Protocol", which identifies the "minimum depletion path" by locating the closest node to the BS.Next, the algorithm executes the "Multipath Cooperative Self-Scheduling Protocol" to establish a non-traffic route.Additionally, the algorithm involves the routing communications of each IoT object in building the routing medium.It computes the average packet loss rate, node response rate, energy consumption, sensor absorption rate, and transmission delay.Finally, the algorithm employs the "Lifetime Duty Cycled Energy Efficient Protocol" to determine the network threshold latency and energy limits. 
The research discussed in [40] explores the latest routing algorithms used in sensor networks and proposes strategies for their development.This study highlights recent advances in the strategy used to reduce the energy required for information transmission.One key concern for IoT, which has gained much attention, is the energy requirements to extend the lifespan of IoT networks.One of the approaches that has gained traction is the design of routing protocols that minimize energy consumption during data transmission. In recent studies, optimization paradigms have been utilized to address the energy issues in WSNs by means of an energy-efficient multi-objective criterion as follows: The proposed clustering and optimization-based routing approach in [41] is used to improve the power efficiency and prolong the lifespan.The selection of CH is achieved in parallel with the minimization of power usage, which effectively reduces dead sensor nodes.The use of the "Sailfish optimizer algorithm" for optimal path selection also enhances the energy efficiency of data transmission between CH and BS.However, the study does not consider the node mobility in the proposed approach.In WSNs, nodes can move frequently due to environmental conditions or other factors.Hence, the network's topology varies, which may affect the performance of routing algorithms.Future research could address this limitation by incorporating mobility models into the proposed approach to improve its adaptability to dynamic network conditions. Moreover, some studies have attempted to enhance the network power usage and its lifespan via various optimization algorithms such as the PSO algorithm [42], bio-inspired ant colony [43], etc.Another research [44] introduced a hybrid ACO-PSO routing paradigm that employs mobile sinks to reduce overall power usage. Hence, the research gap can be summarized as follows: Despite the potential benefits of using multiple mobile sinks in WSNs, research gaps should be addressed in this domain.One of the significant gaps is the establishment of performant and robust routing paradigms for multiple mobile sinks since routing protocols determine the efficiency of WSNs.The challenge of multiple mobile sinks is to design a routing protocol to handle the changing positions of sinks and ensure efficient data delivery.Current routing protocols used for multiple mobile sinks are based on centralized approaches, which can lead to scalability issues and network congestion.Another research gap is related to the synchronization of mobile multiple sinks.When multiple sinks move in the RoI, it can be challenging to ensure that they are synchronized in terms of their locations and data collection schedules.Synchronization is essential to avoid collisions and ensure efficient data collection. Furthermore, the deployment methodology of multiple mobile sinks is another area where research is needed.Identifying an optimal number of mobile sinks, their placement, and their trajectories requires sophisticated algorithms and optimization techniques.Overall, the research gaps regarding using multiple mobile sinks in WSNs include establishing scalable and high-performance routing protocols, synchronization techniques, and optimal deployment methods that can enable efficient and reliable data collection in large-scale WSNs. 
QoS in WSNs has been an interesting research topic in recent years.Many WSN real-time-based applications require the support of QoS.However, the development of sensor networks needs to consider various factors such as fault Tolerance, resource allocation, adaptive routing, data reliability, Real-time communication, scalability, and energy efficiency [45,46]. Addressing these QoS challenges in WSNs often involves a combination of hardware and software solutions, including efficient protocols, energy-efficient algorithms, and adaptive strategies suitable to the specific application requirements.Researchers continue to develop innovative approaches to overcome these challenges and enhance the performance of WSNs in various domains [47,48]. Data aggregation is a method to effectively reduce the data transmission volume and improve network lifetime.However, the data waiting for processing in the queue are subject to an extra delay.In this paper [47], the authors propose an Adaptive Aggregation Routing (AAR) scheme to avoid this problem by dynamically changing the forwarding node according to the length of the data queue and balancing the aggregating and datasending load.Simulation results demonstrate that compared with the existing schemes, the proposed scheme reduces the delay by 14.91%, improves the lifetime by 30.91%, and increases energy efficiency by 76.40%. Coverage is a fundamental QoS metric in WSNs that assesses the ability of the network to adequately monitor a target area.It involves careful node deployment sensing range configuration and may require adaptation strategies to maintain coverage over time.In [48], authors propose an energy-efficient clustering routing protocol based on a high-QoS node deployment with an inter-cluster routing mechanism (EECRP-HQSND-ICRM) in WSNs.The new protocol introduces a node deployment strategy based on twofold coverage.The proposed strategy divides the monitoring area into four small areas centered on the base station (BS), and the CHs are selected in the respective cells to satisfy the uniformity of the CHs distribution.The simulation results show that, compared with the general node deployment strategies, the deployment strategy of the proposed protocol has higher information integrity and validity and lower redundancy. One of the important challenges is the uncertainty of the service of requests.Recently, intuitionistic fuzzy estimations of the QoS have been proposed, such as in this work [49], where three intuitionistic fuzzy characterizations of virtual service devices are specified: intuitionistic fuzzy traffic estimation, intuitionistic fuzzy flow estimation and intuitionistic fuzzy estimation about probability.Six intuitionistic fuzzy estimations of the uncertainty of comprise service devices are proposed.The proposed uncertainty estimations allow for the definition of new Quality of Service (QoS) indicators.They can determine the quality-of-service compositions across a wide range of service systems. 
Simulation Setup

The current research aims to assess the sensors using cluster-based routing protocols. The evaluation will investigate throughput, reliability, packet latency time, and energy consumption with four mobility models. The study compares the results of different sink positions, ranging from one to eight static and mobile sinks. To ensure accuracy and reliability, the simulation will be repeated 100 times for each scenario in different topologies. The Castalia/OMNET++ simulator [50] will be utilized for the simulation process. The "Throughput Test" application is implemented and used for this purpose.

Castalia is a discrete-event simulator specifically designed for WSNs and is built on the OMNeT++ simulation framework. To evaluate the energy consumption and the QoS in a large-scale WSN using Castalia, we follow these steps to set up the simulation, define parameters, and select appropriate metrics:
• Use the collected data to analyze the QoS performance of the WSN.
• Generate graphs, plots, and statistics to visualize and interpret the results.
• Draw conclusions based on the evaluation of QoS metrics and how they relate to our research objectives.

3.1.7. Step 7: Iterate and Refine
• Depending on our findings, we repeat and refine the simulation to further investigate or optimize QoS in our WSN.

Cluster-Based Routing Protocols

Many recent articles have treated the impact of sink mobility in WSNs with multi-hop routing mechanisms [15,16]. In this type of routing, the nodes closest to the sink dissipate their energy rapidly, since they retransmit the data collected from the other sensors to the sink, which divides the network, isolates the sink, and creates energy holes. The use of mobile sinks has considerably alleviated these concerns in terms of reliability, throughput, and consumed energy. However, the data delivery delay remains an issue because of the delay accumulated at each hop. Furthermore, this routing technique is not suitable for larger networks, since the required number of hops grows with the number of deployed nodes. In such a case, as the number of hops grows, the delivery delay increases and interference between packets also increases, which rapidly and significantly degrades the throughput [15,16].

Cluster-based routing protocols have shown good energy conservation and low data delivery latency [11][12][13][14]. For these reasons, in this study we investigate the effect of using multiple mobile sinks with a cluster routing paradigm in large-scale WSNs. We will introduce this routing technique with LEACH [14], P-LEACH [51], and EA-CRP [52]. Table 1 provides a brief comparison of the key features of the three routing protocols to be studied.

Sink Mobility Patterns

To ensure results that are unbiased towards any mobility model, the authors of this study chose to compare static sinks outside the RoI with four random mobility models:
1. Random WayPoint Mobility Model (RWP) [53,54]: A model that includes pause times between changes in direction and speed.
2. Random Walk Mobility Model (RW) [53,54]: A simple mobility model based on random directions and speeds.
3. Random Direction Mobility Model (RD) [53,54]: A model that forces MNs to travel to the edge of the simulation area before changing direction and speed.
4. Gauss-Markov Mobility Model (GM) [53,54]: A memory model that uses one tuning parameter to vary the degree of randomness in the mobility pattern (a minimal sketch of the Gauss-Markov update is given after this list).
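As referenced in item 4, the Gauss-Markov update can be written in a few lines. The sketch below is a generic illustration of the model, not code from the Castalia simulation; the mean speed, mean direction and time step are assumed parameters.

```python
import math
import random

def gauss_markov_step(x, y, speed, direction, alpha,
                      mean_speed, mean_direction, dt=1.0):
    """One Gauss-Markov update: the new speed/direction depend on the previous
    values, the long-run means, and a gaussian random term. alpha in [0, 1]
    tunes the memory: alpha = 0 gives a memoryless random walk, alpha = 1
    gives straight-line motion."""
    speed = (alpha * speed
             + (1 - alpha) * mean_speed
             + math.sqrt(1 - alpha ** 2) * random.gauss(0, 1))
    speed = max(speed, 0.0)                       # practical tweak: no negative speed
    direction = (alpha * direction
                 + (1 - alpha) * mean_direction
                 + math.sqrt(1 - alpha ** 2) * random.gauss(0, 1))
    x += speed * math.cos(direction) * dt
    y += speed * math.sin(direction) * dt
    return x, y, speed, direction
```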
The basis for selecting these models for evaluating network Quality of Service (QoS) metrics is that they support unpredictable and random changes, as in real-time scenarios. Subsequently, we will delve into how each of these models operates. The functioning of each mobility model is described in the article [21].

Figure 1a shows an example of a traveling path of a sink, which begins in the center of the RoI, using the RW Mobility Model. At each point, the sink randomly chooses a direction between 0 and 2π and a speed between 0 and 10 m/s. Every 60 s, the sink changes direction and speed. This model is a memoryless mobility pattern because it retains no knowledge concerning its past locations and speed values [14].

Figure 1c shows an example path of a sink, which begins in the center of the RoI, using the RD Mobility Model. In this model, the sink chooses a random direction to travel, similar to the RW Mobility Model. The sink then travels to the border of the simulation area in that direction. Once the RoI boundary is reached (represented by dots in the figure), the sink pauses for a specified time, chooses another angular direction (between 0 and 180 degrees) and continues the process.

Figure 1d illustrates an example traveling pattern of a sink using the GM Mobility Model; the sink begins its movement in the center of the RoI and moves for 1000 s. The Gauss-Markov Mobility Model was designed to adapt to different levels of randomness via one tuning parameter. Initially, the sink is assigned a current speed and direction. At fixed intervals, n, movement occurs by updating the speed and direction. Specifically, the value of speed and direction at the nth instance is calculated based on the value of speed and direction at the (n−1)th instance and a random variable. This model can eliminate the sudden stops and sharp turns encountered in the RW Mobility Model by allowing past velocities (and directions) to influence future velocities (and directions).

Simulation Scenarios and Evaluation

In WSNs with cluster-based routing [55][56][57], the shorter the distance separating the sink from the CH, the more power is conserved and the lower the packet collection latency. Using numerous sinks instead of a single sink can significantly decrease these distances: with multiple sinks, every cluster head can communicate with the nearest sink. Therefore, it is possible to enhance QoS performance by deploying multiple sinks [58] or relay nodes [59,60] to gather sub-regional data. This technique has been proven effective in reducing distances and improving overall performance; hence, it is a preferred solution for achieving better quality-of-service performance. As a result, the primary sensor network is partitioned into smaller networks with a low diameter. These sub-networks consist of sensors and a static or mobile sink, forming a cluster. Cluster heads transfer information to the respective sink of the corresponding sub-region.
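The "each cluster head reports to its nearest sink" rule described above is straightforward to express. The helper below is an illustrative sketch with hypothetical data structures; it is not code from the simulated protocols.

```python
import math

def assign_ch_to_nearest_sink(ch_positions, sink_positions):
    """Map each cluster head to the closest (static or mobile) sink.
    Positions are (x, y) tuples in metres; dictionaries are keyed by node id."""
    assignment = {}
    for ch_id, (cx, cy) in ch_positions.items():
        best_sink, best_dist = None, float("inf")
        for sink_id, (sx, sy) in sink_positions.items():
            d = math.hypot(cx - sx, cy - sy)
            if d < best_dist:
                best_sink, best_dist = sink_id, d
        assignment[ch_id] = (best_sink, best_dist)
    return assignment
```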
The study's initial scenario will focus on a simulation of a field measuring 400 m × 400 m, hosting eight hundred nodes with random positions. The static sink used in this scenario is situated beyond the RoI. Further, the simulation will be repeated four times using a mobile sink, which follows one of the four designated mobility models (Figure 2a). Another scenario implies the same RoI and number of nodes, but the sensing field is split into two 400 m × 200 m sections. Each half will be controlled, once with a static sink positioned beyond the supervised RoI, and four times with a mobile sink that follows one of the four mobility models, so we will have two fixed sinks and two mobile sinks, as illustrated in Figure 2b. In the third scenario, the same principle as the second scenario applies, except that we divide the main field into four subfields of 200 m × 200 m. In this case, we will have four fixed and four mobile sinks, as illustrated in Figure 2c. Finally, for the fourth scenario, we keep the same principle as the other scenarios, but this time we divide the initial field into eight subfields of 200 m × 100 m. In this case, we will have eight static and eight mobile sinks, as shown in Figure 2d.

The assessment analysis of the introduced system relies on various assumptions, including the supposed stationary state of all deployed sensor nodes, coupled with their location awareness. The utilization of a cluster-based routing protocol has also been considered, where only CHs are authorized to transfer gathered information to the designated sink. The latter changes its position through the network following a designated mobility model that facilitates data collection from the respective cluster heads. Each sensor node uniformly issues a standard amount of data per unit of time (i.e., one packet per second) with an equivalent data length of 100 bytes.
The transmission energy of each sensor is adapted to the relative distances of the adjacent nodes. The mobile sink is supposed to hold enough power reserves to communicate and relocate at any point within the network. The static base stations in each scenario are positioned 20 m beyond the field margins. According to the operating principles of the three hierarchical protocols described above, an election phase is planned each period (i.e., 20 s for LEACH) to choose a new CH. Additionally, all sensor nodes have identical communication capacity and computing resources. Table 2 highlights the relevant parameters considered. The metrics chosen for evaluation are as follows:
• Energy consumption (the energy consumed by all the sensor nodes)
• Throughput (the total data collected by the sink)
• Data delivery rate (reliability)
• Delay or packet latency

To achieve accurate simulation results, we will use four random mobility models of the sink (GM, RW, RWP and RD) and a static model (a static sink located outside the region of interest; Fixed Sink) with three cluster-based routing protocols (LEACH, P-LEACH, and EA-CRP).

Energy Consumption Evaluation

In WSNs, one can never talk about network performance evaluation without studying the major concern of energy consumption. For that reason, in this simulation scenario, we compared the energy consumed by all sensor nodes (network energy) by varying the number of mobile and static sinks that monitor the network, to study the impact of using multiple mobile sinks on energy consumption in large-scale WSNs (LS-WSNs).
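The periodic CH election mentioned above (every 20 s in the LEACH case) follows, in classical LEACH, a probabilistic threshold rule. The sketch below illustrates that standard rule only; it is generic LEACH, not the exact election logic of the simulated P-LEACH or EA-CRP variants, and the node/eligibility structures are assumptions.

```python
import random

def leach_threshold(p, r):
    """Classical LEACH threshold T(n) = p / (1 - p * (r mod 1/p)),
    where p is the desired fraction of cluster heads and r is the round index."""
    epoch = int(round(1.0 / p))
    return p / (1.0 - p * (r % epoch))

def elect_cluster_heads(nodes, eligible, p, r, rng=random):
    """One election round: every node that has not yet served as CH in the
    current epoch draws a uniform random number and becomes CH if it falls
    below the threshold."""
    t = leach_threshold(p, r)
    heads = [n for n in nodes if eligible[n] and rng.random() < t]
    for n in heads:
        eligible[n] = False   # a node serves as CH at most once per epoch
    return heads
```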
The simulation results illustrated in Figure 3a show that the use of a single mobile sink, regardless of the model, decreases the average energy consumed compared to a fixed sink, by −12.5% for RW up to −19% for RD, with the best result obtained with the RD mobility model and the P-LEACH routing protocol, a power gain of −20%.

The simulation results illustrated in the other figures show that the use of multiple mobile sinks decreases the average energy consumption compared to a single static sink (Figure 3a):
• By using two mobile sinks (Figure 3b), from −25.6% for RW up to −31.3% for RD, with the best result obtained with the RD model and the P-LEACH routing protocol, −36.1%.
• By using four mobile sinks (Figure 3c), from −44% for RW to −48.3% for RD, with the best result obtained with the RD model and the P-LEACH routing protocol, −57.5%.
• By using eight mobile sinks (Figure 3d), from −48% for RW to −52% for RD, with the best result obtained with the RD model and the P-LEACH routing protocol, −59.5%.

It can be drawn from the previous results that the best mobility model is RD, which offers the best energy conservation, around 60% less with eight mobile sinks compared to a single fixed sink. More precisely, the sink with the RD mobility model moves towards the clusters of sensor nodes, reducing the distance between the sink and the CH and consequently the transmission energy consumption.

On the other hand, by comparing the different energy scenarios, we notice that using four mobile sinks gives results very close to those of eight mobile sinks. So, we can conclude that, in terms of profitability, the use of four mobile sinks establishes a good compromise between energy conservation and investment cost (the budget of mobile sinks), and we can obtain energy conservation of 57.5% less with only four mobile sinks using the RD model and the P-LEACH routing protocol. Since the consumed power is expected to increase rapidly as the communication range increases, utilizing a shorter transmission distance can significantly reduce the power consumed. This implies that the power conservation of the network will be higher, and its lifetime will be extended, as the subnet area becomes smaller. Nonetheless, it will incur a higher cost of deploying enough mobile sinks to cover the area.

Throughput Evaluation

When deploying WSN applications with service quality constraints, the amount of data collected by the sink becomes important to consider. To address this, in the second phase of the assessment, we analyzed the packets received by the sinks of the network (throughput) with multiple mobile and static sinks. In this simulation scenario, we compared the amount of data collected (throughput) by varying the number of mobile and static sinks monitoring the network, to study the impact of multiple mobile sinks on the throughput in LS-WSNs.
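All of the relative figures quoted in this and the following subsections (e.g., −20% energy, +73% throughput) are signed relative changes with respect to the single-static-sink baseline. The convention assumed here is the usual one; the paper reports only the resulting percentages.

```python
def relative_change_percent(value, baseline):
    """Signed relative change with respect to the single-static-sink baseline:
    negative values mean less energy consumed, positive values mean higher
    throughput or reliability (assumed convention)."""
    return 100.0 * (value - baseline) / baseline
```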
Throughput Evaluation
When deploying WSN applications with service-quality constraints, the amount of data collected by the sink becomes important to consider. To address this, in the second phase of the assessment we analysed the packets received by the sink (throughput) with multiple mobile and static sinks. In this simulation scenario, we compared the amount of data collected (throughput) while varying the number of mobile and static sinks monitoring the network, in order to study the impact of multiple mobile sinks on throughput in LS-WSNs.

The simulation results illustrated in Figure 4a show that using a single mobile sink, whatever the mobility model, increases the average throughput compared with a fixed sink, from +8.6% for RWP up to +14% for RD; the best result, a throughput gain of +15.5%, is obtained with the RD model and the EA-CRP routing protocol.

The simulation results illustrated in the other figures show that using multiple mobile sinks increases the average throughput compared with the single static sink of Figure 4a:
• With two mobile sinks (Figure 4b), from +25% for RWP up to +31% for RD; the best result, +38%, is obtained with the RD model and the EA-CRP routing protocol.
• With four mobile sinks (Figure 4c), from +49% for RWP to +57% for RD; the best result, +73%, is obtained with the RD model and the EA-CRP routing protocol.
• With eight mobile sinks (Figure 4d), from +60.5% for RWP to +68.5% for RD; the best result, +90.2%, is obtained with the RD model and the EA-CRP routing protocol.

It can be deduced from these results that the best mobility model is RD, which offers almost double the throughput of a single fixed sink when eight mobile sinks are used. More precisely, an RD sink moves through the network by realistically approaching the clusters of sensor nodes, reducing the distance between the sink and the CHs, consequently reducing collisions and packet retransmissions and increasing the throughput.

On the other hand, comparing the throughput scenarios, we note that using four mobile sinks gives results very close to those of eight mobile sinks. We can therefore conclude that, in terms of profitability, four mobile sinks establish a good compromise between throughput and investment cost: a throughput gain of +73% is already obtained with only four mobile sinks using the RD mobility model and the EA-CRP routing protocol.

Reliability Evaluation
In this simulation scenario, we compared the network's reliability while varying the number of mobile and static sinks monitoring the network, in order to study the impact of using multiple mobile sinks on reliability in LS-WSNs.

The simulation results illustrated in Figure 5a show that using a single mobile sink, whatever the mobility model, increases the average reliability compared with a fixed sink, from +17% for RWP up to +20% for RD; the best result, a gain of +22%, is obtained with the RD model and the EA-CRP routing protocol.
The simulation results illustrated in the other figures show that using multiple mobile sinks increases the average reliability of the network compared with the single static sink of Figure 5a:
• With two mobile sinks (Figure 5b), from +43% for RWP up to +52% for RD; the best result, +65%, is obtained with the RD model and the EA-CRP routing protocol.
• With four mobile sinks (Figure 5c), from +73% for RWP up to +88% for RD; the best result, +105%, is obtained with the RD model and the EA-CRP routing protocol.
• With eight mobile sinks (Figure 5d), from +91.5% for RWP up to +110% for RD; the best result, +140%, is obtained with the RD model and the EA-CRP routing protocol.

It can be deduced from these results that the best mobility model is RD, which offers more than double the reliability of a single fixed sink for the whole network when eight mobile sinks are used. More precisely, an RD sink moves through the network by realistically approaching the sensor-node clusters, reducing the distance between the sink and the CHs, consequently reducing collisions and packet retransmissions and increasing the network reliability, especially with the EA-CRP routing protocol, which combines two routing techniques, clustering and multi-hop.

On the other hand, comparing the reliability scenarios, we note that using four mobile sinks gives results very close to those of eight mobile sinks. We can therefore conclude that, in terms of profitability, four mobile sinks establish a good trade-off between network reliability and investment cost: a reliability gain of +105% is already obtained with only four mobile sinks using the RD mobility model and the EA-CRP routing protocol.

Packets Latency Time (End-to-End Delay) Evaluation
In this simulation scenario, we compared packet latency (the percentage of packets that arrive at the sink with a delay of less than 1 ms) while varying the number of mobile and static sinks monitoring the network, in order to study the impact of using multiple mobile sinks on packet latency in LS-WSNs.
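Both reliability and the fast-packet percentage are ratio metrics derived from per-packet statistics. The sketch below shows one way to compute them from a simple packet log; the record format, field names and sample values are hypothetical and do not correspond to the Castalia trace format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PacketRecord:
    """One generated packet: delivery delay in seconds, or None if it was lost."""
    delay: Optional[float]

def reliability(log: List[PacketRecord]) -> float:
    """Data delivery rate: fraction of generated packets that reached a sink."""
    delivered = sum(1 for p in log if p.delay is not None)
    return delivered / len(log)

def fast_packet_share(log: List[PacketRecord], threshold: float = 1e-3) -> float:
    """Fraction of generated packets delivered with a delay below `threshold` (1 ms)."""
    fast = sum(1 for p in log if p.delay is not None and p.delay < threshold)
    return fast / len(log)

# Hypothetical log: three delivered packets (two of them fast) and one lost packet.
log = [PacketRecord(0.4e-3), PacketRecord(0.9e-3), PacketRecord(2.5e-3), PacketRecord(None)]
print(reliability(log))        # 0.75
print(fast_packet_share(log))  # 0.5
```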
The simulation results illustrated in Figure 6a show that using a single mobile sink, whatever the mobility model, increases the mean percentage of fast packets (packets with a delay of less than 1 ms) compared with a single fixed sink, from +8% for RWP up to +28% for RD; the best result, +73%, is obtained with the RD model and the LEACH routing protocol.

The simulation results shown in the other figures show that using multiple mobile sinks increases the mean percentage of fast packets compared with the single static sink of Figure 6a:
• With two mobile sinks (Figure 6b), from +21% for RWP up to +43% for RD; the best result, +99.5%, is obtained with the RD model and the LEACH routing protocol.
• With four mobile sinks (Figure 6c), from +43% for RWP up to +69% for RD; the best result, +153%, is obtained with the RD model and the LEACH routing protocol.
• With eight mobile sinks (Figure 6d), from +51% for RWP to +78% for RD; the best result, +175%, is obtained with the RD model and the LEACH routing protocol.

We can draw from these results that the best mobility model is RD, which yields nearly triple the share of fast packets (2.73 times that of a single fixed sink) when eight mobile sinks are used. We therefore conclude that using multiple mobile sinks enhances packet latency, particularly in smaller areas. More precisely, an RD sink moves through the network by realistically approaching the sensor-node clusters, which avoids packet retransmissions, decreases the distance between the sink and the CHs, and consequently increases the number of packets that reach the sinks with low delays, thanks to the single-hop routing technique used by LEACH.

On the other hand, comparing the latency scenarios, we note that using four mobile sinks gives results very close to those of eight mobile sinks. We can therefore conclude that, in terms of profitability, four mobile sinks establish a good trade-off between latency and the cost of multiple mobile sinks: a gain of +153% in fast packets is already obtained with only four mobile sinks using the RD model and the LEACH routing protocol.
In conclusion, when the number of static or mobile sinks increases, their proximity to the sensor nodes increases and fewer nodes are associated with each sink. This reduces packet interference and keeps the sink buffers from saturating, which avoids packet retransmissions and subsequently improves the number of packets that reach their sinks without additional delay.

Comparing the effect of static and mobile sinks, we can deduce that mobile sinks offer gains of −59.5% in energy consumption, +90.2% in throughput, +140% in reliability and +175% in the share of fast packets with eight mobile sinks, RD being the best mobility model for our scenarios. However, eight mobile sinks offer only a marginal gain over four mobile sinks, while incurring the supplementary expense of the additional devices. Given the high cost of eight mobile sinks, four mobile sinks with the RD mobility model are therefore sufficient; anything more would be wasteful.

Limitations and Potential Solutions
Conducting a simulation study to improve QoS in large-scale WSNs using multiple mobile sinks and random mobility models is a valuable approach. However, like any research methodology, it has limitations. Table 3 summarizes them together with potential solutions or areas for future research:
• Investment cost of multiple sinks. Using several mobile sinks in simulation can easily improve the QoS and gives good results; in the real world, however, it carries a significant investment cost. A potential solution is a hierarchical routing protocol with random-mobility awareness of the sinks, allowing dynamic and adaptive coordination between sinks and CHs to optimize routes and reduce the number of mobile sinks and the associated costs while respecting important QoS metrics such as delay and coverage.
• Optimization of mobility patterns. Random mobility models might not adequately represent the movement patterns of mobile sinks, and optimizing the behaviour of these models can be challenging. A potential solution is to conduct empirical studies that optimize the mobility models with machine learning, so that they accurately reflect the movement of sinks in realistic WSN applications with QoS constraints.

Conclusions
This paper investigated the impact of using multiple mobile sinks on network energy efficiency and QoS metrics using a cluster-based routing approach and random mobility patterns. More specifically, this type of protocol uses hierarchical routing, which offers good results in energy conservation and latency. Moreover, the random mobility models traverse the network, in the context of a simulated battlefield observation where fast and unexpected events recur, by getting closer to the CHs, thereby reducing power consumption during the transmission phase, reducing delays during data collection and increasing the number of packets collected.

The simulation results obtained demonstrate that the Random Direction mobility model with four sinks has a significant impact on power consumption and QoS metrics. In particular, the EA-CRP and P-LEACH protocols achieve a significant improvement in energy consumption, throughput and reliability, while the latency is better with the LEACH protocol.

In addition, the simulation results show that RD is the most suitable model for LS-WSNs because it maintains good performance in terms of power consumption and all the QoS criteria studied over the large supervised RoI, and that the number of mobile sinks can be optimized according to the real-time constraints of the WSN application and the allocated budget.
Following the in-depth discussion above of the limits of the current approach, regarding certain uncertainties in the operation of the mobility models used, we plan for future work to, first, investigate the use of mobility traces collected from real-world deployments to create more accurate random mobility models; secondly, to consider a hierarchical routing protocol with random-mobility awareness of the sinks, in order to reduce the number of mobile sinks and the associated costs while respecting QoS; and finally, to optimize the mobility models so that they accurately reflect the movement of sinks in realistic WSN applications with QoS constraints. The proposed model may also be applied to numerous real-world engineering problems [61,62].

3.1.1. Step 1: Install Castalia
• Download and install the Castalia simulator v3.2 and the OMNeT++ framework v5.0 according to the installation instructions provided on the Castalia website [50].

3.1.2. Step 2: Create the Simulation Scenario
• Define the geographical area or environment where the WSN will be deployed.
• Define the number and initial positions of SNs and the static sink in the network.
• Define the random or deterministic deployment strategy.
• Define the mobility patterns of the sinks.

3.1.3. Step 3: Configure Simulation Parameters

Figure 1. Traveling patterns of a mobile sink using (a) the Random Walk MM (RW), (b) the Random WayPoint MM (RWP), (c) the Random Direction MM (RD) and (d) the Gauss Markov MM (GM).

Figure 1b shows an example of a traveling path of a sink, starting in the center of the RoI, using the RWP mobility model. The movement pattern of a sink using the RWP mobility model is similar to the RW mobility model if the pause time is zero and [min-speed, max-speed] = [speed-min, speed-max]. Figure 1c shows an example path of a sink, starting in the center of the RoI, using the RD mobility model. In this model, the sink chooses a random direction to travel.
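To make the Random Direction pattern concrete, the following is a minimal sketch of a single RD leg under the common interpretation of the model: the sink draws a random heading, travels in a straight line until it reaches the RoI boundary, then draws a new heading. The field size and speed range are illustrative placeholders, not the values used in our simulations.

```python
import math
import random

FIELD = 100.0  # m, illustrative square RoI side length

def rd_step(x: float, y: float, speed_min: float = 1.0, speed_max: float = 5.0):
    """One Random Direction move: travel along a random heading until the RoI boundary."""
    while True:
        theta = random.uniform(0.0, 2.0 * math.pi)
        dx, dy = math.cos(theta), math.sin(theta)
        # Distance to the nearest field boundary along the chosen heading.
        candidates = []
        if dx > 0: candidates.append((FIELD - x) / dx)
        if dx < 0: candidates.append((0.0 - x) / dx)
        if dy > 0: candidates.append((FIELD - y) / dy)
        if dy < 0: candidates.append((0.0 - y) / dy)
        t = min(candidates)
        if t > 1e-9:  # re-draw headings that would immediately leave the field
            break
    speed = random.uniform(speed_min, speed_max)
    return x + t * dx, y + t * dy, t / speed  # new position and time spent moving

# Example: a few consecutive RD legs starting from the centre of the RoI.
x, y = FIELD / 2, FIELD / 2
for _ in range(3):
    x, y, dt = rd_step(x, y)
    print(f"moved to ({x:.1f}, {y:.1f}) in {dt:.1f} s")
```

A known property of RD is that, because every leg ends on the boundary, the sink does not concentrate around the centre of the field the way RWP tends to, which helps it visit clusters spread over a large RoI.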
Figure 2. Deployment of static and mobile sinks around the RoI: (a) a single sink for the entire network, (b) two sinks for two sub-networks, (c) four sinks for four sub-networks, (d) eight sinks for eight sub-networks.

Figure 3. Node energy consumption with different routing protocols using multiple static and mobile sinks: (a) a single sink for the entire network, (b) two sinks for two sub-networks, (c) four sinks for four sub-networks, (d) eight sinks for eight sub-networks.

Figure 4. Total data collected by multiple static and mobile sinks with different routing protocols: (a) a single sink for the entire network, (b) two sinks for two sub-networks, (c) four sinks for four sub-networks, (d) eight sinks for eight sub-networks.

Figure 5. Network reliability with different routing protocols using multiple static and mobile sinks: (a) a single sink for the entire network, (b) two sinks for two sub-networks, (c) four sinks for four sub-networks, (d) eight sinks for eight sub-networks.

Figure 6. Packet latency with different routing protocols using multiple static and mobile sinks: (a) a single sink for the entire network, (b) two sinks for two sub-networks, (c) four sinks for four sub-networks, (d) eight sinks for eight sub-networks.

• Build and run the simulation using the OMNeT++ IDE or command-line tools as per the Castalia documentation [50].
• Monitor and collect the simulation results, which include the QoS metrics defined in Step 4.
• Select the specific QoS metrics to evaluate based on the research goals. Common QoS metrics in WSN simulations include packet delivery ratio (reliability), end-to-end delay (latency), throughput, network lifetime and coverage.

Table 1. Brief comparison between cluster-based routing protocols (CH selection method):
• LEACH: each node draws a random number between zero and one and compares it with a threshold to decide whether it becomes the CH.
• P-LEACH: the node with the highest battery capacity becomes the cluster-centre CH.
• EA-CRP: a weight function of energy and distance is calculated for each node; the node with the highest weight-function value becomes the CH.

Table 2. Parameters of simulations.

Table 3. Limitations and potential solutions.